Normal accidents, artificial life, and meaningful human control
Lines are blurring between natural and artificial life, and we’re facing hard questions about maintaining meaningful human control (MHC) in an increasingly complex and risky environment.
Bing’s threats are a warning shot
We don’t understand how large language models (LLMs) work. When they threaten us, we should listen.
Banning ChatGPT Won’t Work Forever
Banning a tool reduces its use but won't stop it entirely; designing space for safe experimentation is a better approach.
Stop Ignoring Your Stakeholders
There’s a wide spectrum between ignoring stakeholders and delegating decisions to them. Visualizing levels of engagement as steps on a ladder helps leaders make better choices about who to include and when.
Writing problematic code with AI’s help
Humans trust AI assistants too much and end up writing less secure code. At the same time, they gain false confidence that the code they have written is well-functioning and secure.
Escaping pilot purgatory in Generative AI
Don’t get trapped in the frothy piloting phase of Generative AI without a clear exit to turn it into something tangible. Winners from these investments will deliberately escape pilot purgatory through a disciplined approach.
Instagram effect in ChatGPT
We only see the final, picture-perfect outputs from ChatGPT (and other Generative AI systems), which skews our understanding of its real capabilities and limitations.
A lot goes into those shareable trophies of taming ChatGPT into producing what you want: tinkering, rejected drafts, invocations of the right spells (I mean prompts!), and lessons gleaned from Twitter threads and Reddit forums. But those early efforts remain hidden, a kind of survivorship bias, and we are lulled into a false sense of confidence that these systems are all-powerful.
Wikipedia’s Balancing Act: A Tool for Collective Intelligence or Mass Surveillance?
In this paper, Liu explores how Collective Intelligence (CI) might face an untimely death, a “chilling effect,” when co-opted by mass surveillance mechanisms. Contributors to a CI system, like people in general, desire privacy. Policies such as the public tracking of edit histories on Wikipedia can feed intelligence analyses by federal agencies like the NSA, intruding on privacy and thus inhibiting participation.
Emergent Collective Intelligence from Massive-Agent Cooperation and Competition
This paper studies the emergence of artificial collective intelligence through massive-agent reinforcement learning. The authors provide evidence that collective intelligence can emerge from massive-agent cooperation and competition, producing behaviors that went beyond their expectations.
Be careful with ChatGPT
Existing Responsible AI approaches leave unmitigated risks in ChatGPT and other Generative AI systems. We need to evolve our approaches and refine our thinking.
Ethical challenges are only being exacerbated as experimentation increases. We are unearthing issues like the generation of very convincing scientific misinformation, biased images and avatars, hate speech, and more. How we integrate these systems into human organizations and empower ourselves to take action will be critical in determining whether we get ethical, safe, and inclusive uses out of them.
Let’s dive deeper into these areas and highlight why we must act now.
What should humans do next?
It’s perhaps time to reject the idea that humans are social and machines are asocial. As machine capabilities increase and human behaviors evolve, this mode of thinking is a vestige of the 2010s. To remain relevant in a society of humans+machines, we need a better understanding of what strengths each brings and how best to put them together.