Good futurism, bad futurism: A global tour of augmented collective intelligence

Five years ago

As a bright-eyed undergrad, I went to a futurism conference in Atlanta. The closing keynote speaker, a popular science TV host, answered questions from the audience. 

One hand went up: “When will the singularity come?”

“It’s already here,” the host said, smiling. “Think about pacemakers. We are already bionic.”  

Maybe he was saying that our future will be just like our present: mundane but remarkable in its own way, marked by incremental gains, painful regulation, and a lot of work.

Or maybe not. A few minutes later, with stars in his eyes, he proclaimed, “The robots will inherit the earth. But that’s okay because they will be our children!” 

Time to summon the collective

Since that conference, several tech bubbles have boomed and busted. But I never forgot that guy. How are we supposed to catch a glimpse of our future as humans? Who should we listen to, and how much? 

At the time, questions about AI's future felt like brain teasers. Today, they're more serious. People are rightly factoring AI progress into decisions about majors, careers, when to retire, and sometimes when (or whether!) to have a second child. And we have no idea who to listen to.

This summer, I’ve traveled around the world – Munich, Portland, Sapporo, Montreal – to see whether collective intelligence (CI) can help us answer that question. Here’s what I learned.

With many eyes, all bugs are shallow. 

Collective intelligence, also called the “wisdom of crowds,” is an emergent ability of a group to “find more or better solutions…than would be found by its members working individually.” There are three critical applications of CI to understanding our future with AI: prediction markets, governance, and open-source AI.

Prediction markets aggregate forecasts – quantified estimates about future events – creating a collective view of a group’s expected outcomes. This process improves collective decision-making by surfacing disagreements, increasing the likelihood of correct guesses, and revealing which “expert” is most often correct. The effect is so robust that even averaging multiple forecasts from the same person improves accuracy. In Munich, I met scholars from Rutgers and Penn State who have begun using hybrid human-AI prediction markets to benefit from both groups’ strengths – we call this augmented collective intelligence (ACI).
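To make the aggregation idea concrete, here is a minimal sketch with invented forecasters, probabilities, and outcomes. Real prediction markets aggregate through prices and trades rather than simple averages, so treat this only as an illustration of why pooling forecasts – and scoring forecasters – helps:

```python
# A minimal sketch of forecast aggregation, using made-up data.
# Each forecaster gives a probability that an event happens; the
# "crowd" forecast is the simple mean, and Brier scores reveal
# which forecaster is most accurate over time.

import statistics

# Hypothetical probabilities for three past events (1 = it happened).
forecasts = {
    "ana":   [0.9, 0.2, 0.7],
    "ben":   [0.6, 0.4, 0.5],
    "chika": [0.8, 0.1, 0.9],
}
outcomes = [1, 0, 1]

def brier(probs, outcomes):
    """Mean squared error between forecasts and outcomes (lower is better)."""
    return statistics.mean((p - o) ** 2 for p, o in zip(probs, outcomes))

# The crowd's view: average the individual forecasts per event.
crowd = [statistics.mean(f[i] for f in forecasts.values())
         for i in range(len(outcomes))]

for name, probs in forecasts.items():
    print(f"{name:5s} Brier score: {brier(probs, outcomes):.3f}")
print(f"crowd Brier score: {brier(crowd, outcomes):.3f}")
```

A nice property of this kind of averaging: the crowd’s Brier score is guaranteed to be at least as good as the average individual’s, which is the statistical heart of the “wisdom of crowds.”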

Governance is collective decision-making. In Montreal, we discussed how the AI models driving most social change are not democratically governed – a fact that concerns civil rights scholars and ethicists everywhere. Democratic governance stems from the principle of affected interests: “Those who are affected by a decision-making process should have input into that decision-making process.” Taiwan; Bowling Green, Kentucky; and other governments use ACI tools like Polis to solicit community viewpoints and synthesize them into consensus-driven policy recommendations.

Polis can isolate disagreements in a group’s responses and identify areas of consensus where policies might be made.
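Polis’s actual pipeline is more sophisticated, but the general approach can be sketched in a few lines. Everything below is hypothetical – an invented vote matrix and off-the-shelf clustering – meant only to show the shape of the idea: group participants by how they vote, then flag statements every opinion group leans toward agreeing with.

```python
# A sketch of Polis-style consensus finding (not Polis's actual
# implementation): cluster participants by their votes, then look
# for statements that every opinion group tends to agree on.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = participants, columns = statements,
# +1 = agree, -1 = disagree, 0 = pass/unseen.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  0],
    [-1,  1,  1,  1],
    [-1,  1,  1,  1],
    [ 0,  1, -1,  1],
    [-1,  1,  0,  1],
])

# Group participants into opinion clusters based on voting patterns.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is a consensus candidate if every cluster's mean vote
# is positive -- i.e., each opinion group leans toward agreement.
for stmt in range(votes.shape[1]):
    means = [round(float(votes[groups == g, stmt].mean()), 2)
             for g in np.unique(groups)]
    if all(m > 0 for m in means):
        print(f"statement {stmt}: consensus across groups (means={means})")
    else:
        print(f"statement {stmt}: divides the groups (means={means})")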

Linus’s law – “given enough eyeballs, all bugs are shallow” – is one of the core ideas behind the open-source software (OSS) movement. For nearly 30 years, OSS communities have run distributed, self-governing projects, creating excellent learning labs for understanding collective problem-solving. In Portland, I met open-source AI developers aiming to unlock historic amounts of ACI for any use under the sun. This sounds promising at first glance, but let’s look deeper. Dual-use technologies, like AI, can be used to help or to harm. The same can be said of the printing press, stone tools, or fire. But open-source AI runs into something new, which I’m informally calling “the super-Ebola problem”: given that any population contains a percentage of antisocial people, how freely should we distribute tools that significantly reduce the resource barrier to making lethal bioweapons? CI is itself a dual-use technology; sometimes, the problem a crowd solves is “how to overwhelm your defenses.”

Trust calibration is a tightrope.

The hardest and most important task in ACI, or any human-AI interaction, is deciding how to delegate tasks. This is difficult for many reasons – humans have trouble estimating their own abilities, let alone those of a frequently updated black box. A new study of radiologists by Nikhil Agarwal et al. found that communication difficulties between humans and AI erased any gains that could have come from working together.
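A toy simulation makes the calibration point vivid. The numbers here are entirely invented – an 85%-accurate human, an AI whose true accuracy varies case by case, and a delegation rule that trusts the AI’s reported confidence – so this is a sketch of the failure mode, not a model of the radiology study:

```python
# A toy delegation rule, with made-up accuracies, to illustrate why
# calibration matters: route each case to the AI when its stated
# confidence clears a threshold, otherwise to the human.

import random

random.seed(0)

def simulate(n_cases=10_000, threshold=0.8, overconfidence=0.0):
    """Return team accuracy when the AI's reported confidence may be
    inflated by `overconfidence` relative to its true accuracy."""
    correct = 0
    for _ in range(n_cases):
        true_conf = random.uniform(0.5, 1.0)  # AI's actual chance of being right
        reported = min(1.0, true_conf + overconfidence)
        if reported >= threshold:
            correct += random.random() < true_conf   # delegate to AI
        else:
            correct += random.random() < 0.85        # human handles the case
    return correct / n_cases

print("calibrated AI:    ", simulate(overconfidence=0.0))
print("overconfident AI: ", simulate(overconfidence=0.15))
```

With honest confidence, the team beats the human working alone; inflate the AI’s reported confidence by fifteen points and cases get routed to the AI exactly where the human would have done better, dragging the team below either partner’s solo performance.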

In any relationship, there are three entities: you, the partner, and the relationship between you. In human-AI interaction, we call that third entity the interface. In Munich, I learned that intent is the most important thing for any interface to communicate. Understanding what the other agent is trying to accomplish, even when it fails, is a critical first step to building trust. Bringing designers into the process early makes it more likely that this will happen.

The seasons march on.

This fall, Abhishek Gupta and I are rolling our insights into a series of experiments. (If you’d like to be a human volunteer, send me a note!) We hope not just to understand the principles underlying ACI but to catch it in action in a hybrid human-AI team exercise. We want our legacy to be a set of stepping stones toward a greater understanding of human-AI teaming, its risks and benefits, and how responsible organizations can implement large language models (LLMs) in their daily work.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/