Controlling creations - ALIFE 2023 - Day 5

Addressing the 2023 ALIFE attendees at the closing panel.

The best movie ever made about artificial life is Jurassic Park. The score is timeless, the special effects still look good 30 years later, and it’s got Jeff Goldblum and Laura Dern. But more importantly, it nails the core problem in our field. If you are building a powerful new life form, you really need to keep it under control.

Innovators have always been beset by critics (see: the Spanish Inquisition, HOAs, Twitter mobs) asking, “What if this thing hurts people?” If you ask it too often, you get stuck in the Stone Age. But if you ask it too rarely, you might find your family chased by dinosaurs. 

Ask Jurassic Park’s Jeff Goldblum, as chaos theorist Ian Malcolm: “Don't you see the danger, John, inherent in what you're doing here? Genetic power is the most awesome force the planet's ever seen, but you wield it like a kid that's found his dad's gun.”

The advent of artificial life will be the most significant historical event since the emergence of human beings… We must take steps now to shape the emergence of artificial organisms; they have the potential to be either the ugliest terrestrial disaster or the most beautiful creation of humanity.
— ALIFE Proceedings, 1991

I want to be quite explicit here. The research community attending ALIFE with me is racing to evolve a super-life, using the best available insights from biology, machine learning, information science, and more. We are working to endow these systems with agency, autonomy, and maybe even consciousness. But we’re not talking much about ensuring they don’t break badly, and that needs to change.

In AI, “alignment” determines the whole industry’s value to society: no alignment, no great flourishing future. It is time for the Alife community to put our mighty collective intelligence toward the question of how to align artificial life. This Herculean effort requires cooperation across labs and disciplines. Fortunately, many Alifers already study cooperation in uncertain, dynamic systems. Our moment has come!

Three Key Insights

Vague definitions

Many of the most important concepts in Alife (and in artificial intelligence as well) remain poorly defined. Take intelligence, learning, consciousness, and creativity: these are very hard to observe and measure empirically, even where researchers agree on basic definitions. One AI researcher joked, “It's a Sisyphean task to ask AI researchers for conceptual hygiene.”

Night falls at Hokkaido University.

The need for a shared understanding

This dynamic matters for two reasons. First, in high-stakes policy development, reaching a shared understanding of terms is critical to designing rules that are coherent and consistent with our broader strategic goals; we may release a system with high levels of “intelligence,” “agency,” or even “consciousness” before we’ve agreed on what those terms mean at all. Developing precise, shared benchmarks for concepts like these will be essential. Second, we need to plan ahead for what we will do if the big C shows up sooner than expected.
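What might such a benchmark look like? As a toy illustration (my own sketch, not anything proposed at the conference, and every criterion name below is a hypothetical placeholder), a community could agree on a checklist of observable criteria and score systems against it:

```python
from dataclasses import dataclass

# Toy sketch of an operationalized "agency" benchmark.
# The criteria are hypothetical placeholders, not an agreed standard;
# the point is that scoring forces the community to define each term first.

@dataclass
class AgencyCriterion:
    name: str
    description: str
    passed: bool

def agency_score(criteria: list[AgencyCriterion]) -> float:
    """Fraction of the agreed-upon criteria that a system satisfies."""
    return sum(c.passed for c in criteria) / len(criteria)

observations = [
    AgencyCriterion("goal_persistence", "Pursues a goal across perturbations", True),
    AgencyCriterion("self_model", "Maintains a model of its own state", False),
    AgencyCriterion("environment_influence", "Acts on physical processes", False),
]

print(f"Agency score: {agency_score(observations):.2f}")  # -> 0.33
```

The hard part, of course, is agreeing on the criteria in the first place, which is exactly the definitional work described above.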

Ane Kristine Espeseth and Riversdale “Riv” Waldegrave discuss a shared passion for evolutionary models.

Agency

In the past, the AI community has often been too quick to ascribe agency to systems with impressive results. That’s understandable: agency is far easier to infer from externally visible behavior than from internal dynamics. Agency is exactly what many researchers and organizations strive to develop in Alife and AI systems, and it is also the most dangerous property to misjudge. What physical processes can an artificial system influence? Can it grow through open-ended evolution? Remember that software models are easy to copy and merge; given the capability to evolve, a “single” AI can quickly become a large collective.
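To see why that matters, here is a toy Python sketch (my own illustration, with made-up Agent and merge constructs) of how cheaply a “single” software agent can become a diverging, recombining population:

```python
import copy
import random

# Toy sketch: an "agent" reduced to a parameter vector plus a mutation step.
# Nothing here resembles a real AI system; it only illustrates the economics
# of replication and recombination for software.

class Agent:
    def __init__(self, params):
        self.params = params

    def clone(self):
        # Copying is essentially free, unlike biological reproduction.
        return Agent(copy.deepcopy(self.params))

    def mutate(self, rate=0.1):
        self.params = [p + random.gauss(0, rate) for p in self.params]

def merge(a, b):
    # "Merging" here is naive parameter averaging, a stand-in for the many
    # ways model weights or behaviors can be combined in practice.
    return Agent([(x + y) / 2 for x, y in zip(a.params, b.params)])

# One agent becomes a diverging population in a few lines...
seed = Agent([0.0] * 4)
population = [seed.clone() for _ in range(10)]
for agent in population:
    agent.mutate()

# ...and any two lineages can be recombined into a new variant.
hybrid = merge(population[0], population[1])
print(len(population), hybrid.params)
```

Nothing in the sketch is sophisticated; the point is that replication and recombination, which are expensive for biological life, are nearly free for software.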

Three Faces of ALIFE

John Wentworth is a prolific AI safety researcher. He seeks to understand and model agentic behavior and to identify obstacles to building robust AI systems aligned with human values.

Miguel Aguilera, an ALIFE 2023 organizer, is a complex systems scientist whose work Abhishek and I have admired from afar. His wide-ranging research spans self-organizing systems, collective identity, and biologically grounded evaluation of neural networks.

Ice cream tastes best on a hot day shared with friends.

Nora Ammann founded and directs “Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS),” an interdisciplinary project studying parallels between artificial and biological intelligent behavior. PIBBSS offers fellowships for promising researchers and is a great connection point for the field.

Two Sessions I Enjoyed

Science fiction author Ted Chiang and neuroscientist Anil Seth held a fireside chat open to the public, with more than 700 guests joining us online. In keeping with the event theme, Ghost in the Machine, the conversation focused on tricky questions of consciousness. Is consciousness substrate-independent, meaning it can emerge in biological, digital, or other environments? Or is it a biological phenomenon that is nearly impossible to generate in a less sophisticated artificial substrate?

One of Japan’s world-famous firework festivals took place tonight in Sapporo. Photo courtesy of Tanner Lund.

To close out ALIFE 2023, I took the stage as part of a panel led by Takashi Ikegami on how to bridge the gap between industry and cutting-edge research in AI and the life sciences. It was a privilege to share the stage with other brilliant thinkers and to address the community! I encouraged ALIFE attendees to invest in communications. Writing a brilliant research paper is like throwing a paper airplane over a wall: you can’t assume someone is on the other side waiting to catch it. For ALIFE insights to break through to the policymakers who need them most, our community must engage an army of technically knowledgeable, relationship-savvy communications pros.

Looking forward to tomorrow

Tomorrow, after a week of heavy thinking and study, I am headed to Tokyo, Kyoto, and Nara for vacation, where I plan to eat everything and walk everywhere. You will not hear a word from me. :)

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/