Entropy, measurement, and diversity: ALIFE 2023 - Day 4

ALIFE Art Exhibit

In AI, something that excites and worries many researchers is “emergent capabilities” – a phenomenon observed in complex systems in which networks of interactions give rise to entirely new traits and abilities. We build and rely on complex systems in the first place to handle change, but if a system becomes too unpredictable, it stops being useful and may even be dangerous. This is a core problem in science: can we design predictable systems with unpredictable properties? Can we find simple rules and theories to explain complex phenomena without losing their most important parts?

Science fiction is very well suited to asking philosophical questions; questions about the nature of reality, what it means to be human, how do we know the things that we think we know.
— Ted Chiang

Beating the heat with Takashi Ikegami, a fellow member of our Augmented Collective Intelligence (ACI) Journal Club.

Three Key Insights

Emergence

Emergence is “neither trivial nor magic.” It doesn’t happen all at once or for a single reason – think of it as a combination of many facets, which can complement or contradict each other. It is a continuous process, not a binary one. To understand emergence, we need to identify each facet and get really good at quantifying it.

Measurement

Unsurprisingly, emergence is really difficult to measure. Observation often tells us little beyond correlation – “if this happens, then that apparently also happens.” A pressing area of future research is designing benchmarking procedures to catch the moments before, during, and after a change occurs. Our best measures so far may be Shannon entropy and information decomposition. As usual, solving the universe’s toughest questions requires a lot of math!
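As a toy illustration of the entropy idea (not any specific measure presented at the conference), Shannon entropy quantifies how unpredictable a system's observed states are. A minimal sketch, assuming we simply count discrete state labels:

```python
from collections import Counter
from math import log2

def shannon_entropy(observations):
    """Shannon entropy (in bits) of the empirical distribution of observations."""
    counts = Counter(observations)
    total = len(observations)
    # Sum -p * log2(p) over each observed state's empirical probability p.
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A system visiting four states uniformly is maximally unpredictable (2 bits);
# a system stuck in one state carries no information (0 bits).
print(shannon_entropy(["a", "b", "c", "d"]))  # → 2.0
print(shannon_entropy(["a", "a", "a", "a"]))  # → 0.0 (prints -0.0)
```

A rising or falling entropy trend in a system's state distribution is one crude signal that something is reorganizing – the kind of before/during/after change the benchmarking procedures above would need to catch.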

Diversity

Many of us value diversity from a perspective of justice and human values, but it also plays a critical role in the function of complex systems. In practice, increasing diversity in an environment gives you a wider range of information signals with two functions: first, it gives you a better chance at catching deceptive patterns in a landscape, and second, it brings wider perspectives to bear on solving hard problems.

Our best thinking happens outside!

Three Faces of ALIFE

Zach Laborde is a Ph.D. student in Neuroscience & Cognitive Science seeking to understand the brain better and apply those insights to AIs. He was the first person I met at ALIFE 2023 this year, and he works hard to build an inclusive community at ALIFE.

Trym Lindel is a frighteningly intelligent Ph.D. student in artificial intelligence in Oslo. He is an empathetic and interdisciplinary thinker with a background in psychology and hopes to use a better understanding of human interactions to improve discourse in the field of AI.

Alessandro di Stefano is a senior lecturer in computer science passionate about game theory, artificial intelligence, and solving the “black box” problem to make safer AIs. He’s a global citizen who has lived around the world and has excellent recommendations for first-time travelers to Japan.

Two Sessions I Enjoyed

Our field trip to Sapporo Art Park got all of us out of our conference rooms and into the gorgeous mountains of Hokkaido. We explored the park’s carefully designed ecosystems, shopped for traditional artisanal goods, and, best of all, experienced an artificial life-inspired interactive art exhibition at teamLab.

Experiencing wonder at teamLab Sapporo.

Simon McGregor, in my mind, wins the prize for “best presentation that put words to something that has been bugging me for months.” In “Is ChatGPT Really Disembodied?”, he challenged popular claims that large language models aren’t “intelligent” because they fail to meet embodiment criteria. The truth is…they do! Streams of neutrinos or our sense of smell are not “tangible,” but they are definitely physical. In the same way, AI systems interact with their environment in clearly physical but non-tangible ways. To claim otherwise, Simon says, is “digital dualism” – a pre-scientific fallback on claims that “I can’t feel it, so it isn’t real.”

Looking forward to tomorrow

Feasting together at the closing banquet.

It’s sad that ALIFE 2023 is closing just as I’m meeting so many brilliant thinkers and engaging with challenging ideas. I’m looking forward to the closing public event by Ted Chiang and to participating in a closing panel myself. Hopefully, from my position in industry, I can shed some light on how to close gaps between the critical research happening at ALIFE and everyday executive decision-making!

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/
Previous

Enabling collective intelligence: FOSSY Day 4

Next

Embodiment and emergence: ALIFE 2023 - Day 3