Incentives and evolution: ALIFE 2023 - Day 2

Caffeinated and ready to fight jet lag on Day 2 of ALIFE!

ALIFE 2023 is the conference's first in-person edition since 2019, for obvious reasons. COVID-19 has weighed heavily on the field and is inspiring waves of new research aimed at understanding why the world failed to prevent a pandemic and what we can do better next time.

Multi-Agent Simulations are useful tools to predict the effects of public policies. In the last three years, with the concerns around the COVID-19 pandemic, several simulations were developed to understand the effects of lockdown, travel, etc. Even before that, MAS systems were used to plan disaster evacuation policies, transit policies, and many others…[Our] simulator considers how human mobility (pedestrian, public transportation, private transportation) interacts with large-scale events (natural disasters, entrance examinations, pandemics).
— Shiyu Jiang and his coauthors

Each death from COVID-19 was, in some ways, a failure to apply collective intelligence (CI). Better modeling, participatory policymaking, and community engagement could likely have 1) increased trust in public health agencies, 2) suggested less burdensome and more cost-effective interventions, and 3) supported better preventative policy design years before the world locked down. Today, we explored how CI can be used to design better institutional incentives that lead to better decisions.

Our most important institution this week – Hokkaido University.

Three Key Insights

Incentive structures

A big component of CI involves understanding incentive structures for multi-agent behavior and interaction. We need to solve cooperation problems like the prisoner’s dilemma to enable productive teamwork within any group – humans or AIs!
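For readers who haven’t met the prisoner’s dilemma, here is a minimal sketch in Python; the payoff numbers are my own illustrative choices, not figures from any talk:

```python
# Illustrative prisoner's dilemma payoffs, keyed by (my_move, their_move).
# Each entry is (my_payoff, their_payoff); the values are hypothetical.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection pays more no matter what the other agent does, so both agents
# defect and earn 1 each instead of the 3 they could earn by cooperating.
for move in ("cooperate", "defect"):
    print(f"If the other agent plays {move}, I should play {best_response(move)}")
```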

Commitment

One technology we can use to develop cooperation is commitment – sacrificing some of our own options to change the incentives in a situation. Commitments are one way to establish institutional rewards and punishments, which reward compliant behavior and penalize noncompliant behavior so that agents can trust each other.
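To see how a commitment can flip those incentives, here is a rough extension of the sketch above: an agent posts a binding pledge that imposes a penalty on itself if it later defects. The penalty size is again a made-up number:

```python
# Same hypothetical payoffs as above, plus an institutional penalty charged
# to an agent who pledged to cooperate and then defects anyway.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
PENALTY = 4  # illustrative cost of breaking the commitment

def committed_payoff(my_move, their_move):
    """My payoff once the commitment penalty is applied to my own defection."""
    base = PAYOFFS[(my_move, their_move)][0]
    return base - PENALTY if my_move == "defect" else base

def best_response(their_move):
    return max(("cooperate", "defect"),
               key=lambda my_move: committed_payoff(my_move, their_move))

# Once the penalty outweighs the temptation to defect, cooperation becomes
# the dominant strategy for a committed agent.
for move in ("cooperate", "defect"):
    print(f"If the other agent plays {move}, I should play {best_response(move)}")
```

Giving up the option to defect cheaply is exactly what makes the pledge credible to the other agent.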

Rewards vs. punishments

When trying to encourage commitment, rewards are far more effective than punishments. And surprisingly, having more noise (unhelpful information) during the pre-commitment stage improves compliance and commitment – perhaps by partially obscuring the benefits of making a more selfish choice.

Dinner with DeepMind’s Rory Grieg and the University of Sussex’s Simon MacGregor, whom I met on Day 1.

Three Faces of ALIFE

Tom Willkens is a Ph.D. candidate studying evolutionary computation at Brandeis University. Last year at ALIFE, he published “Evolving Unbounded Neural Complexity in Pursuit-Evasion Games,” which explores how arms races between neural agents in pursuit-evasion games can drive open-ended innovation, shedding light on how open-ended learning could lead to more capable AI.

Thilina Heenatigala advises the Japanese government on astronomy, artificial intelligence, and public engagement issues. Originally from Sri Lanka, he is a gifted science communicator dedicated to ensuring that the public will share the benefits of major investments in public science. 

When given the chance to choose a cyberpunk aesthetic for your demo, you should take it.

Ane Kristine Espeseth is an up-and-coming Norwegian cognitive scientist who has studied at top universities worldwide. Currently, she is determining her Ph.D. research agenda related to human-AI interaction; I’m very excited to keep tabs on her research!

Two Sessions I Enjoyed

I really enjoyed Shiyu Jiang’s presentation on “Simulating Disease Spread During Disaster Scenarios.” Modeling collective agent behaviors is an incredibly promising tool for developing pandemic prevention and response strategies – it allows us to identify core uncertainties and pivot points that would make or break policy decisions. Even a seemingly straightforward question like “How should we invest in mask-wearing to prevent community transmission?” is a multivariable analysis, requiring assumptions about compliance, spread rate, community travel patterns, and more. We need our best tools to help figure this out!
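To give a flavor of how many assumptions even a toy version of that question requires, here is a minimal agent-based sketch of my own; every parameter below (compliance rate, transmission probability, contacts per day) is a placeholder guess rather than a figure from the talk:

```python
import random

def simulate(mask_compliance, days=60, population=1000, seed=0):
    """Toy transmission model; all parameters are illustrative guesses."""
    rng = random.Random(seed)
    base_transmission = 0.03   # chance an unmasked contact spreads infection
    mask_reduction = 0.5       # each mask worn halves that chance
    contacts_per_day = 6
    agents = [{"masked": rng.random() < mask_compliance, "infected": i < 5}
              for i in range(population)]
    for _ in range(days):
        for carrier in [a for a in agents if a["infected"]]:
            for _ in range(contacts_per_day):
                contact = rng.choice(agents)
                p = base_transmission
                if carrier["masked"]:
                    p *= mask_reduction
                if contact["masked"]:
                    p *= mask_reduction
                if rng.random() < p:
                    contact["infected"] = True
    return sum(a["infected"] for a in agents)

# The "right" level of mask investment depends entirely on the compliance,
# spread rate, and contact patterns assumed above – change them and the
# policy conclusion can flip.
for compliance in (0.2, 0.5, 0.8):
    print(f"compliance {compliance:.0%}: {simulate(compliance)} of 1000 infected")
```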

Similarly, “Optimisation of hybrid institutional incentives for cooperation in finite populations” used the United Nations’ behavior related to public health as a case study for mapping incentive design in policymaking. Its takeaway? Predictions based on static models of agent behavior can be very misleading, even dead wrong. Good policy design needs to model the evolution of agents’ behaviors, including how they interact.
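The paper’s finite-population analysis is more sophisticated than anything I can reproduce here, but its core idea – let behavior evolve in response to incentives instead of freezing it – can be sketched with simple replicator-style dynamics. The payoffs and subsidy values below are invented for illustration:

```python
# Replicator-style sketch: the share of cooperators evolves over time instead
# of being assumed fixed. All numbers are illustrative, and this infinite-
# population toy is far simpler than the paper's finite-population model.

def avg_payoffs(x, subsidy):
    """Average payoffs to cooperators and defectors when a fraction x cooperates.

    The base game is a prisoner's dilemma; `subsidy` is an institutional
    reward paid to cooperators."""
    coop = 3 * x + 0 * (1 - x) + subsidy
    defect = 5 * x + 1 * (1 - x)
    return coop, defect

def evolve(subsidy, x0=0.5, steps=200, rate=0.1):
    """Iterate the replicator update and return the long-run cooperator share."""
    x = x0
    for _ in range(steps):
        coop, defect = avg_payoffs(x, subsidy)
        x += rate * x * (1 - x) * (coop - defect)   # replicator dynamics
        x = min(max(x, 0.0), 1.0)
    return x

# A static snapshot of 50% cooperation says nothing about where the population
# ends up once behavior responds to the incentive.
for subsidy in (0.0, 1.5, 3.0):
    print(f"subsidy {subsidy}: long-run cooperator share ≈ {evolve(subsidy):.2f}")
```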

A serene walk home at the end of the day.

Looking forward to tomorrow

Tomorrow morning, mathematician David Wolpert is giving a keynote on his work in complexity science. David is a professor at the Santa Fe Institute, where I spent time earlier this year – and I’ve already drunk the Kool-Aid on the power of complexity science to explain collective intelligence. It’s going to be brilliant.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/