Hallucinating and moving fast

WHAT’S HAPPENING: "Move fast and break things" is broken, but we've all said that many times before. Instead, I believe we need to adopt a "move fast and fix things" approach. Given the rapid pace of innovation and its distribution across the many diverse actors in the ecosystem building new capabilities, it is unrealistic to expect course correction to happen at the same pace. Because course correction is a much harder and slower-yielding activity, negative consequences end up amplified before they can be addressed.

FOG OF WAR: What we need to do instead is think ahead about how the landscape of problems and solutions is going to evolve. For example, when we think about hallucinations in GenAI systems, it is unclear at the moment where and how they will manifest. That uncertainty hinders the adoption of GenAI-powered systems by companies that seek to offer safe and reliable outcomes to their customers, e.g., customer-service chatbots in financial services and other high-stakes scenarios.

UPGRADING APPROACHES: The solution, then, lies in better exploring the capability overhang: the latent capabilities, and attendant risks, in large-scale AI systems. One helpful technique is humane-holistic design (HHD), which incorporates ideas from systems thinking and complexity theory to more comprehensively illuminate the landscape of possibilities and risks in these systems, and to develop future-proof approaches such as violet-teaming AI systems.

DOING BETTER: One of the upsides of the current zeitgeist is that there is much more discussion of ethics and societal impacts than there was when social media platforms were just taking off. That said, history may not repeat itself, but it certainly rhymes. We are struggling with similar, if not identical, issues in governing a powerful general-purpose technology, one that is even more distributed and deeply impactful than previous technological waves.

COMPLEXITY AND STOCHASTICITY: Another complicating factor is the complexity of AI systems, especially production-grade systems, where it isn't easy to apply policies and governance mechanisms that comprehensively cover all failure modes. The non-deterministic behavior of these systems only exacerbates the problem. Companies hesitate to deploy such systems in high-stakes scenarios because of it, and it creates friction between product development and the governance, risk, and compliance (GRC) functions within an organization.

HINDSIGHT IS USEFUL: So what we need is not just a retrospective on how we approached policymaking and regulation in the era of social media, but a forward-looking approach that accounts for some of the things I mentioned in this and the previous response, one responsive and flexible enough to handle a fast-evolving technology like GenAI. Lessons from the previous era include getting the fundamentals right, as we always should, such as putting privacy regulations in order to enable responsible innovation in the field of GenAI.

GHOSTS OF REGULATIONS PAST: Not having invested enough in getting those privacy regulations right has only made things harder now. What we need to start working on immediately includes addressing privacy, data rights, and copyright, and empowering regulatory agencies to take meaningful action. The goal is not to stifle innovation; it is to facilitate it responsibly. What allows an F1 driver to go really fast on the racetrack is not just the powerful engine but the high-quality brakes, which give them the confidence that they have full control when they need it.

Abhishek Gupta

Founder and Principal Researcher, Montreal AI Ethics Institute

Director, Responsible AI, Boston Consulting Group (BCG)

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

Chair, Standards Working Group, Green Software Foundation

Author, AI Ethics Brief and State of AI Ethics Report

https://www.linkedin.com/in/abhishekguptamcgill/