Moving the needle on the voluntary AI commitments to the White House

The White House is also working with allies to establish a strong international framework to govern AI. It has already consulted Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.

The recent voluntary commitments secured by the White House from the core developers of advanced AI systems (OpenAI, Microsoft, Anthropic, Inflection, Amazon, Google, and Meta) present an important first step in building and using safe, secure, and trustworthy AI. While it is easy to dismiss voluntary commitments as "ethics washing," we believe that they are a welcome change.

MOVING FORWARD: These voluntary commitments will help move the AI ecosystem in the right direction, serve as a foundation for operationalizing the Blueprint for an AI Bill of Rights, and bring together more actors under a shared banner to advance responsible innovation. The artifacts and knowledge generated from these efforts will build capacity across the ecosystem, ultimately making Responsible AI the norm rather than the exception.

WHAT THEY GET RIGHT: Several key items push the Responsible AI agenda forward.

Complementary initiatives like the Frontier Model Forum, comprising the companies that made these voluntary commitments, will boost the health of the ecosystem through knowledge exchange on best practices for operationalizing Responsible AI, particularly for advanced AI systems.

Given the broader layoffs in the technology world, especially in trust and safety teams, a renewed commitment to auditing and analysis through public release will allow companies to lean on external expertise to fill internal capacity shortages.

Rigorous documentation in the form of audit reports and disclosures will give agencies such as the FTC, which is tasked with protecting consumers from deceptive and unfair practices, better evidence for enforcement.

WHAT CAN BE IMPROVED: No initiative is perfect, and a few tweaks and enhancements would boost the efficacy of these voluntary commitments.

Notably missing from the commitments is a focus on mitigating the environmental impacts of AI systems, which can be significant in the case of foundation models.

Civil society engagement via consultations is mentioned as a pillar for achieving these commitments. Efficacy here requires deep engagement with civil society, the people who will ultimately be affected by these systems.

Providing timelines on when and how these commitments will be operationalized will boost trust from the general public, especially given the many ethical violations involving AI in the recent past. Notably, it is important that the implementation of responsible AI commitments keeps pace with product releases.


The commitments are a great mechanism to lay the foundations for concrete action on operationalizing Responsible AI. Hopefully, they will trigger greater financial commitments towards safety, security, and trust in AI systems, including investments in internal talent training, capacity building, and technological solutions for AI governance. Once the Biden-Harris administration issues an Executive Order and works on bipartisan legislation, the voluntary commitments will have even greater impact, supported by the law of the land.

Abhishek Gupta

Founder and Principal Researcher, Montreal AI Ethics Institute

Director, Responsible AI, Boston Consulting Group (BCG)

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

Chair, Standards Working Group, Green Software Foundation

Author, AI Ethics Brief and State of AI Ethics Report

https://www.linkedin.com/in/abhishekguptamcgill/