Top AI Companies Meet at White House, Commit to Voluntary Governance Measures

Image credit: OpenAI

The White House yesterday convened a meeting with senior leadership from seven major US AI companies and research labs for pivotal talks on governance. The gathering reflected growing momentum toward collaboratively establishing guardrails to guide AI development for societal benefit.

Following the meeting, companies announced voluntary commitments intended as interim measures until government policies can be enacted. While not official legislation, they represent significant progress aligning AI governance with shared human values.

The White House says it is currently in the process of developing an executive order and will pursue bipartisan legislation.

The commitments focus on safety evaluations, cybersecurity, synthetic media detection, transparency, and managing societal risks. They underline a proactive, collaborative approach to governance between industry and government.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI were represented at the meeting by:

  • Brad Smith, President, Microsoft
  • Kent Walker, President, Google
  • Dario Amodei, CEO, Anthropic
  • Mustafa Suleyman, CEO, Inflection AI
  • Nick Clegg, President, Meta
  • Greg Brockman, President, OpenAI
  • Adam Selipsky, CEO, Amazon Web Services

The companies have committed to the following:

  1. Internal and External Red-Teaming: Companies commit to rigorous internal and external red-teaming of models, focusing on areas including misuse, societal risks, and national security concerns. They pledge to advance research in this area, to develop a robust red-teaming regime for major public releases, and to publicly disclose their safety procedures.
  2. Information Sharing: Companies commit to fostering information sharing among themselves and governments regarding trust and safety risks, emergent capabilities, and attempts to bypass safeguards. They aim to establish or join forums to develop, advance, and adopt shared standards and best practices for AI safety.
  3. Investment in Cybersecurity: Companies commit to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, treating them as core intellectual property.
  4. Third-Party Discovery Incentives: Companies commit to incentivize third-party discovery and reporting of system vulnerabilities, establishing bounty systems, contests, or prizes for the responsible disclosure of weaknesses.
  5. User Understanding of AI-Generated Content: Companies commit to develop and deploy mechanisms enabling users to understand if audio or visual content is AI-generated, including the creation of robust provenance and watermarking systems for AI-generated content.
  6. Public Reporting of Capabilities and Limitations: Companies commit to publicly report model capabilities, limitations, and domains of appropriate and inappropriate use, including discussions of societal risks such as effects on fairness and bias.
  7. Research on Societal Risks: Companies commit to prioritize research on societal risks posed by AI systems, including efforts to avoid harmful bias and discrimination, and to protect privacy.
  8. Development of Frontier AI Systems: Companies commit to develop and deploy frontier AI systems to help address society’s greatest challenges, such as climate change mitigation, early cancer detection, and combating cyber threats. They also pledge to support initiatives fostering education and training in AI.

While voluntary, the commitments made at this meeting demonstrate a proactive approach to managing the risks and benefits of AI and set an early benchmark for AI governance. The collective action taken by these industry titans sends a clear message about their dedication to the responsible development and use of AI technology.

This comes as China last week issued the world's first comprehensive regulatory framework governing generative AI, hoping to encourage innovation while upholding the country's ideological values and maintaining tight control over online content.

The contrast between China's top-down regulatory approach and the voluntary commitments proposed by industry leaders in the West underscores the divergent paths being explored globally for AI governance. It also highlights the complexities and challenges involved in creating an effective governance framework for a technology as transformative and far-reaching as AI.

Many governance experts argue formal regulations and bipartisan legislation will ultimately be essential to manage risks and maximize benefits from transformative AI systems. Others fear that stringent regulations could stifle innovation, slow down the pace of AI development, and give an undue advantage to established players who can more easily navigate the regulatory landscape. Striking the right balance between principles and regulation will be critical.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
