Anthropic, Google, Microsoft, and OpenAI Establish Frontier Model Forum to Promote AI Safety


Google, Microsoft, OpenAI and Anthropic today announced the launch of the Frontier Model Forum, a new industry initiative focused on promoting the safe and responsible development of frontier AI models. The news comes on the heels of last week's gathering of top AI companies at the White House, where attendees committed to new voluntary governance measures around responsible AI development.

Frontier models are the largest, most advanced AI systems, those that currently exceed the capabilities of existing models across a variety of tasks. The Forum aims to bring together leading technology companies to collaborate on AI safety research, establish best practices, and support policymakers in governing these rapidly evolving technologies.

The need for such collaboration comes as recent breakthroughs in large language models like GPT-4 have highlighted both the vast potential and possible risks of unrestrained AI progress. While holding great promise to benefit humanity, advanced AI also poses complex technical challenges and potential harms which no single organization can fully address alone.

"We're excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation," said Kent Walker, President of Global Affairs at Google's parent company Alphabet. "We're all going to need to work together to make sure AI benefits everyone."

The Frontier Model Forum has outlined three core objectives:

  • Advancing AI safety research and enabling standardized safety evaluations of frontier models to minimize risks.
  • Identifying best practices for responsible development and deployment, and helping the public understand frontier model capabilities and limitations.
  • Collaborating with policymakers, academics, and civil society to share knowledge about trust and safety risks.

The Forum also aims to support the development of AI applications that address pressing global priorities such as healthcare, climate change mitigation, and cybersecurity.

Over the next year, the Forum will focus on three key areas:

  • Best Practices: Promoting knowledge sharing and standards on safety and responsibility in collaboration with industry, government, and civil society.
  • Safety Research: Funding and coordinating critical open research questions on AI safety through initiatives like developing public benchmarks.
  • Information Sharing: Establishing mechanisms for responsible disclosure among companies and governments on AI risks.

Membership in the Forum is open to organizations that develop and deploy frontier models, demonstrate a strong commitment to frontier model safety, and are willing to actively contribute to the Forum's mission. All organizations meeting these criteria are invited to join. An Advisory Board representing diverse perspectives will guide the Forum's strategy and priorities.

While a welcome step, the Forum is just one part of a broader ecosystem needed to govern AI. Initiatives from government bodies like the EU and G7 have laid groundwork, and collaboration with advocacy groups like the Partnership on AI remains vital.

"It is vital that AI companies align on common ground and advance thoughtful safety practices to ensure powerful AI tools have the broadest benefit possible," noted Anna Makanju, VP of Policy at OpenAI.

The Forum aims to consult widely and feed into existing efforts by policymakers and researchers. But continued scrutiny will be needed to ensure the organization lives up to its promises of transparency and commitment to the public good.

Ultimately, the Forum represents a watershed moment: tech companies acknowledging that AI safety requires cooperation, not just competition to build the most capable models.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," stated Brad Smith, President of Microsoft. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly."

The path forward will require sustained engagement between companies, governments, and civil society to ensure frontier AI evolves responsibly. While questions remain, the Forum signals a meaningful step towards collective responsibility in shepherding these transformative technologies.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
