Top US Companies Join AI Safety Institute Consortium

Over 200 leading AI companies, academic institutions, and civil society organizations have signed on to support the US government's efforts to develop concrete AI safety standards and guidelines. Announced Thursday by Secretary of Commerce Gina Raimondo, the U.S. AI Safety Institute Consortium (AISIC) represents an unprecedented collaboration across the AI community on the critical issues of safety, trust, and the responsible development of rapidly advancing AI systems.

Housed under the Commerce Department's National Institute of Standards and Technology (NIST) U.S. AI Safety Institute, the AISIC membership roster reads like a "who's who" of AI heavyweights, including Google, Meta, Microsoft, Apple, Nvidia, and other tech titans. The consortium's over 200 inaugural members span industry leaders creating state-of-the-art AI, academic researchers studying its societal impacts, government entities evaluating its risks, and civil rights groups advocating for its ethical use.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," said Secretary Raimondo in announcing the consortium. "That's precisely what the U.S. AI Safety Institute Consortium is set up to help us do."

The establishment of the AISIC represents a crucial step in implementing President Biden's landmark October executive order on AI, which directed federal agencies to address emerging issues like AI security, red-team testing, deepfake detection, and algorithmic discrimination.

The AISIC will tackle many of these challenges directly by facilitating collaboration on developing guidelines and standards for AI safety testing and risk management. Consortium working groups plan to focus on priorities like creating frameworks for evaluating AI system security, establishing transparency requirements to detect synthetic media, and devising operational governance policies to reduce harmful bias and its effects.

Bringing together such a diversity of stakeholders is key to this effort, explained Under Secretary of Commerce for Standards and Technology Laurie E. Locascio. "We are going to need the best and the brightest who represent a diversity of thought, of experiences, of expertise, to ensure that we can reap the benefits of AI that is trustworthy and safe," she said.

Through this unprecedented public-private partnership, the Biden Administration aims to solidify America's leadership in AI by proving it can innovatively manage risks without stifling progress. If the AISIC is successful in its ambitious mission, its guidelines and standards could become global benchmarks for AI accountability. That could go a long way towards ensuring that AI is broadly beneficial and lifts up society as a whole rather than further dividing it.
