OpenAI to Provide Government with Early Access to Next Foundation Model

OpenAI CEO Sam Altman has announced an impending agreement with the U.S. AI Safety Institute to provide early access to the company's next major AI model for safety testing. The collaboration marks a significant step in the ongoing dance between AI innovation and responsible development.

The partnership mirrors a recent deal OpenAI struck with the UK government, granting priority access to models for research and safety purposes. These government collaborations are becoming increasingly common in the AI industry as companies navigate the complex landscape of rapid advancement and public accountability.

However, this latest move comes against a backdrop of controversy. Earlier this year, OpenAI disbanded its superalignment team, which was led by former chief scientist Ilya Sutskever and focused on developing controls for superintelligent AI systems. The move sparked criticism that the company was prioritizing product development over safety research.

OpenAI's record on AI safety and security is complex and evolving. In recent months, the company has taken several noteworthy steps in this arena. May saw the establishment of a Safety and Security Committee to evaluate AI practices and advise the board on critical decisions. The following month, former NSA Director General Paul Nakasone joined OpenAI's board and safety committee, bringing significant national security expertise to the table.

In July, the company pledged to allocate 20% of its computing resources to safety efforts. At the same time, it eliminated non-disparagement clauses, potentially opening the door for more transparent internal discourse. And of course, we can't ignore how much OpenAI has contributed to the broader AI safety discourse by open-sourcing research and sharing frameworks and policy recommendations.

These actions paint a picture of a company actively engaged in AI safety issues. However, they also continue to raise questions about the balance between innovation and caution in AI development. The tech community continues to debate whether these measures are sufficient safeguards against the potential risks of advanced AI systems.

The timing of this U.S. AI Safety Institute agreement is also particularly noteworthy. Just this week, OpenAI's Vice President of Global Affairs Anna Makanju endorsed three Senate bills, including the Future of AI Innovation Act, which provides congressional backing for the new U.S. AI Safety Institute as it works to build best practices that minimize the potential risks posed by this new technology. This sequence of events has raised eyebrows, with some observers questioning whether OpenAI is attempting to exert influence over federal AI policymaking.

Adding to this complex picture are Altman's position on the Department of Homeland Security's AI Safety and Security Board and OpenAI's dramatically increased lobbying expenditures, which jumped from $260,000 in all of 2023 to $800,000 in just the first six months of 2024.

If nothing else, the collaboration between OpenAI and the U.S. AI Safety Institute underscores the growing recognition that ensuring AI safety requires a balance between private-sector innovation and government oversight. As AI capabilities continue to advance rapidly, this interplay between tech companies, government, and regulatory frameworks becomes not just important, but critical.

Yet, this partnership also casts a spotlight on a pressing concern: the potential for regulatory capture in the rapidly evolving AI landscape. As leading AI companies like OpenAI become more entwined with government agencies, the public must remain vigilant. The challenge lies in fostering collaboration that enhances safety without compromising the integrity of regulatory processes.

This development sets the stage for a new chapter in AI governance. It prompts us to consider how we can harness the collective expertise of industry and government while maintaining the checks and balances necessary for responsible AI development. Onwards and upwards.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
