OpenAI has quietly showcased its latest AI breakthrough, codenamed “Strawberry,” to U.S. national security officials, according to a new report by The Information. This demonstration aligns with the company's recently announced safety and security protocols, which include providing early government access to new AI models.
Strawberry, previously known as Q*, is reportedly a significant technical leap forward in AI capabilities, particularly in complex problem-solving and reasoning. The technology is expected to excel in areas where current AI models struggle, such as mathematical problems and programming tasks.
According to sources familiar with the project, OpenAI is using Strawberry to generate high-quality training data for its next flagship language model, codenamed "Orion." This approach could potentially reduce errors and "hallucinations" in AI responses by providing more accurate examples of complex reasoning.
However, OpenAI’s ambitions with Strawberry extend beyond improving Orion. The company is also exploring a smaller, distilled version of Strawberry for potential integration into chat-based applications like ChatGPT. This model could enhance reasoning capabilities in scenarios where users require more thoughtful, detailed answers rather than quick responses.
Earlier this month, OpenAI announced its collaboration with the U.S. AI Safety Institute, granting the government early access to its next major AI model for safety testing. The agreement is part of a growing trend where frontier model providers (Google, Meta, Anthropic, Microsoft et al.) engage with government agencies to ensure that advancements in AI align with national security and safety concerns. The idea is to prioritize transparency, preemptively address concerns related to AI’s rapid development, and stave off unnecessary regulation.
For OpenAI, these efforts are part of a larger strategy to engage policymakers and strengthen its safety credentials. The partnership with the U.S. AI Safety Institute signals a deliberate shift toward greater collaboration with governments—a move mirrored by a similar agreement with the UK. This renewed focus on safety comes in response to earlier criticisms that the company may have been sidelining safety research, particularly after the controversial disbanding of its superalignment team, which had been tasked with developing ways to control future superintelligent AI systems.
Since then, however, OpenAI has established a Safety and Security Committee and added safety and security experts to its board, including former NSA Director General Paul Nakasone and Carnegie Mellon's Zico Kolter. The company has also committed 20% of its computing resources to safety efforts, further underscoring its stated dedication to responsible AI development.
As OpenAI continues its push toward larger and more capable AI models, the community remains divided on whether these measures provide sufficient safeguards against potential risks.
While the company remains tight-lipped about specific product launches, OpenAI’s recent interactions with government officials and its ongoing work on Orion suggest that something big is on the horizon. Whether Strawberry will emerge as a standalone product or remain a critical tool for training future AI models like Orion, its development could mark a turning point for the AI industry.