
OpenAI is turning its offices into something closer to a classified government facility than a tech startup, complete with fingerprint scanners and computers that never touch the internet, according to a new Financial Times report. The reason? The company believes Chinese rivals are systematically stealing its AI breakthroughs.
Key Points:
- OpenAI implemented "information tenting" policies that drastically limit which employees can access sensitive algorithms and discuss projects in shared spaces
- The company now uses biometric fingerprint scanners to control office access and keeps proprietary technology on isolated, offline computer systems
- These security measures accelerated after OpenAI accused Chinese startup DeepSeek in January of using "distillation" techniques to improperly copy its GPT models
The changes started quietly last summer, but they kicked into high gear after DeepSeek's shock AI release in January sent tech stocks tumbling and raised uncomfortable questions about how a Chinese startup built such capable models so cheaply. OpenAI's answer: it believes DeepSeek copied its homework.
"Distillation" is the technical term for what OpenAI claims happened. It's a common technique where you train a smaller, cheaper AI model to mimic a larger, more expensive one by feeding it the bigger model's outputs. Think of it like learning to paint by copying a master's work stroke by stroke. The problem? This typically violates the terms of service of companies like OpenAI.
According to multiple sources close to the company, OpenAI believes it has evidence that DeepSeek used this approach to recreate the performance of ChatGPT at a fraction of the cost. David Sacks, President Trump's AI czar, went further, telling Fox News there's "substantial evidence" of what he called knowledge theft.
DeepSeek hasn't responded to these allegations, but the episode transformed how OpenAI thinks about security. The company now operates under "information tenting" policies that would make intelligence agencies proud. When OpenAI was developing its o1 reasoning model last year—internally codenamed "Strawberry"—only employees specifically cleared for the project could discuss it in communal office spaces. Others had to stay quiet or move conversations elsewhere.
The restrictions got so tight that some staff found them almost impossible to work with. "You either had everything or nothing," one person familiar with the policies told the Financial Times. The company has since refined the approach, but the core principle remains: compartmentalization is king.
Physical security got the full treatment too. OpenAI now requires fingerprint scans to access certain office areas, keeps its most sensitive technology on computers that never connect to the internet, and operates under a "deny-by-default" policy where nothing can access external networks without explicit approval. The company has also beefed up security at its data centers and hired cybersecurity veterans from the defense world.
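For a sense of what "deny-by-default" means in practice, here is a tiny, hypothetical sketch of an egress check: outbound connections are refused unless the destination is on an explicit allowlist. The hostnames and structure are invented for illustration and are not OpenAI's actual configuration.

```python
# Hypothetical "deny-by-default" egress check: anything not explicitly approved
# is blocked. Hostnames below are placeholders, not a real configuration.
from urllib.parse import urlparse

APPROVED_HOSTS = {"updates.internal.example", "pypi-mirror.internal.example"}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host was explicitly approved."""
    host = urlparse(url).hostname
    return host in APPROVED_HOSTS  # everything else is denied by default

print(egress_allowed("https://pypi-mirror.internal.example/simple/"))  # True: on the allowlist
print(egress_allowed("https://example.com/"))                          # False: never approved
```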
The security overhaul reflects a much bigger problem facing Silicon Valley. Chinese espionage against US tech companies has exploded in recent years, with AI firms becoming particularly juicy targets. Earlier this year, the Justice Department charged former Google engineer Linwei Ding with stealing AI trade secrets for Chinese companies. And unlike stolen chip designs or biotech formulas, AI models are essentially code, meaning stolen algorithms can be deployed almost immediately.
OpenAI brought in heavy hitters to lead the response. The company hired Dane Stuckey as its chief information security officer from Palantir, the data intelligence firm known for its extensive military and government work. Retired Army General Paul Nakasone joined OpenAI's board specifically to help oversee cybersecurity threats. These aren't the kind of people you hire unless you're expecting sophisticated adversaries.
The broader context makes OpenAI's paranoia understandable. US authorities have been sounding alarms about Chinese espionage for years, but the warnings have intensified as AI becomes central to national security competition. Multiple universities have ended partnerships with Chinese institutions amid espionage concerns.
But the DeepSeek controversy highlights how murky the rules actually are. While distillation typically violates terms of service, the practice is widespread in the AI industry. The legal landscape around AI training also remains a mess, with copyright law struggling to keep up with technological reality. Proving that DeepSeek specifically copied OpenAI's models could be extremely difficult, especially since only the final model is public, not the training data or process.
The irony is thick: an industry built on the free flow of information and open research collaboration is rapidly closing itself off. The same companies that once celebrated the democratizing potential of AI are now building fortresses to keep competitors out.