
Anthropic just revealed that it has built custom AI models specifically for US national security customers, and they're already deployed at "the highest level" of classified government operations.
The news matters because it completes a remarkable transformation across the AI industry. Just 18 months ago, OpenAI prohibited any military use of its technology. Today, its models are heading to actual battlefields through a partnership with defense contractor Anduril. Now Anthropic—long seen as the more cautious, safety-focused alternative to OpenAI—is openly embracing the defense market with purpose-built "Claude Gov" models.
Key Points:
- Claude Gov is Anthropic's AI line designed specifically for classified US national security use.
- Models are fine-tuned for intelligence work, threat analysis, and handling sensitive data.
- The move puts Anthropic in direct competition with OpenAI and Palantir for what has become a multibillion-dollar government AI market.
The Money Trail
The timing isn't coincidental. Anthropic is preparing to raise new funding at a potential $40 billion valuation, and government contracts represent one of the few reliable paths to massive AI revenue that isn't just selling chatbot subscriptions to consumers.
Palantir's Maven Smart System alone is worth over $1 billion to the Pentagon, and that's just one program. Scale AI just landed a multimillion-dollar deal for the Pentagon's "flagship" AI agent program. Venture firms more than doubled their defense tech investment to $40 billion by 2021, and that market is now delivering the returns AI companies desperately need.
What makes Anthropic's announcement particularly significant is what the company isn't saying. These "Claude Gov" models feature "improved handling of classified materials, as the models refuse less when engaging with classified information," according to the company. Translation: the safety guardrails that prevent regular Claude from discussing certain topics have been dialed down for government users.
That's a big deal. AI safety guardrails are designed to prevent models from generating harmful, biased, or dangerous content. When Anthropic says its government models "refuse less," it's acknowledging that national security work requires AI that can engage with sensitive topics that consumer models won't touch.
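To make the "refuse less" idea concrete, here is a minimal, purely illustrative sketch of a tiered refusal policy. Every name in it (the topic labels, the tiers, the lookup table) is hypothetical; this is not how Claude Gov or any Anthropic system is implemented, and in practice refusal behavior comes from model training and deployment-level policy, not a dictionary lookup.

```python
# Illustrative only: a toy policy check showing, in principle, what a
# "refuse less" deployment tier means. Topics, tiers, and rules here are
# invented for the example and do not describe Anthropic's actual system.

RESTRICTED_TOPICS = {
    "signals_intelligence": {"consumer": "refuse", "government": "allow"},
    "threat_assessment":    {"consumer": "refuse", "government": "allow"},
    "weapons_design":       {"consumer": "refuse", "government": "refuse"},
}

def should_refuse(topic: str, tier: str) -> bool:
    """Return True if a request tagged with `topic` should be refused
    under the given deployment tier ("consumer" or "government")."""
    policy = RESTRICTED_TOPICS.get(topic, {"consumer": "allow", "government": "allow"})
    return policy[tier] == "refuse"

# A tier that "refuses less" simply maps more topics to "allow",
# while hard limits stay refused in every tier.
print(should_refuse("threat_assessment", "consumer"))    # True
print(should_refuse("threat_assessment", "government"))  # False
print(should_refuse("weapons_design", "government"))     # True
```

The point of the sketch is only that "refusing less" is a policy choice about where the line sits for a given customer class, not the removal of every line.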
Playing Catch-Up
Anthropic's previous government work came through partnerships with Palantir and AWS, essentially making it a subcontractor. These new Claude Gov models suggest the company wants to sell directly to agencies rather than going through intermediaries—and capture more of the revenue.
The company is late to this party. OpenAI has been aggressively courting Pentagon contracts, recently hiring security executives from Palantir and completely reversing its stance on military applications. Meta opened its Llama models to military use. Even traditionally enterprise-focused companies are chasing defense dollars.
But Anthropic's approach differs in a crucial way: transparency about the safety trade-offs. While other companies have quietly relaxed their restrictions, Anthropic is explicitly stating that its government models operate under different rules. Whether that honesty helps or hurts in the long run remains to be seen.
The Bigger Picture
This isn't just about one company's policy shift. Anthropic recently removed several Biden-era AI safety commitments from its website, signaling a broader recalibration as the industry adapts to the Trump administration's different approach to AI regulation.
The rapid militarization of AI companies reflects a simple reality: building large language models is expensive, and government contracts pay well. Companies like Anthropic are pursuing FedRAMP accreditation to make government sales easier, treating national security as a vertical market like finance or healthcare.
For the broader tech industry, Anthropic's announcement confirms that the age of AI companies avoiding defense work is over. The question now isn't whether AI will be used for national security—it's which companies will dominate that market, and how much they'll compromise their original safety principles to win those contracts.
Anthropic says its models "underwent the same rigorous safety testing as all of our Claude models," but when those models are designed to "refuse less" in classified settings, it raises questions about what "safety" means when billions of dollars are at stake.