
The Biden administration issued a comprehensive national security memorandum Thursday directing defense and intelligence agencies to accelerate their adoption of artificial intelligence while establishing new guardrails for its use. The directive marks the government's most detailed roadmap yet for deploying AI in national security applications.
"There is probably no other technology that will be more critical to our national security in the years ahead," said National Security Advisor Jake Sullivan in a speech announcing the policy. "We have to be faster in deploying AI in our national security enterprise than America's rivals are in theirs."
The memorandum requires federal agencies to take several concrete steps:
- Designate Chief AI Officers to oversee AI implementation and risk management
- Reform hiring practices to better attract and retain AI talent
- Streamline procurement processes to work more effectively with AI companies
- Monitor and assess AI systems for privacy violations, bias, discrimination and other potential harms
The policy establishes the AI Safety Institute within the Commerce Department as the primary point of contact between government and private AI developers. Within six months, the institute must begin testing frontier AI models before their public release to evaluate potential security risks.
A key focus is protecting U.S. AI advantages against foreign competitors. The directive makes collecting intelligence on threats to the U.S. AI sector a top priority and requires agencies to quickly share cybersecurity information with AI developers.
The memorandum also addresses infrastructure needs, with Sullivan warning that the U.S. must rapidly expand its power grid capacity by "tens or even hundreds of gigawatts" to support AI development – potentially representing up to 25% of current U.S. electricity consumption.
For agencies deploying AI systems deemed "high-impact," the policy mandates minimum risk management practices, including assessing data quality, testing for bias, and maintaining human oversight. However, the agencies' Chief AI Officers can waive these requirements if they would "create an unacceptable impediment to critical operations."
"Uncertainty breeds caution," Sullivan noted. "When we lack confidence about safety and reliability, we're slower to experiment, to adopt, to use new capabilities – and we just can't afford to do that in today's strategic landscape."
The directive comes as Congress has yet to pass comprehensive AI regulation. It builds on Biden's October 2023 executive order on AI safety and arrives ahead of a global AI safety summit in San Francisco next month.
While the memorandum sets an aggressive implementation timeline, the approaching presidential election leaves some provisions in doubt. The Republican Party platform calls for repealing Biden's earlier AI executive order, though the party's specific positions on national security applications of AI remain unclear.
Human rights advocates note that while the policy requires agencies to protect privacy and prevent discrimination, similar AI technologies developed for national security have previously been adopted by law enforcement for domestic surveillance without public disclosure.
The administration maintains that clear rules will accelerate responsible AI adoption by removing uncertainty about what agencies can and cannot do. The key test of the policy's effectiveness will be implementing these ambitious directives across the sprawling national security apparatus.