Microsoft Deploys Secure, Air-Gapped GPT-4 AI Model for US Intelligence Agencies

Microsoft has deployed a top-secret AI platform designed for US intelligence agencies. The platform, based on OpenAI's GPT-4 model, is the first of its kind to operate entirely offline and will allow agencies to process highly classified information.

Microsoft's CTO for Strategic Missions and Technology, William Chappell, said the company had spent 18 months developing the technology, including overhauling an existing AI supercomputer in Iowa.

The platform is designed to be used on a secure, government-only network, addressing the security concerns that have previously prevented intelligence agencies from adopting AI technology.

"This is the first time we've ever had an isolated version—when isolated means it's not connected to the internet—and it's on a special network that's only accessible by the US government," Chappell said.

The platform went live on Thursday, but it must still be tested and accredited by the intelligence community before the Pentagon and other government departments can use it.

Chappell said the platform is designed to read files but not learn from them, ensuring that secret information is not absorbed into the system. "You don't want it to learn on the questions that you're asking and then somehow reveal that information," he said.

Intelligence agencies have been keen to adopt AI technology to help process the vast amounts of data they collect. Sheetal Patel, assistant director of the CIA's Transnational and Technology Mission Center, said at a security conference last month: "There is a race to get generative AI onto intelligence data. The first country to use generative AI for their intelligence would win that race, and I want it to be us."

While the platform offers a secure way for intelligence agencies to adopt AI technology, there are potential drawbacks. One serious concern is the inherent tendency of AI language models to confabulate, or make things up, which could mislead officials if outputs are not properly verified.

It remains to be seen how the platform will be overseen, limited, and audited for accuracy.
