The United States Space Force has instituted a temporary ban on the use of web-based artificial intelligence tools like ChatGPT, citing concerns over potential data leaks and security vulnerabilities.
The ban, detailed in a memorandum dated September 29 and sent to the Guardian Workforce—official parlance for Space Force members—prohibits the use of government data on these AI platforms.
It explicitly states that such tools "are not authorized" for use on official systems without special approval. The move comes as tools such as OpenAI’s ChatGPT have surged in adoption. These services rely on large language models (LLMs), which are trained on enormous troves of data to predict and generate text. Beyond producing fluent language, they can sift through vast document collections, extract key information, and present it in a range of styles.
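The core idea behind "predicting the next word" can be illustrated with a deliberately tiny sketch. This is only a toy bigram model for intuition: real LLMs use neural networks trained on vastly larger corpora, not word-pair counts, and all names here (`train_bigram_model`, `generate`) are illustrative inventions.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow each word in the training text."""
    words = text.split()
    counts = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(model, start, length=5, seed=0):
    """Repeatedly pick a plausible next word, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model predicts the next word and the model generates text"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The security concern in the memo maps onto this picture directly: whatever text goes into training (or into a prompt) shapes what the model can later reproduce, which is why feeding government data into a web-based service is treated as a potential leak.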
The memo, authored by Lisa Costa, the Space Force's Chief Technology and Innovation Officer, said the pause aims to safeguard personnel and agency data while officials determine appropriate policies for integrating AI capabilities that support missions. Costa acknowledged the transformative potential of generative AI, but emphasized the need for "responsible" adoption given cybersecurity, data handling, and procurement uncertainties.
While the specifics remain under wraps, experts have highlighted the potential risks LLMs could pose. The extensive, and occasionally non-public, data used to train these models may inadvertently leak or be exposed through hacking. Costa has committed to providing updated, more detailed guidance within 30 days.
The Space Force's move has already impacted at least 500 personnel who were utilizing a secure generative AI platform called Ask Sage, according to the company's founder Nicolas Chaillan.
Chaillan criticized the pause as short-sighted given the Pentagon's calls for accelerated AI adoption, and warned that such decisions risk leaving the US lagging behind global competitors, notably China. He communicated these concerns directly to Costa and several high-ranking defense officials, pointing out that Ask Sage already complies with security mandates and is widely used across defense sectors. The software has found favor with numerous defense personnel, some of whom pay the subscription cost out of pocket to facilitate their work.
Yet security experts caution that there are still many unknown risks around large language models. While valuable, deploying them requires careful governance.
The temporary halt follows broader calls to carefully govern AI as its influence grows. While no technology is risk-free, the Space Force is showing seriousness about addressing generative AI's specific vulnerabilities before permitting unfettered usage. Responsible innovation requires identifying dangers as much as opportunities.
The agency's measured approach may frustrate those advocating faster AI adoption. But prudence now can prevent larger problems later. With thoughtfully designed policies, AI's national security applications can be realized consistent with the law and American values. For technologies shaping the future, getting things right matters more than getting them right now.