AI Sparks Security Fears at US Space Force
Adding its name to the list of companies and organizations prohibiting the use of generative AI, the United States Space Force issued a temporary order to its servicemembers (or “Guardians”) halting the use of personal generative AI accounts while on duty.
The September 29 memo, first reported by Reuters and confirmed to Decrypt by a Space Force representative, was issued by Lisa Costa—U.S. Space Force Deputy Chief of Space Operations for Technology and Innovation—and aimed to provide guidance on the responsible use of generative AI and large language models.
“A strategic pause on the secure integration of generative AI and large language models within the U.S. Space Force has been implemented as we determine the best path forward to align this capability into the USSF mission,” Space Force spokesperson Major Tanya Downsworth told Decrypt. “This is a temporary measure to protect the data of our service and Guardians.”
Although the memo did not mention a specific AI model being used by servicemembers, the Space Force is the latest U.S. government body to acknowledge the use of AI tools and attempt to put up guardrails.
In July, the Chief Administrative Office of the U.S. House of Representatives issued a letter to staffers limiting the use of OpenAI’s ChatGPT, saying that only the subscription-based premium ChatGPT Plus service was allowed—and only under specific conditions.
“Every Guardian has the responsibility to comply with all cybersecurity, data handling, and procurement requirements when purchasing and using [generative AI],” Costa wrote in the memorandum.
While the department does not track the number of Guardians signing up to use generative AI tools, Downsworth said that it does monitor activity on its networks. Cybersecurity and privacy have become significant concerns for policymakers and corporations alike. In May, Samsung and Apple banned employees from using ChatGPT, citing fears of data and intellectual property leaks, since such programs ingest the data users enter.
The Space Force memo listed several key points for employees to adhere to, including that all AI model trials must be approved by the CTIO. AI accounts purchased for personal use must not be affiliated or connected with a Guardian’s government identity, organization, location, or function. Certain types of generative AI tools are furthermore not allowed on government devices, and government data must not be used in third-party AI models.
With the launch of more sophisticated generative AI chatbots like ChatGPT, Google Bard, and Anthropic’s Claude this year, AI has rapidly entered the mainstream. Everyone from students and teachers to corporate engineers and developers is using such AI chatbots for quick answers, as well as solutions to complex problems and questions.
Echoing the storyline from the Black Mirror episode “Joan is Awful,” the memo also told servicemembers not to accept or agree to any generative AI or large language model terms of service (TOS) or end-user license agreements without prior review and approval.
Despite these concerns and the “strategic pause,” the Space Force is optimistic about the future use of artificial intelligence in space and military endeavors.
“These technologies will undoubtedly revolutionize our workforce and enhance Guardians’ ability to operate at speed in areas such as Space Domain Awareness and Command and Control,” Downsworth concluded. “The Space Force’s CTIO is actively involved in the Department of Defense’s GenAI task force, TF-Lima, which aims to harness the power of these technologies in a responsible and strategic manner.”