U.S. Department of War Signs Deal with OpenAI for Military Use
Klaus Solko, co-executive editor

As of Feb. 27, OpenAI has a partnership with the US Department of War. OpenAI, the company behind ChatGPT, announced the agreement in an official statement on Feb. 28, which was updated on March 2. The statement lists no individual author, only OpenAI, so it is unclear whether it was written with AI.
OpenAI is the first AI company to partner with the Pentagon. As their statement puts it, “We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” Anthropic, the company that owns Claude, also held conversations with the Department of War. Anthropic claims to be “the first frontier AI company to deploy our models in the US government’s classified networks.” That statement, released on Feb. 26, also lists no author, so it is once again unclear whether it was written with AI.
Anthropic set two boundaries in its proposed contract with the Department of War: its AI systems cannot be used for mass domestic surveillance and cannot be used to power fully autonomous weapons. “AI has utility [for] things like making proteins and, like, mapping them. Like with science specifically. [However] AI gets things wrong and when you are dealing with people’s lives at stake it could get things wrong […] People will die in response,” commented Augsburg student Sam Bartelt when asked what they think about these being the only two limitations on this technology.
The reason Anthropic gave for not wanting AI technology to be used to surveil citizens is that “under current law, the government can purchase detailed records of Americans’ movements, web browsing and associations from public sources without obtaining a warrant […] powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at [a] massive scale.” There does not appear to be any safeguard preventing this surveillance technology from being used on people outside the United States.
Anthropic claimed that these two safeguards were the reason it was unable to come to an agreement with the Department of War, stating that “the Department of War has stated they will only contract with AI companies who accede to ‘any lawful use’ and remove the safeguards in the cases mentioned above.”
OpenAI has the same two limitations on its usage within the Department of War, along with adding “No use of OpenAI technology for high-stakes automated decisions.” OpenAI also claims that their safeguards will be upheld through a “multi-layered approach,” including OpenAI personnel being kept in the loop about usage.
OpenAI clarified their agreement on domestic surveillance in the update on March 2 saying, “Throughout our discussions, the Department made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance.”
In the FAQ portion of OpenAI’s statement, the company explained why it believes it was able to come to an agreement with the Department of War when Anthropic was not. OpenAI said its agreement has more safeguards and provides better guarantees, though it cannot speak to why Anthropic was unable to close a deal.
OpenAI believes that many more AI companies should pursue deals and partnerships with the Department of War, arguing that “the US military absolutely needs strong AI models to support their mission, especially in the face of growing threats from potential adversaries.” When asked for comment on this quote, one Augsburg student said, “notice how they say ‘potential adversaries’ yeah, it’s because it’s fear-mongering. It always is. We aren’t in danger. We are the danger. This is dangerous. Anyone who is still using AI after shit like this is part of the problem. They are supporting the murder of innocent people under the idea of a threat.”
https://openai.com/index/our-agreement-with-the-department-of-war