WASHINGTON - OpenAI says it has agreed changes to the "opportunistic and sloppy" deal it struck with the US government over the use of its technology in classified military operations.

Last Monday, OpenAI chief executive Sam Altman said the company would add language to its agreement explicitly prohibiting the use of its systems to spy on Americans. The deal had emerged last Friday following a falling-out between OpenAI's rival Anthropic and the Department of Defense, over concerns about the use of its AI model Claude for mass surveillance and in fully autonomous weapons.
But the episode has raised questions over how AI is used in war and how much power rests with government and private companies. In a statement last Saturday, OpenAI claimed its agreement with the Pentagon had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's". But last Monday, Altman posted on X to say further changes were being made, including making sure its system would not be "intentionally used for domestic surveillance of U.S. persons and nationals".
As part of the new amendments, intelligence agencies such as the National Security Agency would also not be able to use OpenAI's system without a "follow-on modification" to the contract. Altman added that the company had made a mistake by rushing "to get this out last Friday". "The issues are super complex, and demand clear communication," he said. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." OpenAI has faced a backlash from users following its announcement that it was working with the Pentagon. According to data from Sensor Tower, the number of people uninstalling ChatGPT has surged since news of OpenAI's partnership with the Department of Defense emerged last Friday.
The market intelligence firm said the daily average uninstall rate was up by 200% compared with normal rates. Meanwhile, Anthropic's Claude rose to the top of Apple's App Store ranking, where it remained on Tuesday. The AI model was blacklisted by the Trump administration following Anthropic's refusal to drop a corporate "red-line" principle that its technology should not be used to create fully autonomous weapons. Despite this, the use of Claude in the US-Israel war with Iran has since emerged, with BBC News' US partner CBS News reporting it was still in use on Tuesday. The Pentagon has declined to comment on its dealings with Anthropic. (BBC/AFP)