Anthropic Denies It Could Sabotage AI Tools During War
**US Military's Use of Generative AI Model Claude at Center of Heated Dispute**
The US military's use of Anthropic's generative AI model Claude is at the center of a contentious dispute between the Pentagon and the leading AI lab. In a court filing on Friday, Anthropic's head of public sector, Thiyagu Ramasamy, denied allegations from the Trump administration that the company could sabotage or otherwise tamper with its AI tools during war.
According to Ramasamy, Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations. He emphasized that Anthropic does not have the access required to disable the technology or alter the model's behavior before or during ongoing operations. This assertion is part of a broader court dispute between Anthropic and the Pentagon over limits on how the company's technology may be used for national security.
The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, as reported by WIRED. However, the government has labeled Anthropic a supply-chain risk, a designation that will prevent the Department of Defense from using the company's software, including through contractors, over the coming months. Other federal agencies have also abandoned Claude. Anthropic has filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it.
Government attorneys have argued that the Department of Defense "is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations." In response, Ramasamy denied that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates, emphasizing that the company maintains no back door or remote "kill switch" and that the technology simply does not function that way.
Anthropic executives maintain that the company does not want veto power over military tactical decisions. In a court filing, Sarah Heck, head of policy, wrote that Anthropic was willing to guarantee this in a contract proposed on March 4. "For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of Defense operations," Heck stated.
A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after. Meanwhile, customers have begun canceling deals, and the dispute between Anthropic and the Pentagon continues to escalate.