Anthropic’s Rift with Pentagon Over Safeguards Could Impact DOE Labs
The ongoing dispute between Anthropic and the Department of War is raising questions about who is responsible for implementing safeguards on AI products used for military purposes, and what those safeguards should entail. The dispute is also impacting the Department of Energy, which is currently reviewing the use of Anthropic products in its national labs.
The rift between Anthropic and the DOW stems from a fundamental disagreement over AI safeguards. In short, the DOW demands that third-party AI models it uses ship with no pre-configured safeguards and that AI vendors agree to “any lawful use” of those models by the DOW.
Anthropic generally agrees to those terms. The company has worked with the US military and said it has never run into any ethical issues. However, the Claude-maker insists that the DOW carve out two exceptions in its contract: mass domestic surveillance and fully autonomous weapons.
“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” Anthropic CEO Dario Amodei wrote in a February 26 blog post. On fully autonomous weapons, he stated: “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
Claude has been banned for use within all US Government agencies (Source: Anthropic)
Anthropic’s stance on these items is non-negotiable, Amodei added. “It is the Department’s prerogative to select contractors most aligned with their vision,” he wrote. “We cannot in good conscience accede to their request.”
The next day, Secretary of War Pete Hegseth announced he would designate Anthropic as a supply chain risk, effectively ending the Pentagon’s contract with the company, which was worth up to $200 million. “Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic,” Hegseth wrote on X.
President Donald Trump also announced in a Truth Social post that he had directed all US federal agencies “to immediately cease all use of Anthropic’s technology,” adding that there will be a six-month phase-out period for agencies like the DOW.
Anthropic responded on March 9 by filing two lawsuits against the DOW, one in California and one in Washington, D.C. The company argued that its designation as a supply chain risk was unlawful and violated its free speech and due process rights. “Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term,” the company wrote.
Anthropic’s AI model, Claude, has been widely adopted by the US military as well as government scientific institutions. The military and national security agencies use Claude for intelligence analysis, modeling and simulation, operational planning, and cyber operations, among other tasks. The DOE national labs have also adopted Claude for scientific research in biology, life sciences, energy, and other areas, and the model is involved in DOE Genesis Mission projects.
However, it appears that the DOE has begun the process of eliminating Anthropic products from its national labs. In response to an HPCwire question, a DOE spokesperson stated:
“As directed by President Trump, the Department of Energy is reviewing all existing contracts and uses of Anthropic technology. The Department remains firmly committed to ensuring that the technology we employ serves the public interest, protects America’s energy and national security, and advances our mission.”
Anthropic’s Claude is one of the top foundation models in the world and has been widely adopted across a range of use cases. Benchmark tests show Claude outperforming competitors like Google’s Gemini and OpenAI’s GPT models in areas like text generation, programming, document analysis, and search. Claude Opus is considered one of the top reasoning models and is widely used by software engineers. Anthropic has also emphasized safety with Claude.
Generative AI is a new technology, and best practices are still being fleshed out. Experts are still wrestling with how and where to apply built-in checks on AI decision-making, commonly called guardrails or safeguards.
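To make the concept concrete, below is a minimal, hypothetical sketch of what a vendor-side safeguard can look like in code: a screen that refuses requests matching prohibited-use categories before they ever reach the model. The category names, keywords, and the call_model stub are illustrative assumptions, not Anthropic’s actual implementation; production guardrails typically rely on trained classifiers rather than keyword lists.

    # Hypothetical sketch of a pre-configured safeguard. Categories,
    # keywords, and the model call are illustrative stand-ins only.

    PROHIBITED_CATEGORIES = {
        "mass_domestic_surveillance": ["bulk domestic intercept", "track all citizens"],
        "fully_autonomous_weapons": ["fire without human approval", "autonomous kill decision"],
    }

    def check_safeguards(prompt: str) -> str | None:
        """Return the violated category name, or None if the prompt passes."""
        lowered = prompt.lower()
        for category, keywords in PROHIBITED_CATEGORIES.items():
            if any(keyword in lowered for keyword in keywords):
                return category
        return None

    def call_model(prompt: str) -> str:
        # Stand-in for a real model API call.
        return f"[model response to: {prompt}]"

    def guarded_completion(prompt: str) -> str:
        """Refuse prohibited requests; otherwise forward to the model."""
        violation = check_safeguards(prompt)
        if violation is not None:
            return f"Request refused: matches prohibited category '{violation}'."
        return call_model(prompt)

    if __name__ == "__main__":
        print(guarded_completion("Summarize this logistics report."))
        print(guarded_completion("Plan bulk domestic intercept of phone traffic."))

The dispute, in effect, is over who controls such a screen: in the DOW’s preferred arrangement, the vendor would ship the model with no such layer, leaving any checks to the government’s own doctrine and processes.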
Anthropic’s emphasis on safety may not mesh with emerging use cases, particularly within the US military. Amodei’s insistence that Anthropic, rather than the US Government, retain control over safeguards has clearly put the company at odds with the DOW.
While the AI community may be wrestling with how and where to apply safeguards, the US military is not, according to Ben Van Roo, the CEO and founder of AI platform firm Legion Intelligence (formerly Yurts).
AI can help defend against drone attacks (Anelo/Shutterstock)
In an interview with HPCwire, Van Roo said Anthropic displayed a basic misunderstanding of how the military employs technology to help automate or speed up decision-making, adding that we’re a long way from seeing Terminator-style killer robots unleashed on the battlefield.
“There’s military doctrine in how you take certain steps along the way. There’s very explicit tests in how you advance things like targeting using prioritization algorithms,” he said. “Military doctrine has existed for hundreds of years longer than Anthropic.”
Clearly, few would have an issue with AI playing a role in defensive technology. For instance, if 10,000 attack drones were inbound on San Francisco right now, “do you want a human deciding every single drone that gets attacked by a countermeasure?” Van Roo asked. “I would want to use any technology to help improve a missile intercept.”
The extent to which AI is used for more offensive purposes, such as the attack on Iranian military installations by the US and Israel that began February 28, is unknown. In any case, Van Roo’s basic calculus is the same: anything that can improve the quality of decisions should be used, and when it comes to AI specifically, the safeguards already exist in US military doctrine.
At the end of the day, the government needs “reliable vendors” that don’t insist on setting the rules for use, said Van Roo, who also bemoaned the “hype and hysteria” in Silicon Valley and the mainstream media around the story.
US Navy Admiral Brad Cooper, CENTCOM Commander for Operation Epic Fury
“We can’t walk into these scenarios without our eyes wide open with the technology that we’re creating,” Van Roo said. “It is the most disruptive, powerful technology in human history. Of course the US Government’s going to want to use it. I think there’s a perception that they’re using it without intention and without thoughtfulness.” That is not the case, he added.
In a video message posted to X on March 11, US Navy Admiral Brad Cooper, CENTCOM Commander for Operation Epic Fury, provided some insight into how the US is using AI in the war in Iran.
“Our warfighters are leveraging a variety of AI tools,” Cooper said. “These systems let us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.”
Editor’s note: This story was updated with a statement on AI use by Admiral Cooper.
This article first appeared on HPCwire.