4 min read | Updated on February 27, 2026, 09:33 IST
SUMMARY
US-based AI firm Anthropic has refused to remove key safeguards from its AI systems despite mounting pressure from the US Department of Defense.

Anthropic has reportedly sought assurances against uses such as mass surveillance or autonomous lethal decision-making. Image: Shutterstock
US artificial intelligence company Anthropic has said it will not remove certain safeguards on its AI models despite pressure from the US Department of Defense, escalating a dispute over the use of advanced AI systems in military and intelligence operations.
In a statement on Friday, Anthropic CEO Dario Amodei said the company remains committed to supporting US national security but would not drop the restrictions it places on how its technology can be used.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies,” Amodei wrote. But he said the company “cannot in good conscience accede” to demands that it allow “any lawful use” of its technology without restrictions.
Anthropic said its AI model Claude is extensively deployed across the US Department of Defense and other national security agencies for intelligence analysis, modelling and simulation, operational planning and cyber operations.
The company said it was the first frontier AI firm to deploy models in the US government’s classified networks and at the National Laboratories.
But tensions escalated after Pentagon officials indicated they would contract only with AI firms that agree to unrestricted use of their systems for any lawful military purpose.
In a January speech, Defense Secretary Pete Hegseth said the Pentagon would not use AI models that constrain lawful military operations, adding that “Department of War AI will not be woke. It will work for us.”
The company, however, identified two use cases that it said were outside the bounds of what today’s AI systems can safely and reliably do: mass domestic surveillance and fully autonomous weapons.
Anthropic said it supports the use of AI for lawful foreign intelligence and counterintelligence missions but opposes what it described as “mass domestic surveillance.”
It also said that fully autonomous weapons systems are not yet reliable enough to be deployed without proper oversight and safeguards.
"We will not knowingly provide a product that puts America’s warfighters and civilians at risk," Amodei said. "We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer."
While partially autonomous systems are already used in conflicts such as Ukraine, Anthropic said fully autonomous weapons require safeguards and oversight that “don’t exist today.”
According to US media reports, Hegseth had given Anthropic until Friday to provide the US military unrestricted use of its AI technology or risk losing its $200 million contract and being effectively blacklisted from future government work.
Amodei confirmed that the Pentagon has threatened to remove the company from its systems, designate it a “supply chain risk” and potentially invoke the Defense Production Act if it refuses to lift its safeguards.
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” he said.
The dispute comes months after the Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities aimed at advancing US national security.
The confrontation reportedly intensified after the US military used Claude during a January operation to capture former Venezuelan President Nicolás Maduro, though Anthropic said it had not discussed the model’s use in specific operations with the Department of Defense.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said.
Meanwhile, Anthropic has removed a key self-imposed commitment to pause development of more powerful models and replaced it with a more flexible, nonbinding structure.
In a blog post this week, Anthropic released Version 3.0 of its Responsible Scaling Policy (RSP), acknowledging that elements of its two-year-old framework could hinder its ability to compete in a rapidly evolving AI market.
Under its earlier policy, Anthropic had committed to pausing the training of more capable models if their abilities outstripped the company’s capacity to ensure they were safe and controllable.
The company argued that unilateral pauses by “responsible” developers, while competitors press ahead, could “result in a world that is less safe.”
“The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit,” the policy document stated.