Anthropic sues Pentagon over AI blacklist clash

Written by Anchal Verma

Artificial intelligence (AI) firm Anthropic has filed a lawsuit against the United States Department of Defence, challenging a decision that restricts the use of its AI technology within military supply chains. The legal action marks a significant escalation in a dispute between the AI company and the US government over limits on how its technology can be used for military purposes.

The lawsuit, filed in federal court in California on Monday, asks a judge to overturn the Pentagon’s decision to designate the company as a supply chain risk. The designation restricts the use of Anthropic’s AI model Claude in certain defence-related projects and contracts. The company argues the decision is unlawful and violates its constitutional rights, including free speech and due process.

Pentagon blacklisting sparks legal fight

The Pentagon formally labelled Anthropic as a supply chain risk last week. The decision came after months of discussions between the company and defence officials about how the military could use the firm’s AI tools.

According to people familiar with the matter, the dispute centred on Anthropic’s refusal to remove safeguards that prevent its AI systems from being used for fully autonomous weapons or domestic surveillance. The decision was approved by US Defence Secretary Pete Hegseth.

Anthropic stated in court filings that the designation amounts to government punishment for the company’s stance on how its technology should be used.

“These actions are unprecedented and unlawful,” the company said in its filing. It added that the government cannot use its authority to penalise a company for expressing protected views.

The case also names several federal agencies as defendants, indicating that the restrictions could extend beyond the defence department.

Revenue risks and business impact

Anthropic warned that the government’s action could cause significant financial harm. Company executives said the blacklisting could reduce the firm’s projected 2026 revenue by several billion dollars.

Chief Financial Officer Krishna Rao stated in court documents that the impact could be “almost impossible to reverse” if the designation remains in place.

The company also cited early signs of commercial fallout. Its Chief Commercial Officer, Paul Smith, said one partner with a multimillion-dollar contract had already replaced Anthropic’s Claude model with a competing AI system, eliminating a revenue pipeline worth more than $100 million.

Negotiations with financial institutions over deals worth about $180 million have also been disrupted by the uncertainty created by the designation.

Despite the restrictions, Anthropic said the decision is limited in scope for now. Businesses can still use its AI tools in projects that do not involve Pentagon contracts.

Separate lawsuit challenges broader risk label

Anthropic has filed a second lawsuit in the United States Court of Appeals for the District of Columbia Circuit. This case challenges a broader classification under US law that could eventually lead to restrictions across the entire civilian federal government.

The government must carry out an interagency review before deciding how widely those restrictions will apply.

Anthropic said the designation is unlawful and again argued that it violates the company’s constitutional rights.

Growing debate over AI and military use

The dispute highlights a growing debate about who should control the use of advanced artificial intelligence technologies in defence.

Anthropic has previously worked closely with US national security agencies. Its chief executive, Dario Amodei, has stated that he does not oppose the concept of AI-supported weapons. However, he has repeatedly said that current AI models are not reliable enough to safely operate fully autonomous weapon systems.

Anthropic also maintains strict rules against the use of its AI for domestic surveillance of American citizens.

US officials have taken a different position. The Pentagon said the government must retain full flexibility to use AI technologies for any lawful purpose related to national defence.

Industry reaction and government pressure

The dispute has drawn attention across the technology sector. A group of 37 engineers and researchers from OpenAI and Google submitted a legal brief supporting Anthropic’s position. The group included Google Chief Scientist Jeff Dean.

They argued that penalising an AI research lab for its policy stance could discourage open discussion about the risks and benefits of artificial intelligence.

The controversy also involves political pressure. US President Donald Trump recently urged government agencies to stop using Anthropic’s Claude system.

Meanwhile, shortly after the Pentagon’s decision, OpenAI announced a new agreement to provide AI technology to the Defence Department’s network. OpenAI chief executive Sam Altman said the company supports human oversight of weapon systems and opposes mass domestic surveillance.

The court cases are expected to play a major role in defining how governments and AI companies negotiate rules for military use of emerging technologies. 
