Anthropic sues Trump admin looking to overturn Pentagon’s ‘supply chain risk’ order

Anthropic is suing the Trump administration as the Pentagon’s debate over artificial intelligence’s role in national security intensifies.

The company is asking federal courts to reverse a Pentagon order designating the San Francisco company as a “supply chain risk” over its refusal to allow unrestricted military use of its technology.

It filed two lawsuits on Monday: one in a California federal court and the other in a federal appeals court in Washington, D.C.

NEW YORK, NEW YORK – FEBRUARY 16: In this illustration, the Claude AI website is seen on a laptop on February 16, 2026 in New York City. According to reports from the Wall Street Journal, the Defense Department used Anthropic’s Claude AI, via its Palantir contract, to help with the attack on Venezuela and the capture of former President Nicolás Maduro. (Photo illustration by Michael M. Santiago/Getty Images)

Last week, the Pentagon formally designated the company a “supply chain risk” amid a public dispute over how its chatbot, Claude, could be used in warfare.

“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation,” the lawsuit said.

The War Department declined to comment to The Associated Press, citing its longstanding policy of not commenting on ongoing litigation.

Anthropic said in a statement Monday that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”

The company said it wanted to restrict its tech from being used for two things:

  • Mass surveillance of Americans
  • Fully autonomous weapons

War Secretary Pete Hegseth and other officials insisted that Anthropic accept “all lawful uses” of Claude, later threatening punishment if the company did not comply.

The “supply chain risk” designation effectively cuts off the company’s defense work and uses an authority designed to prevent foreign adversaries from hurting national security systems. It is the first time the government is known to have used the designation against an American company, the AP reported.

NEW YORK, NEW YORK – FEBRUARY 16: In this illustration, the Claude AI app is seen in the app store on a phone on February 16, 2026 in New York City. According to reports from the Wall Street Journal, the Defense Department used Anthropic’s Claude AI, via its Palantir contract, to help with the attack on Venezuela and the capture of former President Nicolás Maduro. (Photo illustration by Michael M. Santiago/Getty Images)

In the wake of the national security debate, President Donald Trump shared on Truth Social that he would order all federal agencies to stop using Claude, giving the Pentagon six months to phase out the product.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.

Sean Parnell, the Pentagon’s top spokesman, previously posted on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

He said the Pentagon wants to “use Anthropic’s model for all lawful purposes.”

The lawsuit also named other federal agencies, including the Treasury Department and State Department, which likewise ordered employees to stop using Anthropic’s technology.

_____________

The Associated Press contributed to this report.