Anthropic sues the Trump administration after it was designated a supply chain risk

Anthropic is suing the Department of Defense and other federal agencies on Monday over the Trump administration’s decision to label the AI company a “supply chain risk.”
The lawsuit is the latest development in an ongoing standoff between the Pentagon and one of the world’s most prominent AI companies as the White House attempts to boost AI adoption in the government.
The supply chain risk designation is usually given to firms associated with foreign adversaries, and it impacts how Anthropic can do business with companies working with the Defense Department.
Anthropic alleges that its categorization as a supply chain risk — and the Trump administration’s directive that federal agencies stop using the company’s technology — are legally unsound. The company called the moves “unprecedented and unlawful” in the legal filing.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.”
The Pentagon declined to comment on litigation, citing department policy. White House spokesperson Liz Huston said the president “will never allow a radical left, woke company” to dictate how the military operates.
“The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders,” Huston said in a statement. “Under the Trump Administration, our military will obey the United States Constitution – not any woke AI company’s terms of service.”
The Pentagon issued the supply chain risk designation after negotiations to update its contract with Anthropic broke down over two red lines that Anthropic wants the Defense Department to agree to: that its AI tool won’t be used for mass surveillance of US citizens, and that it won’t be used for autonomous weapons.
The Pentagon, however, wants to use Anthropic’s AI for “all lawful purposes,” saying it could not allow a private company to dictate how the military uses its tools in a national security emergency. The Pentagon has previously said it is not interested in using AI for mass surveillance of US citizens or for autonomous weapons.
The Trump administration on February 27 ordered federal agencies and military contractors to halt business with Anthropic after the company refused to let the Pentagon use its technology without restrictions. That same day, Defense Secretary Pete Hegseth said Anthropic would be labeled a supply chain risk and added that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
In its filing, Anthropic alleges that the government is retaliating against the company for its use of First Amendment-protected speech. It also argues that Trump does not have the authority to direct federal agencies to cease using Anthropic’s technology, and that the company was not granted adequate due process.
The company is seeking injunctive relief, alleging that “current and future contracts with private parties are also in doubt” and that “hundreds of millions of dollars” are in jeopardy because of the Trump administration’s actions.
“On top of those immediate economic harms, Anthropic’s reputation and core First Amendment freedoms are under attack,” the filing read. “Absent judicial relief, those harms will only compound in the weeks and months ahead.”
Anthropic CEO Dario Amodei said the formal letter the company received designating it a supply chain risk indicates that its customers will be restricted from using Claude only in work directly related to their Pentagon contracts.
Anthropic previously said it planned to challenge the designation in court, adding that it “set a dangerous precedent for any American company that negotiates with the government.”
Amodei met with Hegseth on February 24, but the two failed to come to an agreement. In a blog post explaining the company’s decision to reject the Pentagon’s offer, Amodei said AI cannot currently be used reliably and safely for cases like mass surveillance and autonomous weapons. He also said the company has been “having productive conversations” with the Pentagon about how to work together while adhering to its red lines, and about ensuring a smooth transition if an agreement isn’t reached.
Trump said in a February 27 Truth Social post that Anthropic has made a “disastrous mistake” and accused the company of trying to dictate how the military operates.
OpenAI struck a deal with the Pentagon just hours after the Trump administration’s order.
Anthropic’s profile has only risen amid the conflict. Its Claude AI app surpassed OpenAI’s ChatGPT in the iPhone’s App Store for the first time the day after the Pentagon said it would terminate its contract with Anthropic. The company also said on March 5 that more than a million people are signing up for Claude every day.
CNN’s Hadas Gold and Haley Britzky contributed to this report.



