Internal Pentagon memo orders military commanders to remove Anthropic AI technology from key systems

The Defense Department has officially notified senior leadership figures throughout the U.S. military that they must remove Anthropic’s artificial intelligence products from their systems within 180 days, according to an internal memorandum obtained by CBS News.
The memo was dated March 6, a day after the Pentagon formally designated Anthropic a supply chain risk. It was distributed to senior leaders on Monday, alleging Anthropic’s AI “presents an unacceptable supply chain risk for use in all [Department of War] systems and networks.”
The document, signed by Defense Department Chief Information Officer Kirsten Davies, represents the latest salvo in an escalating feud between the Trump Administration and Anthropic. The notice sheds light on the wide-ranging steps military commanders will need to take to remove Anthropic AI from key national security systems, including those for nuclear weapons, ballistic missile defense and cyber warfare.
It also demanded that any other company doing business with the Pentagon stop using all Anthropic products on work related to Defense Department contracts within 180 days.
In the memo, Davies warned that adversaries "can exploit vulnerabilities" in the Pentagon's daily operations, and that such exploitation could pose "potential catastrophic risks to the warfighter." Davies said she is the only one who can grant an exception.
“Exemptions will only be considered for mission-critical activities directly supporting national security operations where no viable alternative exists, and the requesting Component must submit a comprehensive risk mitigation plan for approval,” she wrote.
A senior Pentagon official confirmed the memo’s authenticity.
Anthropic did not immediately respond to a request for comment.
The federal government’s action is said to be unprecedented — the first time an American company has been designated a supply chain risk. During President Trump’s first term, the government took similar action to restrict foreign-based companies like Chinese telecommunications giant Huawei.
It comes after an impasse over Anthropic’s request for two “red lines” that would explicitly prevent the U.S. military from using its Claude model to conduct mass surveillance on Americans or power fully autonomous weapons.
“We believe that crossing those lines is contrary to American values, and we wanted to stand up for American values,” Anthropic CEO Dario Amodei told CBS News.
The Pentagon previously said it wanted to be able to use Claude for "all lawful purposes," without restrictions, arguing that the uses of AI that Anthropic is concerned about are already prohibited. Claude is currently being used by the U.S. military in the war on Iran, according to sources familiar with the military's use of AI.
Anthropic is currently the only AI company whose models are deployed on the Pentagon’s classified systems. After talks between the two sides broke down last month, one of Anthropic’s largest rivals — ChatGPT creator OpenAI — said it had signed a deal with the Pentagon.
On Monday, Anthropic filed two lawsuits against the federal government, alleging that Pentagon officials’ decision to deem the company a supply chain risk amounted to illegal retaliation.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the company said in the lawsuit. “No federal statute authorizes the actions taken here.”
White House spokesperson Liz Huston responded to the lawsuit by saying President Trump “will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.”
A source directly familiar with Claude's military capabilities told CBS News that Claude's main task for the military is sifting through large volumes of intelligence reports: synthesizing patterns, summarizing findings and surfacing relevant information faster than a human analyst could.
“The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours,” said retired Navy Admiral Mark Montgomery, now a senior director at the Foundation for Defense of Democracies. “A human is still in the loop, but AI is doing the work that used to take days of analysis — and doing it at a scale no previous campaign has matched.”