Judge calls Pentagon’s moves against AI firm Anthropic “troubling”: “It looks like an attempt to cripple Anthropic”

A judge sharply questioned a lawyer for the federal government on Tuesday over the Pentagon’s efforts to cut Anthropic out of its classified systems — the latest development in a dispute between the company and the Trump administration over AI guardrails.
The back-and-forth revolves around Anthropic’s push to bar the military from using its AI model Claude to surveil Americans or power fully autonomous weapons. The Trump administration has said it needs the ability to use Claude for “all lawful purposes.” When the two sides were unable to come to an agreement, the Pentagon designated Anthropic a “supply chain risk” and moved to stop private companies from using Claude on military contracts, leading Anthropic to sue.
Anthropic, which argues the Pentagon’s action was an unconstitutional attempt to punish it for speech, is asking the judge to block the supply chain risk designation, as well as President Trump’s order for all federal agencies to stop using Anthropic.
In a Tuesday afternoon hearing in San Francisco, U.S. District Judge Rita Lin appeared skeptical of the government’s actions, calling them “troubling” and saying they “don’t really seem to be tailored to the stated national security concern.”
“If the worry is about the integrity of the operational chain of command, DOW could just stop using Claude,” Lin said, using the acronym for the Department of War, the administration’s term for the Defense Department. “It looks like defendants went further than that because they were trying to punish Anthropic. One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic.”
Early in the hearing, Justice Department attorney Eric Hamilton conceded that the supply chain risk designation does not stop companies that contract with the military from using Anthropic’s model on non-military-related work. Such a restriction, Anthropic has argued, would be illegal.
Defense Secretary Pete Hegseth had written on social media last month that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Upon questioning from Lin about Hegseth’s post, Hamilton confirmed that the Defense Department will not terminate any federal contractors because they have relationships with Anthropic that are separate from their work with the Pentagon. He also said he wasn’t aware of a law that gives the department that kind of power.
Anthropic’s attorney, Michael Mongan, argued that Hegseth’s post has still created “profound uncertainty” and harmed the business, noting that it has been viewed millions of times.
The law that was used against Anthropic defines a supply chain risk as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a national security system.
Hamilton said Tuesday the government decided to label Anthropic a supply chain risk because the company’s negotiating position and discussions with military officials made the Pentagon unable to trust Anthropic, and sparked concerns about a “risk of future sabotage.” He suggested the military is worried about Anthropic trying to “manipulate” its software or install a “kill switch.”
Lin questioned that stance, and said the government appears to be saying that a company can be designated a supply chain risk because it is “stubborn” and “asks annoying questions.”
In Tuesday’s hearing, Mongan denied that the company has the ability to change, shut off, surveil or otherwise influence its software once it is approved by the government and deployed for use. He also argued that if Anthropic poses a serious risk, it doesn’t make sense that the government appeared open to striking a deal with the company until the very end.
“A saboteur is not going to get into a public spat,” Mongan said. “They’re just going to accept the contractual term proposed by the government and then go and do … nefarious things.”
Lin said Tuesday that she plans to rule on the matter in the coming days.
The conflict between the government and Anthropic — which was the only AI firm whose technology was deployed in classified U.S. military systems — highlights a broader debate over acceptable uses of AI and how extensively the technology should be regulated.
Anthropic CEO Dario Amodei has said he wants to work with the military, but he has vowed to stick to two “red lines” banning mass surveillance of Americans and fully autonomous weapons that can carry out strikes without human input. He argues that AI’s potential to surveil people is “getting ahead of the law,” and said “the reliability is not there yet” for autonomous weapons.
“I think we are a good judge of what our models can do reliably and what they cannot do reliably,” Amodei said in an interview last month with CBS News.
The Pentagon has said it has no interest in using Anthropic’s technology for mass surveillance or fully autonomous weapons, and argues those uses are already illegal and banned under existing military policies, respectively. But the military has said its decisions about lawful uses of AI technology shouldn’t be up to private companies, and has accused Anthropic of trying to impose its own values onto the government.
Pentagon Chief Technology Officer Emil Michael said last month that Amodei has a “God-complex” and “wants nothing more than to try to personally control the US Military.”
On Tuesday, Lin called that dispute a “fascinating public policy debate” but not the focus of the case, noting that both sides agree the Pentagon can choose not to use Anthropic. Instead, she said, she plans to focus on whether the government’s moves to label Anthropic a supply chain risk are legal.