Claude Hits No. 1 on App Store As ChatGPT Users Defect to Anthropic

While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.
Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed, with President Donald Trump ordering federal agencies to phase out the company's technology.
Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of Defense to deploy AI models in its classified network.
The agreement has left some loyal ChatGPT users uneasy about OpenAI's ambitions, prompting online debates about the ethical implications — and some say they are defecting to Anthropic's rival chatbot, Claude.
As of 6:38 p.m. ET on Saturday, Claude ranked number one among the most downloaded productivity apps on Apple’s App Store.
Converts have taken to social media to share screenshots documenting their switch.
Pop musician Katy Perry wrote that she was “done” on X, alongside a screenshot of Claude’s pricing page, with a red heart around the $20-per-month “Pro” plan.
Another X user, Adam Lyttle, wrote “Made the switch,” alongside a screenshot of his email inbox with a receipt from Anthropic and cancellation confirmation from OpenAI.
On Reddit’s ChatGPT subreddit, dozens of users say they’ve deleted their accounts and are urging others to do the same.
“Cancel ChatGPT” has become a common refrain online, while some users have taken a more personal tone, saying Altman’s move “crossed the line.”
Not all AI users see the agreement as a dealbreaker, however.
In one Reddit thread, several commenters said the news does not affect their choice of AI model, arguing that Anthropic’s work with Palantir raises similar concerns. In November 2024, Anthropic, Palantir, and Amazon Web Services struck an agreement to provide US intelligence and defense agencies access to Claude models.
After Secretary of Defense Pete Hegseth said he would designate Anthropic as a “supply chain risk to national security,” Anthropic said it would “challenge any supply chain risk designation in court.”
In his Friday post, Altman said the Department of Defense had agreed with two of OpenAI’s safety principles.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
By Saturday afternoon, OpenAI published a more detailed description of its contract with the Department of Defense, including the specific language it used surrounding the use of its models for surveillance and autonomous weapons.
On the topic of autonomous weapons, OpenAI said:
The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
On the topic of mass surveillance, OpenAI said:
The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.
While some chatbot users suggested it’s all fair in business, war, and federal procurement, others suggested the Pentagon’s stance may have handed Anthropic a public relations win.
X user Tae Kim joked that Hegseth might need a new title: “Secretary Hegseth Chief of Claude Marketing.”