Anthropic misanthropic toward China’s AI labs • The Register

Having built a business by remixing content created by others, Anthropic worries that Chinese AI labs are stealing its data.
The US-based maker of Claude models on Monday accused China-based DeepSeek, Moonshot AI, and MiniMax of conducting “industrial-scale campaigns” to siphon knowledge from its models through a technique known as “distillation.”
Model distillation is a deep learning technique in which a large “teacher” model can be made to transfer its learned patterns to a smaller “student” model. It’s a form of model compression that ideally produces a smaller, more efficient model without significant performance loss. Useful for explainable AI – shining a light into black box algorithms – it’s also a handy way to copy a model.
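For readers unfamiliar with the technique, here is a toy sketch of the classic soft-label distillation loss popularized by Hinton and colleagues in 2015. It is pure Python with made-up logits, not Anthropic’s or anyone else’s production pipeline: the student is penalized for diverging from the teacher’s temperature-softened output distribution.

```python
# Toy sketch of soft-label knowledge distillation.
# Illustrative only: real training uses a deep learning framework
# and backpropagates this loss through the student model.
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures soften the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)  # student's prediction
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a single training example:
teacher = [3.1, 1.2, 0.3]
student = [2.0, 1.5, 0.5]
print(distillation_loss(teacher, student))
```

The loss is zero only when the student exactly reproduces the teacher’s distribution, which is why repeatedly querying a teacher model, as Anthropic alleges, yields training signal for a copycat.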
Anthropic, like its leading rivals, has been sued for alleged copyright infringement or unauthorized web scraping several times. Claims include: Bartz v. Anthropic; Carreyrou v. Anthropic; Concord Music Group, Inc. v. Anthropic; MacKinnon v. Anthropic (Canada); and Reddit, Inc. v. Anthropic.
While courts mull whether training AI models on copyrighted material without consent violates the law, Anthropic and its peers worry that Chinese companies are ripping them off.
DeepSeek, Moonshot AI, and MiniMax, the company said, have been using networks of fraudulent accounts to probe Claude models on a vast scale.
“These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions,” the company said in a blog post.
These attacks take the form of slightly varied prompts designed to elicit responses that can be used to train models. Anthropic refers to the distributed infrastructure used for model distillation as “hydra clusters,” though it failed to establish that the underlying technology differs enough from commercial proxy services to warrant the menacing multi-headed mythological reference.
Anthropic expressed concern that the unwelcome distillation of its models by foreign AI labs will enable authoritarian regimes to undertake cyberattacks, disinformation campaigns, and mass surveillance.
It’s unclear how this would be any different from the world we now inhabit. But the AI biz suggests that things would be worse still if these developers of distilled models open-source their work.
“If distilled models are open-sourced, this risk multiplies as these capabilities spread freely beyond any single government’s control,” the company said.
Two weeks ago, Anthropic’s arch-competitor OpenAI sent a memo [PDF] to the US House Select Committee on China warning that adversaries in China and, to a lesser extent, Russia have ramped up their efforts to raid frontier models.
“For example, Chinese actors have moved beyond Chain-of-Thought (CoT) extraction toward more sophisticated, multi-stage pipelines that blend synthetic-data generation, large-scale data cleaning, and reinforcement-style preference optimization,” OpenAI said. “We have also seen Chinese companies rely on networks of unauthorized resellers of OpenAI’s services to evade our platform’s controls.”
OpenAI specifically cited the predation of DeepSeek, warning that its models “lack meaningful guardrails against dangerous outputs in high-risk domains like chemistry and biology, or offer limited protections for copyrighted material.”
(We note that OpenAI is the defendant in “In re: OpenAI, Inc. Copyright Infringement Litigation,” a consolidation of 16 copyright lawsuits.)
OpenAI’s memo calls for the US to protect the national AI industry, and Anthropic’s blog post follows suit, arguing that foreign model makers threaten national security.
“Illicitly distilled models lack necessary safeguards, creating significant national security risks,” the company said. “Anthropic and other US companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities. Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
According to the Forecasting Research Institute’s fifth wave report of predictions by the Longitudinal Expert AI Panel (LEAP), released on Monday, “Experts and superforecasters expect the performance gap between US and Chinese AI models to narrow by 2031, with parity anticipated by 2041.”
DeepSeek, Moonshot, and MiniMax did not immediately respond to requests for comment. ®