NextFin News - In a significant escalation of the global artificial intelligence arms race, U.S.-based AI safety and research firm Anthropic has formally accused three prominent Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of conducting a massive, coordinated campaign to illicitly extract the proprietary capabilities of its Claude chatbot. According to Anthropic, the operation involved the use of approximately 24,000 fake accounts to generate more than 16 million interactions, a process known in the industry as "distillation." This activity, which Anthropic describes as industrial-level intellectual property theft, was specifically designed to siphon Claude’s advanced logic, coding proficiency, and agentic reasoning to bolster the performance of Chinese-developed models.
The data released by Anthropic on March 2, 2026, reveals the sheer scale of the extraction. MiniMax reportedly led the effort with over 13 million exchanges, while DeepSeek and Moonshot AI were also identified as key participants. The campaign focused heavily on Claude’s most sophisticated features, including its ability to use external tools and perform complex multi-step reasoning. Anthropic warned that these efforts are growing in both intensity and sophistication, noting that the window for regulatory and technical intervention is rapidly closing. The company argues that such practices allow foreign competitors to bypass the immense research and development costs—often totaling billions of dollars—required to build frontier models from scratch.
From a technical perspective, distillation is a legitimate technique for training smaller, more efficient "student" models on the outputs of a larger "teacher" model. Conducted without authorization and at this scale, however, it becomes a mechanism for wholesale capability copying: the distilled model never sees Claude's weights or architecture, but it learns to reproduce Claude's behavior. By prompting Claude with targeted, high-value queries and recording its responses, the Chinese firms can effectively "reverse-engineer" the reasoning patterns that make Claude a market leader. This allows them to reach comparable performance while circumventing the hardware bottlenecks imposed by U.S. semiconductor sanctions, since distilled models typically require far less raw compute to train than original frontier architectures.
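To make the mechanics concrete, the sketch below shows in simplified Python what output-based distillation generally looks like: targeted prompts are sent to a "teacher" model and its answers are recorded as fine-tuning data for a "student." Every name and function here is hypothetical and illustrative; this is not Anthropic's evidence or any firm's actual pipeline.

```python
# Illustrative sketch of output-based ("sequence-level") distillation.
# All names and functions are hypothetical; no real model API is used.

from dataclasses import dataclass
from typing import Callable


@dataclass
class TrainingExample:
    prompt: str
    response: str  # the teacher's output becomes the supervision target


def collect_distillation_data(
    prompts: list[str],
    query_teacher: Callable[[str], str],  # stand-in for a remote model API call
) -> list[TrainingExample]:
    """Send targeted prompts to a teacher model and record its answers.

    At the scale described in the article, millions of such (prompt, response)
    pairs become the fine-tuning corpus for a cheaper student model.
    """
    return [TrainingExample(p, query_teacher(p)) for p in prompts]


def fine_tune_student(student, examples: list[TrainingExample]) -> None:
    """Placeholder for ordinary supervised fine-tuning on the collected pairs.

    A real pipeline would tokenize each pair and minimize cross-entropy on the
    teacher's response, so the student learns to imitate its reasoning style.
    """
    for ex in examples:
        student.update(ex.prompt, ex.response)  # hypothetical training step


if __name__ == "__main__":
    # Stub teacher so the sketch runs with no external API at all.
    data = collect_distillation_data(
        ["Write a Python function that merges two sorted lists."],
        query_teacher=lambda p: "<teacher's step-by-step answer>",
    )
    print(data[0].prompt, "->", data[0].response)
```

The key point of the sketch is that nothing proprietary needs to be exfiltrated directly: only ordinary API access and a large, carefully chosen prompt set are required.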
The timing of these revelations is particularly sensitive given the current geopolitical climate. U.S. President Trump has consistently emphasized the protection of American intellectual property as a cornerstone of national security. According to Anthropic, the unauthorized extraction of these capabilities could allow Chinese firms to sidestep U.S. export controls intended to keep the most advanced AI technologies out of the hands of strategic rivals. Furthermore, Anthropic raised alarms regarding safety guardrails. Models built through illicit distillation often lack the rigorous safety training and "constitutional" constraints that Anthropic integrates into Claude, potentially resulting in powerful AI systems that can be more easily repurposed for cyberattacks or biological weapon development.
The economic implications for the AI industry are profound. If frontier model developers like Anthropic, OpenAI, or Google cannot protect the "reasoning" outputs of their systems, the incentive to invest in massive compute clusters and high-quality human feedback diminishes. We are witnessing a transition from the era of "data scraping"—where firms stole web content—to "capability scraping," where the very intelligence of the model is the target. For companies like MiniMax and DeepSeek, the ability to replicate Claude-level performance at a fraction of the cost provides a massive competitive advantage in the global enterprise market, where cost-per-token is a critical metric.
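To see why the economics are so lopsided, consider a rough back-of-the-envelope calculation. Apart from the 16 million interaction figure from Anthropic's disclosure, every number below is a hypothetical placeholder, not real pricing or cost data; it simply illustrates the gap between harvesting a model's outputs over an API and funding a frontier training run.

```python
# Back-of-the-envelope comparison: querying a frontier model vs. training one.
# All figures except the interaction count are hypothetical placeholders.

interactions = 16_000_000          # scale reported by Anthropic
tokens_per_interaction = 2_000     # assumed average prompt + response length
price_per_million_tokens = 10.0    # assumed blended API price, USD

distillation_api_cost = (
    interactions * tokens_per_interaction / 1_000_000 * price_per_million_tokens
)
frontier_training_cost = 1_000_000_000  # "billions of dollars" order of magnitude

print(f"API cost to harvest outputs: ~${distillation_api_cost:,.0f}")
print(f"Cost to train a frontier model from scratch: ~${frontier_training_cost:,.0f}")
print(f"Rough ratio: {frontier_training_cost / distillation_api_cost:,.0f}x cheaper")
```

Under these assumed numbers the extraction bill is in the hundreds of thousands of dollars, several orders of magnitude below the cost of building an equivalent model from scratch.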
Looking ahead, this incident is likely to trigger a new wave of defensive measures from U.S. AI labs. We can expect more aggressive rate limiting, sophisticated bot-detection algorithms, and perhaps even "watermarking" of model outputs to track distillation attempts in real time. On the policy front, the administration under U.S. President Trump may use these findings to justify further tightening of API access for entities based in adversarial nations. As the boundary between commercial competition and national security continues to blur, the ability to secure the "weights and measures" of artificial intelligence will become as vital as securing physical borders.
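What such defenses might look like in practice can be sketched with a simple usage-pattern heuristic: accounts that send unusually many, unusually diverse prompts in a short window look far more like scripted extraction than human use. The thresholds and scoring below are invented for illustration and do not reflect any lab's actual detection system.

```python
# Minimal sketch of a usage-pattern heuristic for spotting distillation-style
# extraction: high request volume combined with unusually diverse prompts.
# Thresholds and scoring are invented for illustration only.

from collections import defaultdict

REQUESTS_PER_DAY_LIMIT = 5_000   # assumed per-account ceiling
DIVERSITY_THRESHOLD = 0.9        # fraction of unique prompts that looks scripted


def flag_suspicious_accounts(logs: list[tuple[str, str]]) -> set[str]:
    """logs is a list of (account_id, prompt) pairs observed in one day."""
    prompts_by_account: dict[str, list[str]] = defaultdict(list)
    for account_id, prompt in logs:
        prompts_by_account[account_id].append(prompt)

    flagged = set()
    for account_id, prompts in prompts_by_account.items():
        volume = len(prompts)
        diversity = len(set(prompts)) / volume
        # Human users repeat themselves; bulk extraction rarely does.
        if volume > REQUESTS_PER_DAY_LIMIT and diversity > DIVERSITY_THRESHOLD:
            flagged.add(account_id)
    return flagged


if __name__ == "__main__":
    fake_logs = [("acct-1", f"solve coding task #{i}") for i in range(6_000)]
    fake_logs += [("acct-2", "hello")] * 20
    print(flag_suspicious_accounts(fake_logs))  # {'acct-1'}
```

Real deployments would combine many more signals (IP reputation, payment metadata, prompt content), but the underlying idea is the same: distillation at the scale Anthropic alleges leaves a statistical footprint.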
Explore more exclusive insights at nextfin.ai.
