NextFin News - In a significant escalation of the global artificial intelligence arms race, OpenAI has formally accused the Chinese startup DeepSeek of utilizing sophisticated "distillation" techniques to extract the capabilities of leading American AI models. According to a memorandum sent to the House Select Committee on China on Thursday, February 12, 2026, OpenAI alleges that DeepSeek engaged in a sustained effort to "free-ride" on the massive research and development investments made by U.S. institutions. The memo, which was first reported by Bloomberg, claims that DeepSeek’s R1 model was developed by systematically querying American systems to replicate their reasoning patterns and output quality, thereby bypassing the immense computational costs typically associated with training frontier models from scratch.
The controversy centers on the practice of model distillation—a process where a smaller, more efficient "student" model is trained using the outputs of a larger, more capable "teacher" model. While distillation is a recognized academic technique, OpenAI contends that DeepSeek’s application of it violates terms of service and constitutes a form of industrial espionage. According to the memo, OpenAI’s internal investigations identified accounts linked to DeepSeek employees that accessed its models through third-party routers and employed "novel obfuscation techniques" to circumvent security guardrails designed to prevent such data harvesting. This development comes as U.S. President Trump’s administration continues to tighten export controls on high-end semiconductors, forcing Chinese firms to find algorithmic workarounds to remain competitive.
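The teacher–student training described above can be sketched in a few lines. The example below is a minimal illustration, not anyone's actual pipeline: it uses pure NumPy and made-up logits to compute the classic temperature-scaled distillation loss, in which the student is penalized for diverging from the teacher's softened output distribution.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, optionally softened."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A temperature above 1 exposes the teacher's "dark knowledge": the
    relative probabilities it assigns even to the answers it rejects.
    """
    p = softmax(teacher_logits, temperature)  # teacher: soft targets
    q = softmax(student_logits, temperature)  # student: predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: hypothetical next-token logits over a 4-token vocabulary.
teacher = np.array([4.0, 1.0, 0.5, -2.0])
student = np.array([2.0, 2.0, 0.0, -1.0])
loss = distillation_loss(teacher, student)

# Minimizing this loss (usually alongside a standard cross-entropy term)
# pulls the student toward the teacher's full output distribution, which
# is far more informative per example than a single correct label.
```

The loss is zero only when the student exactly reproduces the teacher's distribution, which is why distillation transfers capability so efficiently relative to training on raw text alone.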
The timing of these allegations is critical. Since the release of DeepSeek-R1 in early 2025, the Chinese firm has been hailed domestically as a symbol of China’s ability to achieve "Sputnik-level" breakthroughs despite U.S. sanctions. However, OpenAI argues that this success is built on a foundation of stolen intelligence. Congressman John Moolenaar, chairman of the House Select Committee on China, echoed these concerns, stating that the practice is part of a broader strategy to "steal, copy, and kill" American innovation. The memorandum further points out that while American firms like OpenAI and Anthropic have invested hundreds of billions of dollars in infrastructure, Chinese competitors are offering these "distilled" models for free or at a fraction of the cost, posing a direct existential threat to the U.S. AI business model.
From a technical perspective, the impact of distillation is profound. By using the outputs of GPT-4 or its successors as training data, a developer can effectively skip the expensive "pre-training" phase in which a model learns the basic structures of language and logic. Industry data suggests that training a frontier model can cost upwards of $1 billion in electricity and compute, yet a distilled model can reach roughly 90% of a frontier model's performance for less than 5% of that cost. This economic disparity is what OpenAI describes as an "unfair edge." Furthermore, OpenAI warns that when models are copied via distillation, the safety protocols and ethical alignment baked into the originals are often lost, potentially producing powerful tools that lack the necessary guardrails against misuse in high-risk fields like biochemistry.
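The output-based variant the memo describes, sometimes called sequence-level distillation, amounts to harvesting prompt/completion pairs from a large model and using them as ordinary supervised fine-tuning data. A minimal sketch, with `query_teacher` and `fine_tune_student` as hypothetical stand-ins rather than real API calls:

```python
# Sketch of sequence-level distillation: a teacher model's text outputs
# become the supervised training set for a cheaper student model.
# `query_teacher` is a hypothetical stand-in for a commercial model API.

def query_teacher(prompt: str) -> str:
    # Stand-in: in practice this would be a network call to the teacher.
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, completion) pairs from the teacher model."""
    return [(p, query_teacher(p)) for p in prompts]

prompts = [
    "Explain why the sky is blue.",
    "Prove that sqrt(2) is irrational.",
]
dataset = build_distillation_dataset(prompts)

# The student never touches the teacher's weights or pre-training corpus;
# it is fine-tuned directly on these harvested outputs, which is why the
# expensive pre-training phase can largely be skipped.
for prompt, completion in dataset:
    pass  # fine_tune_student(prompt, completion)  # hypothetical
```

This is also why providers watch for large-scale automated querying: the dataset itself, not the model weights, is what gets exfiltrated.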
The geopolitical implications are equally stark. U.S. President Trump has made technological sovereignty a cornerstone of his second term, and these allegations provide fresh ammunition for hawks in Washington seeking to further decouple the two economies. According to David Sacks, the White House AI director, there is "strong evidence" that DeepSeek’s efficiency gains are not merely the result of superior Chinese engineering but are directly tied to the extraction of American intellectual property. This narrative complicates the global perception of Chinese AI, shifting the focus from "innovation under pressure" to "sophisticated plagiarism."
Looking ahead, this conflict is likely to trigger a new wave of "defensive AI" development. OpenAI has already indicated it is developing more robust defenses to detect and block distillation attempts, but as the memo notes, these activities are becoming increasingly sophisticated and are often linked to state-affiliated actors in China and Russia. We can expect the U.S. government to consider new regulations that treat AI model weights and outputs as protected national assets, potentially leading to a "closed-loop" ecosystem where access to American AI is restricted to verified allies. As ByteDance and other Chinese giants continue to release viral models like Seedance 2.0, the battle over the "provenance of intelligence" will become the defining legal and economic struggle of the late 2020s.
Explore more exclusive insights at nextfin.ai.
