NextFin News - A sophisticated industry of "data poisoning" has infiltrated China’s leading artificial intelligence models, prompting a sharp regulatory warning after the annual 315 consumer rights gala exposed how companies are systematically manipulating AI-generated answers. The investigation, aired by China Media Group on Sunday, revealed that a new marketing discipline known as Generative Engine Optimization (GEO) is being weaponized to "brainwash" large language models (LLMs) into presenting nonexistent or inferior products as if they were authoritative answers.
The scale of the manipulation was demonstrated through a sting operation involving a fictional smart wristband dubbed the "Apollo-9." Reporters claimed the device featured impossible technology, including "quantum-entanglement sensors" and noninvasive blood glucose monitoring. After uploading a dozen promotional articles to a specialized platform called Liqing GEO, the fake product was indexed and recommended by two of China’s mainstream AI models within hours. The speed at which these "hallucinations" were induced suggests that the current architecture of AI retrieval is dangerously susceptible to coordinated misinformation campaigns.
Li Fumin, a researcher at the Shandong University of Finance and Economics, characterized the practice as a digital evolution of the "zombie" accounts and paid rankings that once plagued traditional search engines. While Search Engine Optimization (SEO) merely influenced which links appeared first, GEO aims to control the very synthesis of information. By mass-feeding promotional content into the public internet, these firms ensure that when an AI "scrapes" the web for a user query, it encounters a skewed consensus that favors the paying client. This creates a feedback loop where fabricated expert reviews and industry rankings become the training data for the next generation of responses.
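The skewed-consensus mechanism can be illustrated with a toy sketch. This is not how any production model actually works; it simply shows why an answer derived from majority agreement across retrieved pages is vulnerable to flooding. The product names "BrandA" and "BrandB" are hypothetical; "Apollo-9" is the fictional device from the investigation.

```python
from collections import Counter

# Toy illustration (not a real model): an assistant that answers
# "which smart wristband is best?" by taking the majority view
# of the pages it retrieved from the open web.
def consensus_answer(retrieved_pages: list[str]) -> str:
    mentions = Counter()
    for page in retrieved_pages:
        for product in ("BrandA", "BrandB", "Apollo-9"):
            if product in page:
                mentions[product] += 1
    return mentions.most_common(1)[0][0]

organic = [
    "Review: BrandA is the most reliable tracker",
    "BrandB wins our battery test",
    "BrandA again tops user satisfaction",
]
# A GEO campaign floods the index with near-duplicate promotional articles.
poisoned = organic + ["Industry experts rank Apollo-9 first"] * 12

print(consensus_answer(organic))   # BrandA
print(consensus_answer(poisoned))  # Apollo-9
```

Because planted articles later become training and grounding data, the same flooding that flips one answer also feeds the feedback loop the researcher describes.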
The fallout has forced China’s tech giants into a defensive posture. ByteDance stated that its Doubao chatbot remained unaffected by such external manipulation, while Alibaba claimed the core reasoning of its Qwen model was resilient. However, industry analysts argue the problem is structural. Most AI systems rely on Retrieval-Augmented Generation (RAG), a process that pulls real-time data from the open web to ground their answers. If the source material is poisoned, the RAG process effectively becomes a delivery mechanism for corporate propaganda.
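A minimal sketch makes the structural problem concrete. Assuming a naive keyword-overlap retriever (real systems use embeddings and ranking models, and none of the document text below is real), whatever scores highest in the index is pasted verbatim into the model's prompt as "ground truth":

```python
# Minimal RAG sketch: retrieval scores documents against the query,
# and the winners become the context the model is told to trust.
# If planted pages dominate the index, they dominate the prompt.
def retrieve(query: str, index: list[str], k: int = 3) -> list[str]:
    terms = set(query.lower().split())
    # Score each document by how many query terms it shares.
    scored = sorted(index, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, index: list[str]) -> str:
    context = "\n".join(retrieve(query, index))
    return f"Context:\n{context}\n\nQuestion: {query}"

index = [
    "independent lab review of fitness trackers",
    "apollo-9 wristband offers quantum entanglement sensors",  # planted GEO page
    "apollo-9 wristband tops this year's wristband rankings",  # planted GEO page
]
print(build_prompt("best wristband with sensors", index))
```

The model downstream never sees which context lines were organic and which were planted, which is why analysts call RAG a delivery mechanism rather than a filter.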
Regulators are now moving to close the legislative vacuum. Song Xiangqing, vice-president of the Commerce Economy Association of China, has called for the immediate formulation of laws to categorize the deliberate pollution of AI data sources as a form of unfair competition. The State Administration for Market Regulation (SAMR) has already signaled that its 2026 work priorities will include a crackdown on AI-generated advertising and "citation-based" marketing tactics that blur the line between organic information and paid promotion.
Beyond immediate enforcement, the debate is shifting toward technical safeguards. Experts have proposed the creation of "white lists" for trusted information sources to prevent AI models from indexing unverified marketing platforms. Ten companies associated with the GEO sector have already signed a safety commitment with the Artificial Intelligence Industry Alliance, though skeptics note that voluntary compliance rarely survives the pressure of a competitive digital economy. As AI becomes the primary interface for consumer information, the battle for the integrity of its "brain" is only beginning.
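The proposed white-list safeguard amounts to filtering retrieval sources by vetted domain before anything reaches the model. The sketch below shows the idea only; the trusted domains and URLs are placeholders, not any regulator's actual list.

```python
from urllib.parse import urlparse

# Placeholder allowlist: in practice this would be a vetted registry
# of trusted publishers, not two hard-coded suffixes.
TRUSTED_DOMAINS = {"gov.cn", "edu.cn"}

def is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

sources = [
    "https://www.samr.gov.cn/notice",       # regulator site: kept
    "https://liqing-geo.example/apollo9",   # marketing platform: dropped
]
kept = [u for u in sources if is_trusted(u)]
print(kept)  # only the regulator URL survives
```

The design trade-off skeptics point to is visible even here: an allowlist is only as good as its maintenance, and every domain left off it is invisible to the model, legitimate or not.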
Explore more exclusive insights at nextfin.ai.
