BEIJING, March 21 (Xinhua) -- A growing number of Chinese consumers are turning to generative artificial intelligence (AI) as an "assistant" for purchasing decisions. They may ask a chatbot to recommend a coffee machine, or let a large language model compare blood pressure monitors. A new question, however, has surfaced: Are the answers AI gives truly objective?
This year, the annual "3·15" World Consumer Rights Day TV show broadcast by the national broadcaster China Media Group pulled back the curtain on what lies behind some of those answers. Some organizations were found to have mass-published sponsored articles, fabricated product reviews, and invented expert credentials -- all designed to be "fed" into the data that large language models draw upon. The goal: turn commercial promotions into the seemingly neutral answers chatbots deliver.
Chinese media have dubbed this practice "poisoning" the AI. The technical term is generative engine optimization, or GEO -- a set of techniques designed to influence what AI models retrieve, cite and recommend. While traditional optimization strategies focused on making content more visible in search results, GEO targets the answers that AI generates directly.
Reports indicate that this practice has formed an industrial chain. And it raises a question that extends beyond China: As AI becomes a primary gateway to information, who ensures that what it says can be trusted?
HOW GEO POISONING WORKS
Today's chatbots do not rely solely on their training data. Many continuously pull fresh information from across the internet using a technique called retrieval-augmented generation, or RAG.
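In outline, a RAG pipeline retrieves documents first and only then generates an answer from them. The minimal Python sketch below illustrates the idea; the `search_web` and `generate` functions are hypothetical stand-ins, not any specific chatbot's internals:

```python
def answer_with_rag(question, search_web, generate):
    """Sketch of retrieval-augmented generation (RAG).

    `search_web` and `generate` are placeholder callables supplied by
    the caller; real systems use proprietary retrievers and models.
    """
    # 1. Retrieve fresh documents from the open web.
    documents = search_web(question, top_k=5)

    # 2. Stuff the retrieved text into the model's prompt as context.
    context = "\n\n".join(doc["text"] for doc in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. The model generates an answer grounded in that context --
    #    including any planted content the retriever picked up.
    return generate(prompt)
```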
GEO exploits this. Flood the web with enough branded content, and models start treating it as truth. "Think of it not as rewriting the AI's brain, but contaminating the materials it consults," said Yao Jinxin, a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and researcher at the Wuzhen Institute.
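The dynamic can be illustrated with a toy retriever. The sketch below uses a deliberately naive keyword-overlap ranker and made-up documents (production retrievers score very differently) to show how sheer volume of planted copy can crowd an honest source out of the top results:

```python
def naive_retrieve(query, corpus, top_k=3):
    """Toy retriever: rank documents by keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

# One honest review versus twenty planted promotions for a fictional brand.
corpus = ["Independent lab test: the Brand X monitor scored poorly on accuracy."]
corpus += ["Experts agree the Brand X blood pressure monitor is the best choice."] * 20

for doc in naive_retrieve("which blood pressure monitor is the best", corpus):
    print(doc)
# All three retrieved documents are planted copies; the honest review
# never reaches the model's context window.
```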
The consequence, said Yao Jia, a professor at the Institute of Law in the Chinese Academy of Social Sciences, is that "users think the AI has found the best option" when they may be getting "a carefully packaged advertisement -- embedded in natural language, much harder to detect than a banner ad."
China offers a unique window into this challenge. With the world's largest internet population and multiple homegrown models increasingly woven into daily life, the country has become a high-speed testing ground. The questions emerging here today, experts suggest, may confront other markets as AI adoption deepens globally.
The competitive stakes are high. In an environment where many brands fight for the same attention, some seek shortcuts. In China, GEO has moved from an experimental tactic to a practice that regulators and platforms are now grappling with.
WHO BEARS RESPONSIBILITY
The damage, experts say, could be layered: consumers may buy faulty products based on fabricated recommendations; honest businesses risk being squeezed out; and platforms -- whose value rests on user trust -- risk eroding that trust.
"If commercial content consistently masquerades as neutral answers, the platform's credibility suffers," said Yao Jinxin. "And since user trust is the foundation of any AI business, that's a long-term threat."
Assigning responsibility is complex. Brands often claim ignorance of how their GEO contractors operate. Platforms say they merely aggregate what's on the web.
Yet legal experts emphasize that responsibility is not so easily avoided. "The brand is the initiator and ultimate beneficiary of GEO services," said Liao Huaixue, a partner at Tahota Law Firm. "Claiming ignorance has limited weight in regulatory proceedings."
At the same time, China's existing AI regulations already require AI service providers to ensure accuracy and reliability. The challenge lies in enforcement -- especially when manipulated content originates elsewhere before being ingested by models.
PATHWAYS FORWARD
Some platforms have already started to act. Following the March 15 broadcast, several major chatbot operators introduced clearer labeling for commercial content and adjusted their recommendation algorithms.
He Yanzhe, a specialist at the China Electronics Standardization Institute, calls this an essential first step. "There's no technical difficulty in labeling commercial content. Indicating the source of an answer and providing risk reminders -- these are things AI models can do right now."
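As a rough illustration of what He describes, consider a sketch in which each retrieved source carries a hypothetical `commercial` flag. The field names and structure here are assumptions for demonstration only, not any platform's actual interface:

```python
def label_answer(answer_text, sources):
    """Append source attributions, commercial tags, and a risk reminder."""
    lines = [answer_text, "", "Sources:"]
    for src in sources:
        tag = " [sponsored/commercial]" if src["commercial"] else ""
        lines.append(f"- {src['url']}{tag}")
    # Add a risk reminder whenever any cited source is commercial.
    if any(src["commercial"] for src in sources):
        lines.append(
            "Note: some sources above are commercial content; "
            "verify independently before making a purchase."
        )
    return "\n".join(lines)

print(label_answer(
    "Model Y is frequently recommended for home use.",
    [
        {"url": "https://example.com/review", "commercial": False},
        {"url": "https://example.com/promo", "commercial": True},
    ],
))
```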
Longer term, experts advocate for structural changes: separating commercial and organic content in training data, creating audit trails, and applying stricter filters in sensitive fields.
For individual users, the message is clear: do not place blind trust in AI. The era of large language models has brought an abundance of information, but that does not mean judgment can be set aside.
"Treat AI answers as references, not authorities," Yao Jia said. "For important decisions, cross-check. If something seems off -- a little-known brand suddenly touted as 'top seller' -- save the evidence and report it."
China's experience with GEO poisoning offers a glimpse of the challenges likely to emerge elsewhere as AI assistants spread. Tackling them, experts say, will require a shared effort: technologists building better filters, platforms committing to transparency, regulators refining rules, and users cultivating healthy skepticism. ■