NextFin News - Google has abruptly terminated its "What People Suggest" AI search feature, a controversial experiment that distilled crowdsourced medical advice from laypeople into machine-generated summaries. The decision, confirmed on March 16, 2026, marks a significant retreat for the search giant as it struggles to balance the rapid deployment of generative AI with the life-and-death stakes of healthcare information. While a Google spokesperson claimed the move was part of a routine interface simplification and "had nothing to do with the quality or safety of the feature," the timing suggests a defensive pivot following months of intense scrutiny over medical misinformation.
The feature was originally launched with the ambition of "democratizing" healthcare by connecting users with the "lived experiences" of others. In practice, however, it often elevated unvetted anecdotes to the same visual prominence as clinical data. By using AI to scrape and summarize advice from strangers on public forums, Google effectively laundered amateur opinions through a high-authority interface, lending them an air of editorial credibility. This approach drew sharp criticism from the medical community, particularly after a January investigation by The Guardian revealed that AI Overviews were frequently surfacing misleading or dangerous health tips, placing vulnerable users at risk of self-diagnosing based on hallucinated or unqualified information.
The stakes are equally high in Washington, where U.S. President Trump’s administration is weighing new regulatory frameworks for AI in critical infrastructure. For Google, the liability of a "hallucinated" medical recommendation, such as suggesting a toxic home remedy for a serious ailment, represents a reputational and legal risk that far outweighs the engagement metrics of a new search tool. The removal of "What People Suggest" reflects a broader industry realization: while AI is excellent at summarizing movie reviews or coding scripts, its application in "Your Money or Your Life" (YMYL) categories demands a level of factual precision that current large language models cannot yet guarantee.
Market analysts view this retreat as a win for traditional medical authorities and a cautionary tale about applying Silicon Valley’s "move fast and break things" ethos to human health. The move also highlights a growing divide in how tech giants handle user-generated content. While Google continues to link to forums like Reddit, it has decided that using AI to synthesize that content into a definitive-looking answer is a bridge too far. The distinction is crucial: it shifts the burden of discernment back to the user rather than the algorithm, shielding the platform from the direct charge of practicing medicine without a license.
The fallout from this decision will likely ripple through the AI sector, forcing competitors like Microsoft and OpenAI to reconsider how they handle sensitive queries. As Google reverts to more traditional, source-heavy health results, the "What People Suggest" experiment may well be remembered as a moment of overreach. The company’s insistence that safety was not a factor rings hollow to many in the industry who have watched the platform struggle to moderate its AI-generated outputs. Ultimately, the pivot suggests that in the high-stakes world of healthcare, the authority of a medical degree still carries more weight than the aggregated whispers of the crowd.
Explore more exclusive insights at nextfin.ai.
