NextFin News - Google has abruptly shuttered its "What People Suggest" search feature, a controversial tool that utilized artificial intelligence to surface crowdsourced medical advice from non-experts, marking a significant retreat in the tech giant’s high-stakes push into healthcare. The decision, confirmed by company spokespeople this week, follows a series of investigations and public backlash regarding the safety of AI-generated health summaries. While the company officially characterized the move as part of a "broader simplification" of its search interface, the timing suggests a defensive pivot as U.S. President Trump’s administration signals a more rigorous approach to AI safety and digital liability.
The removal of the feature, which often prioritized anecdotal experiences from platforms like Reddit over peer-reviewed medical literature, highlights the inherent tension in Google’s dual identity as a neutral information index and an active health advisor. Internal data and external audits had begun to paint a troubling picture: a January investigation by The Guardian revealed that AI Overviews were frequently generating misleading or outright dangerous health advice. In one widely cited instance, the system reportedly suggested that individuals could use non-toxic glue to keep cheese on pizza, an error trivial in itself but emblematic of a failure mode that becomes life-threatening if it recurs in dosage recommendations or emergency medical guidance.
Despite the retreat from crowdsourced advice, Google is simultaneously doubling down on its professional-grade health AI tools. The company is expanding its Med-Gemini models, which are specifically tuned for clinical reasoning and the interpretation of complex medical data. This strategic bifurcation—killing amateur-led features while bolstering expert-level systems—reflects a realization that the "move fast and break things" ethos of Silicon Valley is incompatible with the "do no harm" mandate of medicine. By distancing itself from the "What People Suggest" model, Google is attempting to insulate its core search business from the reputational and legal risks associated with amateur medical misinformation.
The financial stakes of this pivot are immense. The global healthcare AI market is projected to exceed $180 billion by 2030, and Google’s parent, Alphabet, is competing fiercely with Microsoft and Amazon for dominance in clinical documentation and diagnostic support. Karen DeSalvo, Google’s chief health officer, had previously championed the value of "hearing from others who have similar experiences," but that vision has collided with the reality of algorithmic hallucination. The company now finds itself in a delicate balancing act: it must provide the "helpful" answers users demand without assuming the liability of a licensed physician.
This shift also mirrors a broader regulatory cooling toward unbridled AI deployment in the United States. As the Trump administration emphasizes domestic technological supremacy, there is a parallel focus on ensuring that American AI platforms do not become vectors for public health crises. For Google, the removal of "What People Suggest" is less a surrender and more a tactical repositioning. The company is betting that its future in health lies not in being a digital town square for anecdotes, but in becoming the indispensable infrastructure for the modern hospital. The era of treating the internet as a collective doctor is ending, replaced by a more disciplined, albeit more closed, ecosystem of verified intelligence.
