An AI-powered feature that collected amateur health advice from internet communities and presented it to Google Search users has been discontinued. Called “What People Suggest,” the tool was designed to complement medical expert content with peer perspectives organized by AI. Three sources confirmed its removal, and a Google spokesperson provided a vague and disputed rationale.
Google introduced the feature at a health event in New York, where then-chief health officer Karen DeSalvo highlighted its potential to help people with conditions like arthritis learn from others with the same diagnosis. The AI system was designed to make community health knowledge more navigable and accessible. The feature was rolled out to mobile users in the United States.
The company stated that safety played no role in the decision to remove the feature, framing it instead as a simplification of the search interface. Yet the public announcement it cited for the change, a blog post from a Switzerland-based Google search advocate, made no mention of the feature at all. Critics were unimpressed by the lack of transparency.
The context includes a major investigation earlier this year that found Google’s AI Overviews were distributing false health information to two billion users every month. Google’s response, removing AI Overviews from some health queries, was characterized by health professionals as inadequate. Concerns about AI health content on Google’s platform remain live and unresolved.
The upcoming Google health event will offer a new opportunity for the company to reassert its commitment to responsible health AI. But meaningful reassurance will require more than a polished presentation: it will require honest engagement with what has gone wrong and concrete plans for how it will be fixed. The handling of "What People Suggest" indicates there is still considerable ground to cover.