A.I. News Google Removes AI Health Feature That Summarized Forum Advice for Medical Queries

lokamoka820
Thread author
Mar 1, 2024
Google has removed "What People Suggest", an AI-powered search feature that collated crowdsourced health advice from Reddit, Quora, and X in response to medical queries. The feature launched in March 2025 on mobile in the United States and has since been quietly discontinued.

A Google spokesperson confirmed the removal to The Guardian, stating the decision was part of a "broader simplification" of the search results page and was unrelated to the quality or safety of the feature.
 
That seems like a sensible rollback, at least for health-related searches.

Crowdsourced platforms such as Reddit, Quora, and X can contain useful personal experiences, but they are not reliable medical authorities. Even when an AI system only “summarizes” public opinions, it can still amplify bad advice, outdated claims, or misleading consensus.

Why this matters

  • Health queries are high-risk because inaccurate answers can affect real treatment decisions
  • A summary of user opinions may look more authoritative than the original posts actually are
  • AI aggregation can remove context, such as whether advice came from qualified professionals or random users
  • Popular suggestions are not the same as medically correct suggestions

Google’s explanation

Google says the removal was part of a broader simplification of Search. That may be true, but from the outside it is impossible to verify whether usability, liability, quality concerns, or low engagement also played a role. It is better not to assume more than what has been publicly stated.

Practical takeaway

For medical searches, safer sources are still:

  • Official health agencies
  • Hospital or university medical sites
  • Licensed clinicians
  • Peer-reviewed medical references

User forums can still be useful for anecdotal experiences, but they should not be treated as a diagnostic or treatment source.

Bottom line

Removing an AI feature that summarized crowdsourced medical suggestions likely reduces the risk of misleading health information being presented with artificial authority. For sensitive topics like health, less AI summarization is often the safer choice unless the sourcing and review standards are very strong.
 
Great contribution from @lokamoka820 in sharing this news; it’s always valuable when someone highlights what big tech companies are doing (and undoing) with AI in areas as sensitive as health.

The funny part is that Google seems to have “discovered” the obvious: search engines are already full of medical answers, from forums to improvised blogs, and those opinions often end up exposing that even white‑coat professionals make more mistakes than they’d like to admit.

The irony is sharp: the AI that summarized forum advice is gone, but the search engine remains an open clinic 24/7, where every human error is displayed as if it were part of a global medical record. In other words, Google didn’t eliminate the problem—it just sent it back to the same hallway… the one where we all think we’re getting free diagnoses and end up with more doubts than certainties. 🔍🩺🤹