How Google's AI Overviews are putting your health at risk
What's the story
Google is facing criticism for downplaying safety warnings attached to its AI-generated medical advice. The tech giant's AI Overviews, which appear above search results for sensitive health queries, carry disclaimers meant to prompt users to seek professional help rather than rely solely on the system's summaries. However, a recent investigation by The Guardian found that these disclaimers are not prominently displayed when users first encounter the medical advice.
Limited visibility
Disclaimers appear only after users request more information
Google's disclaimers appear only when users request additional health information by clicking a button labeled "Show more." Even then, the safety labels sit below all of the extra AI-generated medical advice, in a smaller, lighter font. The disclaimer reads: "This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes."
Company statement
Google's defense fails to quell concerns
In response to the criticism, a Google spokesperson said it is inaccurate to suggest that AI Overviews don't encourage people to seek professional medical advice, adding that "in addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself when appropriate." The defense has not eased the concerns of AI experts and patient advocates, who argue that disclaimers should be prominently displayed the moment users are first presented with medical advice.
Expert concerns
AI expert warns of dangers in healthcare context
Pat Pataranutaporn, a technologist and AI researcher at MIT, warned that the absence of disclaimers when users are first served medical information creates several critical dangers. He said even today's most advanced AI models still hallucinate misinformation or exhibit sycophantic behavior, prioritizing user satisfaction over accuracy. In healthcare contexts this can be genuinely dangerous, as users may not provide all the necessary context or may ask the wrong questions because they have misread their own symptoms.
Design criticism
Criticism extends to design choices
Gina Neff, an AI professor at Queen Mary University of London, criticized Google's design for prioritizing speed over accuracy, saying this trade-off produces mistakes in health information that can be dangerous.