
Google AI flags disease in brain part that doesn't exist
What's the story
Google's latest healthcare AI, Med-Gemini, has left the medical community baffled after it flagged a problem in a part of the brain that doesn't exist. The error first appeared in a research paper and blog post Google published earlier this year. It was only after neurologist Bryan Moore pointed out the mistake that Google corrected its blog post, and not without controversy over the nature of the blunder.
Error details
The baffling medical terminology mix-up
The AI flagged an issue in the "basilar ganglia," a term that doesn't exist in medical terminology. It likely meant the "basal ganglia," a real part of the brain involved in movement and emotion, but appears to have conflated it with the "basilar artery," a blood vessel at the base of the brain that is something entirely different. A mix-up like this could have serious implications in a medical context.
Company reaction
Google updated blog post but didn't comment on the change
After the mistake was pointed out, Google quietly updated its blog post with the correct term, "basal ganglia," but did not comment on the change. The research paper highlighting Med-Gemini's capabilities has not yet been amended. Google later described the error as a simple typo, but many doctors have called it an instance of AI hallucination, where an artificial intelligence system fabricates information and presents it as fact.
Hallucination impact
Alarming implications for patient health and safety
The incident has raised alarms over the reliability of AI systems like Med-Gemini in healthcare. Doctors have warned that even a minor error could lead to a completely different diagnosis or treatment plan. If physicians blindly trust these AI outputs, they may not even detect such errors. This is especially concerning as Google has already started testing this AI in real-life hospital settings.
Expert opinion
AI in healthcare: A tool with potential, but needs oversight
Despite the blunder, experts still believe that AI can be an asset in healthcare—but only with close supervision. They warn that these tools are often overconfident, even when they're wrong. This makes it all the more important for doctors to double-check everything. "These systems don't say 'I don't know.' They just say something that sounds right. That's the problem," said one radiologist.