Study finds AI chatbots wrong in over 80% of complex medical cases
A recent study found that popular AI chatbots are unreliable when giving medical advice.
When tested with real-life patient symptoms, these tools gave incorrect answers over 80% of the time in complex cases, and 40% of the time even when lab results were clear.
Researchers warn AI health advice unsafe
Researchers say these AI models tend to skip proper clinical reasoning and jump too quickly to conclusions.
Marc Succi of Massachusetts General Hospital put it simply: "Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment."
Still, an estimated 66 million Americans use AI for health advice, with some citing cost, time, or access barriers to traditional care.
But here's the catch: one in 10 users has received potentially unsafe advice from these tools, underscoring why strong oversight matters as more people turn to AI for health questions.