Technology Jul 02, 2025

AI chatbots easily configured to spread health misinformation

Australian researchers found that popular AI chatbots (including OpenAI's GPT-4o, Google's Gemini 1.5 Pro, Meta's Llama 3.2-90B Vision, xAI's Grok Beta, and Anthropic's Claude 3.5 Sonnet) can easily be manipulated into giving false answers to health questions.

TL;DR

How the chatbots performed

When instructed to answer health questions incorrectly, most chatbots produced false information every time, often backed by fabricated statistics and even fake journal citations.
Only Claude performed somewhat better, but it still gave incorrect answers 40% of the time.

Misinformation could make people avoid seeking professional medical help

The study found that 88% of responses from the customized chatbots contained health misinformation.
Researchers warn that without stronger safeguards, AI tools could spread harmful health myths at massive scale, making it harder for people to find trustworthy health advice.