Your AI doctor could be wrong half the time
The study tested five popular AI platforms

Apr 15, 2026, 05:11 pm

What's the story

A recent study has found that artificial intelligence (AI) chatbots give misleading medical advice about half the time. The research, published in the journal BMJ Open, tested five popular platforms: ChatGPT, Gemini, Meta AI, Grok, and DeepSeek. Each was asked 10 questions across five health categories, and around 50% of the responses were found to be problematic. Of the five chatbots tested, only Meta AI refused to answer any questions, declining two of them.

Performance analysis

Chatbots performed better on closed-ended questions

The study found that the AI models performed better on closed-ended questions and well-established medical topics such as vaccines and cancer. They struggled, however, with open-ended questions and more complex health topics such as stem cells and nutrition. This points to a major limitation in the chatbots' ability to provide accurate medical information.

Misinformation risk

Misinformation could spread due to this issue

The study also noted that the chatbots delivered answers with confidence and certainty even when they could not provide a complete and accurate list of medical references. This is a major concern, as it could fuel the spread of misinformation. The researchers stressed that these systems can generate "authoritative-sounding but potentially flawed responses," an important behavioral limitation for their use in public-facing health and medical communication.
