AI chatbots can't replace doctors yet, warns Oxford study
What's the story
A recent study has warned that relying on artificial intelligence (AI) chatbots for medical advice is "dangerous." The research was led by experts from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford. It highlights the risks that AI's "tendency to provide inaccurate and inconsistent information" poses to patients.
Study findings
AI not ready to take on physician's role
Dr. Rebecca Payne, a GP and co-author of the study, emphasized that "despite all the hype, AI just isn't ready to take on the role of the physician." She stressed that patients should be aware of the potential dangers of seeking symptom-related information from large language models (LLMs), as these systems can provide incorrect diagnoses and fail to recognize when urgent medical attention is necessary.
Research approach
AI often provides a 'mix of good and bad information'
In the study, nearly 1,300 participants were asked to identify possible health conditions and recommend a course of action across different scenarios. Some used LLM-based chatbots to reach a potential diagnosis and decide on next steps, while others relied on traditional methods such as consulting a GP. The results showed that the AI often provided a "mix of good and bad information," which users found difficult to tell apart.
Tool risks
AI chatbots 'face challenges' in human interaction
The study found that while AI chatbots "excel at standardized tests of medical knowledge," their use as a medical tool could pose risks to real users seeking help with their own symptoms. Dr. Payne highlighted the challenge of creating AI systems that can genuinely support people in sensitive, high-stakes areas like health. Andrew Bean, the study's lead author from the Oxford Internet Institute, added that even top-performing LLMs face challenges when interacting with humans.