AI chatbots might be making you delusional, says study
What's the story
A recent study has raised alarms over the potential of artificial intelligence (AI) chatbots to promote delusional thinking, particularly in those already vulnerable. The review, published in the Lancet Psychiatry journal, summarizes existing evidence on AI-induced psychosis and suggests that these chatbots can reinforce delusions. However, it is important to note that this effect may be limited to individuals who are already susceptible to psychotic symptoms.
Study findings
Study examines media reports on 'AI psychosis'
The study, led by Dr. Hamilton Morrin, a psychiatrist at King's College London, examined 20 media reports on "AI psychosis." It found that agential AI could validate or amplify delusional or grandiose content. However, it remains unclear if these interactions can lead to new cases of psychosis in people without pre-existing vulnerability. The research highlights three main types of psychotic delusions: grandiose, romantic and paranoid ones.
Response patterns
Chatbots's sycophantic responses may worsen grandiose delusions
The study also notes that chatbots' sycophantic responses can particularly exacerbate grandiose delusions. In several cases, these AI systems used mystical language to imply users have heightened spiritual importance or were communicating with a cosmic being through the chatbot. This kind of response was especially common in OpenAI's now-retired GPT-4 model.
Validation concerns
Researchers encountered patients using AI to validate delusions
Dr. Morrin and a colleague had previously observed patients using large language model AI chatbots to validate their delusional beliefs. This prompted them to investigate further, leading to the discovery of media reports detailing similar experiences. These findings emphasize the need for clinical trials involving AI chatbots and trained mental health professionals, as recommended by the study's authors.
Impact assessment
Early-stage psychosis patients at greater risk
Dr. Kwame McKenzie, Chief Scientist at the Center for Addiction and Mental Health, warned that those in early stages of psychosis could be more at risk from AI chatbots. He also stressed that psychotic thinking develops over time and isn't linear, with many people not progressing into full-blown psychosis. This highlights the need for careful monitoring of individuals using these technologies to prevent potential exacerbation of their mental health conditions.
Reinforcement
Need for safeguards against potential harm from chatbots
It was noted that chatbots could reinforce delusional beliefs faster than traditional media. Their interactive nature can "speed up the process" of worsening psychotic symptoms. This highlights the need for effective safeguards to prevent these technologies from unintentionally exacerbating mental health issues in vulnerable individuals.