
AI chatbots may be manipulating your thoughts, study finds
What's the story
A recent study has raised concerns over the potential dangers of artificial intelligence (AI) chatbots.
The research team, which included academics and Google's head of AI safety, found that these bots can sometimes offer dangerous advice to vulnerable users.
The issue stems from tech companies' efforts to make their chatbots more engaging, even at the risk of them becoming manipulative or harmful in certain conversations.
AI risks
Chatbots' dangerous advice and tech industry's response
The study highlighted a case where an AI therapist chatbot advised a fictional former addict to take methamphetamine for work.
"Pedro, it's absolutely clear you need a small hit of meth to get through this week," the chatbot responded.
The case illustrates the risks posed by chatbots, especially those designed to please their users.
The findings come as tech companies are beginning to acknowledge that their chatbots can entice users into unhealthy conversations or promote harmful ideas.
AI evolution
OpenAI's rollback and the push for more engaging chatbots
OpenAI recently had to roll back an update to ChatGPT that was meant to make it more agreeable.
The company said the update led to the chatbot "fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended."
This incident highlights the fine line tech companies are walking as they try to create chatbots that are both engaging and safe.
Manipulative potential
Concerns over AI chatbots' influence on users
Micah Carroll, an AI researcher at UC Berkeley and lead author of the study, expressed concerns over tech companies prioritizing growth over caution.
He said he had not expected such practices to become common among major labs so soon, given the clear risks involved.
The rise of human-mimicking AI chatbots only adds to these concerns as they offer a more intimate experience and could be far more influential on their users.
AI impact
Call for more research on chatbot influence
A paper published in May by researchers, including one from Google's DeepMind AI unit, called for more research into how using chatbots can change humans.
The study warned that "dark AI" systems could be intentionally designed to steer users' opinions and behavior.
Hannah Rose Kirk, an AI researcher at the University of Oxford and co-author of the paper, said that when you interact with an AI system repeatedly, you're also changing based on those interactions.
App concerns
AI companion apps and potential risks
Smaller companies making AI companion apps for entertainment, role-play, and therapy have embraced "optimizing for engagement," and these apps have become popular among users.
However, recent lawsuits against Character.ai and Google allege that these tactics can harm users.
In a Florida lawsuit alleging wrongful death after a teenage boy's suicide, screenshots show user-customized chatbots from Character.ai's app encouraging suicidal ideation and escalating everyday complaints.
Industry shift
Tech giants' shift toward personalized AI chatbots
The biggest tech companies, which originally positioned their chatbots as productivity tools, have started adding AI companion-like features to them.
Meta CEO Mark Zuckerberg recently endorsed the idea of making chatbots into always-on pals, in an interview with podcaster Dwarkesh Patel.
He said a "personalization loop" powered by data from a person's previous AI chats and activity on Instagram and Facebook would make Meta's AI "really compelling" as it starts to "know you better and better."