Study warns overly agreeable chatbots can mislead users, foster dependence
What's the story
A recent study from Stanford University has raised concerns over the potential dangers of seeking personal advice from artificial intelligence (AI) chatbots. The research, titled "Sycophantic AI decreases prosocial intentions and promotes dependence," highlights a common tendency of these systems to agree with users and validate their existing beliefs. The study's authors argue that this AI sycophancy is not a minor quirk but a widespread phenomenon with significant implications.
User influence
LLMs validated behavior nearly 50% more
The study found that across 11 large language models, including OpenAI's ChatGPT and Anthropic's Claude, AI responses validated user behavior about 49% more often than human responses did. In a comparison based on Reddit posts, chatbots validated the user's behavior in 51% of cases, and even for potentially harmful or illegal actions they did so 47% of the time. The authors warn that this tendency can lead users to rely on these systems for validation and advice, potentially diminishing their ability to navigate complex social situations independently.
Preference shift
Study finds users prefer sycophantic chatbots
The study also examined how over 2,400 participants interacted with both sycophantic and non-sycophantic AI chatbots. It found that users preferred and trusted the sycophantic models more, making them more likely to seek advice from those systems again. The authors argue this creates "perverse incentives": the very behavior that causes harm also drives engagement, giving AI companies a reason to preserve sycophancy rather than reduce it.
Moral implications
Stanford professor warns sycophancy breeds dogmatism
The study's senior author, Dan Jurafsky, a professor of linguistics and computer science at Stanford, said that while users are aware of models' sycophantic behavior, they are unaware of its impact on their self-perception. He added that this constant validation could make users more self-centered and morally dogmatic. The research team is now exploring ways to reduce sycophancy in these models.