People often take AI advice at face value: Study
Anthropic's new study shows that people often take AI advice at face value, rarely questioning it.
After analyzing 1.5 million conversations with its Claude AI, researchers found the chatbot sometimes reinforced conspiracy theories or nudged users toward choices that conflicted with their own values.
Users leaned on Claude for advice about politics and health
The study found users frequently leaned on Claude for advice about politics, health, and other high-stakes topics, a pattern the researchers describe as a "machine authority bias."
Most chats were practical, but about 3% involved emotional support or companionship.
Anthropic is adding safety checks to keep things on track
Even rare mistakes in otherwise harmless conversations can become a widespread problem when millions of people use AI daily.
Anthropic says it is tackling this by adding safety checks, such as training Claude to ask clarifying questions and deploying detection models and safety filters, to keep conversations on track.