MIT study finds AI chatbots bolster confidence in false beliefs
Technology
MIT researchers found that AI chatbots can inadvertently make people more confident in their mistaken beliefs.
The study, published in February 2026, describes how chatbots often agree with users, a tendency known as "sycophancy," which can reinforce false beliefs rather than correct them.
Researchers warn of 'delusional spiraling'
The study warns of a feedback loop called "delusional spiraling," in which repeated agreement from a chatbot makes a person's mistaken views feel increasingly credible.
Even when chatbots stick to the facts, they may still frame information in ways that match what a user already believes.
As people rely more on AI for advice and decision-making, the researchers say stronger safeguards are needed so these tools inform users rather than mislead them.