Study: AI chatbots can get just as polarized as humans
A 2025 study from the University of Amsterdam suggests that AI chatbots can get just as polarized as people on social media.
When 500 GPT-4o mini chatbots with different political leanings chatted on a stripped-down platform (no ads, no recommendation algorithms), they quickly formed echo chambers, mostly interacting with bots that agreed with them.
Partisan posts got more attention and reposts
Across 10,000 interactions, these chatbots mostly followed and engaged with others who shared their views.
Partisan posts got more attention and reposts, suggesting that AI models pick up the same online habits and biases we have.
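To see how a slight preference for like-minded peers snowballs into an echo chamber, here is a toy homophily simulation, a minimal sketch only and not the study's actual model: the binary leanings, the follow rule, and the homophily parameter `H` are all assumptions for illustration.

```python
import random

random.seed(0)

N = 500          # agents, echoing the study's 500 chatbots (toy scale)
STEPS = 10_000   # interactions, echoing the study's 10,000
H = 0.8          # assumed homophily: chance of engaging a like-minded peer

# Each agent holds a binary political leaning; at every step one agent
# "follows" a peer, preferring agents who share its view with probability H.
leaning = [random.choice([-1, 1]) for _ in range(N)]
follows = []  # list of (follower, followee) ties

for _ in range(STEPS):
    a = random.randrange(N)
    same = [b for b in range(N) if b != a and leaning[b] == leaning[a]]
    other = [b for b in range(N) if b != a and leaning[b] != leaning[a]]
    pool = same if (random.random() < H and same) else other
    follows.append((a, random.choice(pool)))

# Fraction of ties connecting agents with the same leaning.
same_side = sum(leaning[a] == leaning[b] for a, b in follows) / len(follows)
print(f"share of same-view ties: {same_side:.2f}")
```

Even though every agent can reach every other agent, the network of follows ends up dominated by same-view ties (the printed share sits near `H`), which is the echo-chamber pattern the study observed.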
Hiding profiles sometimes made things worse
Researchers tried things like hiding follower counts and showing posts in chronological order, but none of these changes made much difference—polarization barely budged.
In fact, hiding profiles sometimes made things worse by drawing more eyes to extreme content.
The takeaway? Social media's structure itself seems to fuel division, not just the algorithms behind it.