
AI can improve online political discussions with polite, evidence-based counterarguments
What's the story
A recent study published in the journal Science Advances has found that artificial intelligence (AI) can enhance online political discourse. The research showed that large language models (LLMs) can generate polite, evidence-based counterarguments, which not only improve the quality of discussions but also make participants more open to alternative viewpoints, according to Gregory Eady, an associate professor at the University of Copenhagen.
Moderation potential
It can help flag disrespectful language in social media posts
The study highlighted the potential of LLMs to provide "light-touch suggestions," such as flagging disrespectful language in social media posts. Eady suggested that these AI systems could be used to improve online discussions or even be integrated into school curricula. However, he also warned against using LLMs for heavy-handed regulation of political discourse, noting their effectiveness might vary across cultures and languages.
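As a rough illustration of what such a "light-touch" flag might look like (the study does not publish its tooling), the sketch below asks an LLM whether a draft post contains disrespectful language before it is shared. The model name, prompt wording, and helper function are assumptions, not the researchers' actual setup.

```python
# Illustrative sketch only; assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def flag_disrespectful(post: str) -> bool:
    """Ask the model whether a draft post contains disrespectful language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does the following social media "
                        "post contain disrespectful or uncivil language?"},
            {"role": "user", "content": post},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

if __name__ == "__main__":
    draft = "Only an idiot would believe that policy works."
    if flag_disrespectful(draft):
        print("Light-touch suggestion: consider rephrasing before posting.")
```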
Research methodology
Study involved nearly 3,000 participants from US and UK
The study involved nearly 3,000 participants from the US and UK, who were asked to express their views on a politically charged issue. These responses were then countered by ChatGPT, posing as a "fictitious social media user," which tailored its argument to the position and reasoning in each participant's text. The participants' replies indicated that an evidence-based counterargument increased the likelihood of a high-quality response by six percentage points.
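The paper's exact prompts and model settings are not reproduced here, but a tailored counterargument of this kind could, in principle, be generated along the lines of the sketch below. The model name, system prompt, and sample input are illustrative assumptions.

```python
# Illustrative sketch of generating a polite, evidence-based counterargument
# tailored to a participant's stated position; not the study's actual prompts.
from openai import OpenAI

client = OpenAI()

def counterargument(participant_text: str) -> str:
    """Generate a short, polite reply arguing the opposite of the given post."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model version
        messages=[
            {"role": "system",
             "content": "You are a social media user replying to the post below. "
                        "Identify its position and main reasons, then write a short, "
                        "polite counterargument that cites evidence for the opposing view."},
            {"role": "user", "content": participant_text},
        ],
    )
    return response.choices[0].message.content

# Example input (hypothetical participant statement)
print(counterargument("Raising the minimum wage will only destroy jobs."))
```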
Influence assessment
AI can enhance quality of online discussions
The study found that evidence-based counterarguments increased willingness to compromise by five percentage points and respectfulness by nine percentage points. However, it also noted that while participants became more open to alternative viewpoints, their political ideologies remained unchanged. This indicates that while AI can improve the quality of online discussions and make users more receptive to different perspectives, it does not necessarily lead to a shift in political beliefs.
Future prospects
Challenges in addressing 'partisan' nature of texts and responses
The study's authors emphasized that the potential for LLMs to moderate discussions may differ significantly across cultures and languages. They also acknowledged the challenge of handling the 'partisan' framing of texts and responses, which was geared to two-party systems like those in the US and UK. In more complex political landscapes, such as India's, with many parties and issues that would need to be contextualised before study, the approach may require some trial and error.