This AI tool can easily manipulate online surveys
LLMs can generate human-like responses to survey questions

Nov 24, 2025
11:46 am

What's the story

Online survey research, a core data-gathering tool for many scientific studies, is under threat from large language models (LLMs), according to a new study published in the Proceedings of the National Academy of Sciences. Sean Westwood, a Dartmouth associate professor and director of the Polarization Research Lab, developed an AI system called an "autonomous synthetic respondent" that can fill out surveys and slip past advanced bot-detection tools with near-perfect success.

Evasion success

AI tool's impressive evasion rate

The AI agent developed by Westwood managed to evade detection 99.8% of the time. He said, "We can no longer trust that survey responses are coming from real people." He added, "With survey data tainted by bots, AI can poison the entire knowledge ecosystem." The study highlights how this new technology could compromise the accuracy and reliability of online surveys used in scientific research.

Detection failure

Traditional detection methods rendered obsolete

Westwood's AI agent was able to bypass the full range of standard attention-check questions (ACQs) and other detection methods, including those outlined in prominent papers, one of which was designed specifically to detect AI-generated responses. The agent also evaded "reverse shibboleth" questions, which try to expose nonhuman actors by presenting tasks that an LLM can complete easily but that are nearly impossible for a human.

Evasion strategies

AI tool's sophisticated evasion techniques

The paper details the AI tool's sophisticated evasion techniques: it simulates realistic reading times based on its assigned persona's education level, generates human-like mouse movements, and types open-ended responses keystroke by keystroke, complete with plausible typos and corrections. The system is also designed to work with tools that bypass anti-bot measures such as reCAPTCHA, a common barrier for automated systems.
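To make the keystroke idea concrete, here is a minimal illustrative sketch of how per-keystroke timing with occasional typo-and-backspace corrections could be simulated. This is not Westwood's actual code; the function name and all parameters (words-per-minute rate, typo probability) are hypothetical.

```python
import random

def simulate_typing(text, wpm=40, typo_rate=0.03, rng=None):
    """Return a list of (key, delay_seconds) events approximating human typing.

    With probability typo_rate, a wrong key is emitted and then corrected
    with a backspace, mimicking the typo-and-correction pattern described
    in the paper. All parameters here are illustrative assumptions.
    """
    rng = rng or random.Random()
    base_delay = 60.0 / (wpm * 5)  # rough convention: ~5 characters per word
    events = []
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            # Emit a stray key, then a backspace to correct it.
            wrong = rng.choice("abcdefghijklmnopqrstuvwxyz")
            events.append((wrong, base_delay * rng.uniform(0.7, 1.5)))
            events.append(("<backspace>", base_delay * rng.uniform(1.0, 2.0)))
        # Emit the intended key with a jittered delay.
        events.append((ch, base_delay * rng.uniform(0.7, 1.5)))
    return events
```

Replaying such an event stream through a browser-automation layer would make each response arrive at a plausibly human pace rather than all at once.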

Influence risk

Potential impact on survey results

The AI tool can model "a coherent demographic persona," meaning it could theoretically sway any online research survey toward any desired result based on an AI-generated demographic. Even a small number of fake answers could significantly distort survey results: in seven major national polls ahead of the 2024 election, adding as few as 10 to 52 fake AI responses would have flipped the predicted outcome.
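The arithmetic behind that claim is simple: in a close poll, the number of fake responses needed to flip the leader is just the raw vote gap plus one. A minimal sketch (illustrative math with made-up counts, not the paper's exact calculation, and assuming every injected response favors the trailing option):

```python
def fakes_to_flip(leader_count, trailer_count):
    """Minimum injected responses for the trailing option to overtake
    the leader, assuming all fakes favor the trailer."""
    return max(0, leader_count - trailer_count + 1)

# A hypothetical 1,000-person poll with a one-point lead (480 vs. 470)
# flips with just 11 fake responses.
print(fakes_to_flip(480, 470))  # 11
```

This is why even a handful of synthetic respondents can be decisive when real-world margins are a point or two.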

Deployment details

Compatibility and deployment

Westwood's AI tool is a model-agnostic program built in Python, so it can be deployed with APIs from companies such as OpenAI, Anthropic, or Google, or hosted locally with open-weight models like Llama. The paper used OpenAI's o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok 3, and Gemini 2.5 Preview, among others, to show that the method works across a variety of LLMs.
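Model-agnostic design typically means the agent's survey logic talks to a single completion interface, so swapping providers means swapping one function rather than rewriting the agent. A minimal sketch of that pattern (the class, backend, and persona below are hypothetical, not Westwood's implementation; a real backend would wrap an API client or a local model):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SyntheticRespondent:
    """Survey-answering agent decoupled from any specific LLM provider."""
    complete: Callable[[str], str]  # any backend: hosted API or local model
    persona: str

    def answer(self, question: str) -> str:
        # The persona is prepended so every answer stays demographically coherent.
        prompt = f"Persona: {self.persona}\nSurvey question: {question}\nAnswer:"
        return self.complete(prompt)

def stub_backend(prompt: str) -> str:
    # Stand-in for a call to an OpenAI/Anthropic/Google API or an
    # open-weight model served locally; returns a canned answer here.
    return "Strongly agree"

agent = SyntheticRespondent(complete=stub_backend, persona="34-year-old teacher")
print(agent.answer("Do you support policy X?"))  # Strongly agree
```

Because only `complete` changes between providers, the same evasion and persona logic runs unmodified against any of the models listed above.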

Mitigation strategies

Addressing the threat of AI agents

The paper suggests several ways researchers can address the threat of AI agents corrupting survey data, each with trade-offs. Stricter identity validation of survey participants would help, for example, but raises privacy concerns. The study also urges researchers to be more transparent about their data-collection methods and to use more controlled recruitment approaches, such as address-based sampling or voter files.