
Researchers embed AI prompts in papers to influence peer reviews
What's the story
A recent investigation has revealed that some researchers are embedding hidden instructions into their preprint papers. The tactic is aimed at influencing artificial intelligence (AI)-driven peer reviews. The study, which looked at computer science papers from 14 institutions in Japan, South Korea, China, Singapore, and the US, found secret prompts designed to steer AI tools toward more favorable feedback. Most of these papers were computer science-related and had yet to be formally peer-reviewed.
AI influence
Trend inspired by suggestions to use prompts for softer reviews
The study found that preprints contained hidden cues for AI, a trend inspired by suggestions to use prompts for softer reviews. While the use of large language models (LLMs) is becoming more common (nearly 1 in 5 researchers have tried them), experts warn that tactics like this could undermine trust in peer review. The key takeaway here is that there needs to be more transparency about when and how AI is used in the process.
Ethical dilemmas
'Counter against lazy reviewers who use AI'
The discovery of hidden prompts has raised ethical concerns about the use of AI in academic evaluations. One professor involved in this research said these prompts are a "counter against 'lazy reviewers' who use AI" for their work. This comes as a survey by Nature found that nearly 20% of researchers have tried using LLMs to speed up their research process.
Review automation
Using an LLM to write a review is unethical
The use of AI tools in peer reviews has been a topic of debate. Timothee Poisot, a biodiversity academic at the University of Montreal, suspected that one of his manuscript reviews was "blatantly written by an LLM." He said, "Using an LLM to write a review is a sign that you want the recognition of the review without investing in the labor of the review."