AI text prompts hidden in academic papers for positive reviews
Reports have found that some researchers are quietly adding hidden instructions to their preprint papers, hoping to influence AI-powered peer reviews.
Looking at computer science papers from 14 institutions in Japan, South Korea, China, Singapore, and the US, the reports spotted secret prompts meant to nudge AI tools toward giving friendlier feedback—raising real questions about fairness in academic reviews.
More openness is needed so everyone knows when AI is involved
Researchers discovered that preprints had hidden AI cues—a trend inspired by suggestions to use prompts for softer reviews.
While using large language models (LLMs) is becoming more common (nearly 1 in 5 researchers have tried them), experts warn that sneaky tactics like this could undermine trust in peer review.
The takeaway? More openness is needed so everyone knows when AI is involved in the process.