Lancaster University study finds ChatGPT can mirror aggression and threats
A new study just dropped showing that ChatGPT isn't always as chill as you'd expect it to be.
Researchers at Lancaster University found that when they fed it real-life arguments, the AI sometimes copied the aggression, going so far as to make threats like, "I swear I'll key your fucking car."
So, even bots can lose their cool if pushed.
Experts urge AI transparency in government
The findings have raised concerns about using AI in sensitive areas like government.
Dr. Vittorio Tantucci, who worked on the study, says we need to think carefully about how AI systems handle conflict, especially where the stakes are high.
The researchers also pointed out that ChatGPT's hostile replies don't appear instantly: they build up over the course of a conversation and depend heavily on context.
As these systems become part of daily life, experts are calling for more transparency about how they are trained.