Why GPT-5 and other OpenAI chatbots confidently lie
OpenAI has just published research explaining why its chatbots, like GPT-5, sometimes confidently give wrong answers, what the industry calls "hallucinations."
It turns out these AIs are trained and graded in a way that rewards always answering, even when unsure, kind of like guessing on a test instead of leaving the answer blank.
Because most evaluations score only accuracy, a wrong guess costs nothing more than an honest "I don't know," so the grading itself encourages this bluffing.
How OpenAI plans to fix this issue
To tackle this, OpenAI wants to change how AIs are graded.
Right now, bots get no credit for admitting uncertainty, so guessing can only help their score, and they end up guessing.
By changing the scoring so that confident wrong answers cost more than abstaining or expressing uncertainty, OpenAI hopes future chatbots will be more reliable, especially for important stuff like health or money advice.
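To make the incentive concrete, here is a minimal sketch in Python (not OpenAI's actual benchmark code; the scoring values and the penalty of 1 point are assumptions for illustration) comparing the expected score of guessing versus saying "I don't know" under accuracy-only grading and under grading that penalizes confident wrong answers:

```python
# Toy illustration (not OpenAI's real metric): expected score of guessing
# versus abstaining under two grading schemes.

def expected_score_binary(p_correct: float, abstain: bool) -> float:
    """Accuracy-only grading: 1 point if correct, 0 otherwise.
    Abstaining ("I don't know") also scores 0, so guessing never hurts."""
    return 0.0 if abstain else p_correct


def expected_score_penalized(p_correct: float, abstain: bool,
                             penalty: float = 1.0) -> float:
    """Grading that charges a penalty for a wrong answer but not for
    abstaining, so guessing only pays off when confidence is high."""
    if abstain:
        return 0.0
    return p_correct - (1.0 - p_correct) * penalty


if __name__ == "__main__":
    for p in (0.2, 0.5, 0.8):
        print(f"confidence {p:.0%}:")
        print(f"  accuracy-only  guess={expected_score_binary(p, False):+.2f}"
              f"  abstain={expected_score_binary(p, True):+.2f}")
        print(f"  penalized      guess={expected_score_penalized(p, False):+.2f}"
              f"  abstain={expected_score_penalized(p, True):+.2f}")
```

Under accuracy-only grading, guessing always matches or beats abstaining, even at 20% confidence; once wrong answers carry a penalty, abstaining becomes the better move whenever the model's confidence is low.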