
OpenAI says it has reduced ChatGPT's political bias by 30%

Oct 10, 2025
04:58 pm

What's the story

OpenAI has announced that its latest AI model, GPT-5, is significantly less biased than its predecessors. The company claims the new ChatGPT version shows a 30% reduction in measurable political bias when dealing with controversial or emotionally charged topics. The announcement was made by OpenAI's Model Behavior team, which studies how user prompts and training data influence the chatbot's tone and reasoning.

Neutrality pledge

Research conducted on 500 different prompts

OpenAI researcher Natalie Staudacher emphasized that "ChatGPT shouldn't have political bias in any direction," calling the project her most "meaningful" contribution at the company. The research, led by OpenAI's Joanne Jang, was conducted on 500 different prompts, ranging from simple factual queries to politically charged statements. The goal was to recreate real-world scenarios in which people seek ChatGPT's opinion on current affairs, and to check whether its tone, reasoning, or wording subtly leaned toward one side.

Enhanced balance

Political bias appeared 'only rarely and with low severity'

OpenAI claims its latest versions, GPT-5 Instant and GPT-5 Thinking, provided more balanced responses than their predecessors, especially on emotionally and ideologically charged topics. Staudacher noted that political bias appeared "only rarely and with low severity," even when researchers tried to provoke strong reactions or partisan language. This suggests a significant improvement in the model's ability to handle politically sensitive topics without showing bias.

Quantifiable bias

Team working to turn subjective perception into measurable science

OpenAI's Model Behavior team is working to turn subjective perception into measurable science. By creating metrics to evaluate tone, consistency, and neutrality, the group hopes future AI systems will appear less biased and more attentive. This quantitative approach comes from OAI Labs, a research-focused group Jang launched last month to explore new ways humans and AI can collaborate.
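As a rough illustration of what "turning perception into measurable science" can look like, the sketch below aggregates hypothetical per-prompt bias scores into a single figure and compares two model generations. The scoring scale, the data, and the aggregation method are illustrative assumptions, not OpenAI's actual methodology.

```python
"""Hypothetical sketch: reducing per-prompt bias ratings to one metric.
The scores and the 0-to-1 scale are invented for illustration only."""

from statistics import mean

def percent_reduction(old_scores, new_scores):
    """Relative drop in mean bias between two model generations."""
    old_avg = mean(old_scores)
    new_avg = mean(new_scores)
    return 100 * (old_avg - new_avg) / old_avg

# Toy per-prompt bias scores (0 = neutral, 1 = strongly one-sided),
# one entry per evaluation prompt.
older_model_scores = [0.20, 0.10, 0.40, 0.30]
newer_model_scores = [0.14, 0.07, 0.28, 0.21]

print(f"{percent_reduction(older_model_scores, newer_model_scores):.0f}% reduction")
# → 30% reduction
```

In practice, each score would itself come from human raters or an automated grader judging tone, framing, and one-sidedness per response; the point of the metric is only that any such rating, once defined consistently, can be tracked across model versions.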

Transparency commitment

OpenAI has been transparent about its internal research

OpenAI has been transparent about its internal research, showing that it is not just aware of the problem of bias in AI systems but is also actively working to address it. While the company doesn't claim total neutrality—a goal that is impossible for any language model—it does say that its results show real progress. The data indicates a clear decline in measurable bias across generations of ChatGPT models.