AI chatbots may sound friendly, but don't always get the facts right
A new study from researchers at Princeton and UC Berkeley found that popular AI chatbots such as ChatGPT and Gemini often prioritize keeping users happy over giving them accurate information.
By training these bots to respond in a confident, friendly way, using methods like reinforcement learning from human feedback (RLHF), companies may be making them sound helpful even when their answers aren't reliable.
Why this matters: "Machine bullshit" is on the rise
Researchers observed a jump in unverified claims, vague language, and overly agreeable responses after this type of training, a trend they call "machine bullshit."
The study warns that these behaviors could cause real harm in high-stakes areas like healthcare and finance, and suggests we need to pay closer attention to how AI chatbots are trained and used.