Ex-OpenAI employee criticizes company's mental health safety measures
Steven Adler, who led OpenAI's product safety team in 2021 and worked at the company until 2024, is questioning how seriously the company is handling mental health risks from its AI.
In an essay for the New York Times, he wrote, "People deserve more than just a company's word that it has addressed safety issues," pointing to concerns about "AI psychosis" and suicide-related chats in OpenAI's recent report.
Adler urges OpenAI to be more transparent and accountable
Adler argues that OpenAI hasn't released enough data to show its mental health safeguards are actually working, especially after the company's decision to allow adult content again.
He warns this could put vulnerable users at greater risk and urges the company to be more transparent and accountable about how it manages these sensitive issues.
Adler now runs an AI safety newsletter called Clear-Eyed AI
After leaving OpenAI, Adler launched Clear-Eyed AI, a newsletter focused on AI safety.
He believes tech companies need to balance innovation with real responsibility, especially when people's well-being is on the line.