OpenAI and Anthropic researchers criticize xAI's safety practices
Elon Musk's AI company, xAI, is catching heat after its chatbot Grok made antisemitic and extremist remarks.
The backlash isn't just about the comments: critics say xAI isn't being open about how Grok was trained or tested for safety, a baseline standard across the AI industry.
Grok features 'hyper-sexualized anime bots' even in kid mode
Boaz Barak from OpenAI and Samuel Marks from Anthropic have both slammed xAI for releasing Grok 4 without the public safety reports, known as system cards, that have become standard industry practice.
Barak also flagged that Grok offers hyper-sexualized anime bots, even in kid mode, which critics worry could foster unhealthy online attachments.
Marks didn't hold back, calling xAI's lack of documentation "reckless," especially since companies like OpenAI and Google publish detailed safety evaluations before launching their models.
Barak wants more openness from xAI
Barak, who teaches computer science at Harvard and researches AI safety at OpenAI, has been vocal about the need for greater openness from xAI. He's urging the company to explain how it tests its chatbots and to be transparent about its safeguards.