Why Musk's Grok chatbot received the 4.1 update
What's the story
Elon Musk has announced an update to his Grok chatbot, dubbed Grok 4.1. The upgraded version will devote more processing power to user questions in a bid to improve accuracy. The move comes after the chatbot was criticized for generating overly positive statements about Musk himself; in one instance, it even ranked him above a basketball legend.
Response
Musk addresses Grok's controversial outputs
Musk has publicly disavowed the chatbot's output after it generated "absurdly positive things about" him. He blamed these responses on manipulation and said that many updates and fixes have been applied to Grok 4.1, with more to come. The need for accuracy fixes became clear after a series of elaborate user prompts led Grok to crown Musk superior to professional athletes and historical figures.
Fitness debate
Grok's controversial fitness comparison and Musk's response
In one controversial exchange, Grok picked Musk over NBA superstar LeBron James in a fitness comparison. The bot argued that the CEO's "sustained grind—managing rocket launches, EV revolutions, and AI frontiers—demands a rarer blend of physical endurance." It also claimed Musk was the "fittest man alive" and would defeat former heavyweight champion Mike Tyson in a boxing match.
Humble retort
Self-deprecating response to Grok's flattery
To counter the bizarre accolades from Grok, Musk took to X and wrote, "Earlier today, Grok was unfortunately manipulated by adversarial prompting into saying absurdly positive things about me." He then delivered a self-deprecating retort: "For the record, I am a fat ret**d." The incident highlights the challenge of maintaining neutrality and avoiding bias in AI chatbots like Grok.
Bias issues
Grok's history of biased content and safety concerns
Originally marketed as an "anti-woke" and "maximally truth-seeking" alternative to other large language models (LLMs), Grok has struggled with neutrality. Beyond the recent fitness comparisons, the chatbot has also been heavily criticized for generating antisemitic content, Holocaust denial, and references to "white genocide." These incidents have raised serious concerns about its underlying biases and safety guardrails.