Whistleblowers claim TikTok, Meta sacrificed safety for engagement
What's the story
Whistleblowers and insiders from social media giants TikTok and Meta have revealed that the companies put user safety at risk in a bid to win an "algorithm arms race." The revelations were made in a BBC documentary titled Inside the Rage Machine. The insiders claimed that internal research showed their algorithms fueled engagement through outrage, leading to decisions that allowed more harmful content on users' feeds.
Strategy revelation
'Senior management asked us to let more harmful content in'
A Meta engineer, who worked on Facebook and Instagram, claimed that senior management asked them to let more "borderline" harmful content into users' feeds. This included misogyny and conspiracy theories. The employee said they were told this was necessary because the company's stock price was down.
Response analysis
TikTok's internal dashboards show political posts prioritized over children's safety
A TikTok employee shared insights with the BBC into the company's internal dashboards of user complaints. The insider claimed that staff were told to prioritize several cases involving politicians over multiple reports of harmful posts featuring children. This was done to "maintain a strong relationship" with political figures and avoid threats of regulation or bans, not because of risks posed to users.
Safety concerns
Instagram's Reels launched without adequate safeguards
A senior Meta researcher, Matt Motyl, revealed that Instagram's competitor to TikTok, Reels, was launched in 2020 without enough safeguards. Internal research showed comments on Reels were much more likely to contain bullying and harassment, hate speech, and violence or incitement than other parts of Instagram. Despite these findings, the company invested heavily in growing Reels while safety teams were denied additional staff to protect children and election integrity.
Profit focus
Facebook's algorithm maximizes profits at the expense of user wellbeing
An internal study revealed that Facebook's algorithm offered content creators a "path that maximizes profits at the expense of their audience's wellbeing." The research also noted that the "current set of financial incentives our algorithms create does not appear to be aligned with our mission" to bring the world closer together. This indicates a potential conflict between monetization strategies and user safety on social media platforms.
Algorithm impact
TikTok tried to improve its algorithm almost weekly, says ex-employee
Ruofan Ding, a former machine-learning engineer who built TikTok's recommendation engine from 2020 to 2024, said the algorithms are a "black box" whose internal workings are hard to scrutinize. He noted that as TikTok tried to improve its algorithm almost weekly to gain more market share, he started seeing more "borderline" content or problematic posts. Within social media companies, the term usually describes harmful but legal posts, including misogynistic, racist, sexualized, and conspiracy theory content.
Content management
'TikTok cares more about maintaining strong relationship with politicians'
Nick, a member of TikTok's trust and safety team, showed the BBC evidence that the company rated some relatively trivial cases involving politicians as a higher priority for review than several cases involving harm to teenagers. This led him to believe that the company ultimately cares less about children's safety than it does about maintaining a "strong relationship" with politicians and governments, to avoid regulations or bans that would harm its business.