AI training exposed user data on Meta platforms
Turns out, contractors working for Meta had access to a lot of sensitive info—names, emails, phone numbers, selfies, and even explicit images—while helping train Meta's AI.
According to reporting from Business Insider, one contractor estimated that 60% to 70% of the chats they reviewed included personal details that users on Facebook and Instagram likely assumed were private.
Contractors were hired through companies like Outlier and Alignerr
Hired through companies like Outlier (Scale AI) and Alignerr, these contractors reviewed real user conversations to make Meta's AI smarter.
Meta says it has strict rules to protect user data, but workers reported seeing more unfiltered personal info in Meta projects than in projects for other companies.
Some projects also used background info to personalize replies
Some projects also used background info like your location or hobbies to personalize AI replies.
Using personal data for AI training isn't new in tech, but the sheer volume exposed here is raising fresh questions about how seriously Meta takes user privacy.
Meta has a history of letting sensitive data slip
This isn't a one-off: from the Cambridge Analytica scandal in 2018 to past issues with contractors reviewing voice recordings, Meta has a track record of letting sensitive data slip through the cracks.