Meta AI fixes privacy flaw exposing user chats
Meta just patched a serious bug in its AI chatbot that let people view other users' private prompts and the AI's responses to them.
Ethical hacker Sandeep Hodkasia found the issue back in December 2024, and Meta fixed it about a month later.
Hodkasia got a $10,000 reward for flagging it.
How the bug allowed users to access private chats
The glitch happened because each prompt and its AI-generated response was assigned a simple, sequential numeric ID.
By changing that number in the requests their browser sent, anyone could pull up another user's conversation, with no special skills needed.
Meta's servers never checked whether the requester actually had permission to view that chat, which made it easy to snoop on individuals or scrape conversations in bulk, as the sketch below illustrates.
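This class of flaw is commonly called an insecure direct object reference (IDOR). Here is a minimal Python sketch of the pattern and the kind of ownership check that fixes it; the endpoint names, IDs, and data are hypothetical, not Meta's actual code.

```python
# Minimal sketch of an insecure direct object reference (IDOR).
# All names, IDs, and data below are hypothetical illustrations.

CHATS = {
    1001: {"owner": "alice", "prompt": "Draft my resignation letter"},
    1002: {"owner": "bob",   "prompt": "Summarize my medical results"},
}

def get_chat_vulnerable(chat_id: int, requesting_user: str) -> dict:
    """Returns any chat whose numeric ID the caller supplies.
    No ownership check, so simply incrementing the ID exposes other users' chats."""
    return CHATS[chat_id]

def get_chat_fixed(chat_id: int, requesting_user: str) -> dict:
    """Returns the chat only if the requester actually owns it."""
    chat = CHATS[chat_id]
    if chat["owner"] != requesting_user:
        raise PermissionError("not authorized to view this chat")
    return chat

if __name__ == "__main__":
    # "alice" can read bob's chat just by changing 1001 to 1002.
    print(get_chat_vulnerable(1002, "alice"))   # leaks bob's prompt
    try:
        get_chat_fixed(1002, "alice")
    except PermissionError as err:
        print("blocked:", err)                  # authorization check stops the leak
```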
Even after the fix, chat data still appears publicly
Even with this fix, privacy on Meta's AI is still shaky.
Sometimes user chats show up publicly in the "Discover" feed tied to real profiles, and confusing sharing settings have led people to accidentally reveal sensitive info.
Meta says chats are private unless shared, but unclear controls mean mistakes keep happening, which raises bigger questions about how safe our conversations really are on AI platforms like this.