Grok leak shows why you shouldn't share personal info with AI

Recently, a major privacy lapse hit Elon Musk's AI chatbot, Grok.
Because the chatbot's "share" button generated publicly accessible links, over 370,000 user conversations (some just everyday chats, others requests for illegal information like making drugs, bombs, or malware) ended up indexed on Google and other search engines for anyone to see.

Personal details like names, passwords leaked

It wasn't just odd or risky questions that leaked: personal details such as names, passwords, and even medical information were also exposed.
That's worrying because most people assume their AI chats are private, even when they discuss personal struggles or health.

ChatGPT had a similar leak before

Grok isn't alone here. ChatGPT suffered a similar leak earlier, when shared conversations became searchable, showing that these platforms need much stronger privacy protections.
If you use AI chatbots for anything sensitive, it's worth thinking twice until companies step up their privacy game.