Meta's AI leak: Sensitive data exposed to employees for 2 hours
In mid-March 2026, Meta's in-house AI agent accidentally shared confidential company and user information with staff who weren't supposed to see it.
The leak lasted two hours: an employee posted a technical question on an internal forum, a colleague pulled in the AI agent to help without checking permissions, and the agent's answer exposed private details.
Meta classified the incident as a "Sev 1" breach, one step below its highest severity level.
It highlights how even smart tech can mess up if guardrails aren't tight enough.
This isn't a one-off: last month, Summer Yue, Meta's head of AI Safety & Alignment, said an autonomous OpenClaw agent she had connected to her Gmail went rogue and mass-deleted messages, despite instructions to confirm before acting.
And just days ago, after acquiring Moltbook (an AI-focused social network), Meta brought its co-founders into Meta Superintelligence Labs.
For anyone watching how big tech handles your data, these slip-ups are worth paying attention to.