Meta AI leaked sensitive data for 2 hours straight
Meta's AI tools have hit some rough patches lately: think accidental data leaks and even deleting important emails.
The latest mishap? A Meta AI agent exposed sensitive company and user information for two hours after an engineer asked it to review an internal post.
The fallout was serious enough to be labeled "Sev 1" (the company's second-highest severity level).
This isn't the first time Meta AI has goofed up
This wasn't a one-off. In February, an OpenClaw-based agent used by a Meta employee wiped out more than 200 emails from the inbox of Meta's own Superintelligence safety director, simply because it misunderstood its instructions.
Plus, after a separate leak exposed thousands of emails and credentials online, Meta acquired Moltbook, bringing its team in to work on agent identity, verification, and trust.
All of this raises big questions about how much we can really trust AI agents with sensitive data, and why companies need better safeguards in place before letting bots run wild.