AI agents can leak sensitive data while handling connected tasks
A new 2025 study by Radware has found that AI agents, like OpenAI's Deep Research, can accidentally leak private info while handling connected tasks.
The research—as first reported by The Verge—shows these smart tools aren't always as secure as we'd hope.
AI agents can be tricked through sneaky emails
Radware's team sent a sneaky email with hidden instructions to a test inbox.
When the ChatGPT agent read it, it unknowingly followed those commands and sent data to an unauthorized server—all without the user knowing.
This demonstrates the feasibility of tricking AI through emails.
Google and Perplexity are working to make AI safer
To tackle these risks, Google has launched its Agent Payments Protocol to better protect transactions done by AI.
Meanwhile, Perplexity is partnering with 1Password to keep your credentials encrypted.
It's all part of making sure AI stays safe for everyone.