OpenClaw AI assistant bug could expose sensitive data
A major security bug in the OpenClaw AI assistant could be used to exfiltrate access tokens and enable remote code execution on exposed systems.
In January 2026, engineer Chris Boyd's experiment with the assistant went wrong, flooding his contacts with unwanted messages and exposing deeper problems with the project's security.
Separately, researchers found more than 17,500 OpenClaw setups exposed on the public internet, many holding sensitive credentials for platforms such as Claude, OpenAI, and Google AI.
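Exposed setups like these are typically found by scanning for instances whose control port answers from the open internet. As a rough self-check, you can probe your own public address from an outside network; the snippet below is a minimal sketch (the host and port shown are placeholders, not OpenClaw's documented defaults).

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Run this from OUTSIDE your own network against your public IP.
    If your assistant's port answers, the instance is internet-exposed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder example values; substitute your own public IP and port.
# is_port_reachable("203.0.113.10", 18789)
```

If the check returns True from outside, put the service behind a firewall, a VPN, or at minimum an authenticated reverse proxy.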
Attackers could hijack an account almost instantly
With a single click on a malicious link, attackers could hijack an account almost instantly, no password required. From there they had full control: changing settings or running arbitrary code.
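The reason a single click can be enough is that a webpage you visit can silently send requests to a service running on your machine; if that service's control endpoint accepts commands without any credential, the attacker's page speaks for you. A standard defense is to require a per-install secret that a cross-site request cannot supply. The sketch below is illustrative only, not OpenClaw's actual fix: the header name, env var, and endpoint are hypothetical.

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-install secret; in practice generate a random value
# at install time rather than relying on a default.
API_TOKEN = os.environ.get("ASSISTANT_TOKEN", "change-me")

class ControlHandler(BaseHTTPRequestHandler):
    """A local control endpoint that refuses unauthenticated commands."""

    def do_POST(self):
        # A booby-trapped webpage can make the browser POST here, but it
        # cannot attach this custom header, so the hijack attempt fails.
        supplied = self.headers.get("X-Auth-Token", "")
        if not hmac.compare_digest(supplied, API_TOKEN):
            self.send_response(401)  # reject: no valid token
            self.end_headers()
            return
        self.send_response(200)      # authenticated caller: proceed
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):    # keep the sketch quiet
        pass

# To run: HTTPServer(("127.0.0.1", 8765), ControlHandler).serve_forever()
```

Binding to `127.0.0.1` rather than `0.0.0.0` also keeps the port off the public internet, which addresses the exposure problem described above.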
A fix was released at the end of January 2026, but the whole episode is a wake-up call: if you're using or building AI tools, double-check your security before something goes sideways.