Google blocks world's first AI-created zero-day cyberattack
Google also found evidence of a "mass exploitation event"

May 12, 2026
10:32 am

What's the story

Google has thwarted a zero-day exploit developed using artificial intelligence (AI), the first time the tech giant has blocked an attack of this nature. The exploit, aimed at an unnamed open-source, web-based system administration tool, could have allowed cybercriminals to bypass two-factor authentication (2FA) protections. The threat was discovered by the company's Threat Intelligence Group (GTIG), which also found evidence of a "mass exploitation event."

AI involvement

Exploit was discovered in a Python script

The exploit was discovered in a Python script that showed signs of AI assistance, including a "hallucinated CVSS score" and the "structured, textbook" formatting typical of large language model (LLM) training data. The attack exploited a high-level semantic logic flaw: the developer had hardcoded a trust assumption into the platform's 2FA system.
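The report does not publish the vulnerable code, so the snippet below is only a minimal, hypothetical sketch of what a hardcoded trust assumption in a 2FA check can look like; the function name, the "trusted" address, and the bypass condition are illustrative assumptions, not details of the affected tool.

# Hypothetical illustration only -- the actual tool, names, and logic are
# not disclosed in the report. It sketches the general class of flaw:
# a 2FA check that is silently skipped when a hardcoded assumption holds.

def verify_login(user: dict, otp_code: str, request_ip: str) -> bool:
    """Return True if the login should be allowed."""
    # Flaw: requests from an address the developer assumed was "internal"
    # bypass the one-time-password check entirely.
    TRUSTED_INTERNAL_IP = "127.0.0.1"  # hardcoded trust assumption
    if request_ip == TRUSTED_INTERNAL_IP:
        return True  # 2FA skipped

    # Normal path: the submitted OTP must match the one issued to the user.
    return otp_code == user.get("expected_otp")


if __name__ == "__main__":
    user = {"name": "admin", "expected_otp": "492817"}

    # An attacker who can make a request appear to come from the trusted
    # address never has to present a valid one-time password.
    print(verify_login(user, otp_code="000000", request_ip="127.0.0.1"))    # True
    print(verify_login(user, otp_code="000000", request_ip="203.0.113.5"))  # False

Because code like this is syntactically valid and the weakness lies in the assumption itself rather than in any malformed input handling, it is the kind of bug that reads as a semantic logic flaw rather than a classic memory or injection vulnerability.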

Historic discovery

First AI-assisted attack

This is the first time Google has found evidence of an attack assisted by AI. However, the company's researchers have clarified that they "do not believe Gemini was used" in this case. While Google was able to "disrupt" this particular exploit, it warns that hackers are increasingly using AI to discover and exploit security vulnerabilities.

Dual threat

AI as a target for attackers

The GTIG report also highlights that AI is not just a tool for attackers but a target in its own right. The report states, "GTIG has observed adversaries increasingly target the integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors." In other words, attackers are going after the plug-ins, agent skills, and data connectors that give AI systems their capabilities, not just using AI to probe other systems.

Advanced strategies

Advanced tactics employed by hackers

The report also sheds light on some of the more advanced tactics hackers are employing. These include "persona-driven jailbreaking," in which attackers coax AI models into finding security vulnerabilities for them, and training models on entire repositories of vulnerability data. Cybercriminals are also leveraging OpenClaw in ways that suggest "an interest in refining AI-generated payloads within controlled settings to increase exploit reliability prior to deployment."
