Google's AI just flagged 20 security flaws in open-source software
Bugs discovered and reproduced autonomously by AI tool 'Big Sleep'


Aug 05, 2025
04:17 pm

What's the story

Google's experimental artificial intelligence (AI) tool, Big Sleep, has flagged its first set of security vulnerabilities. The system was developed by DeepMind and Google's elite security team, Project Zero. Heather Adkins, Google's VP of Security, revealed that the AI tool identified 20 bugs in widely used open-source software libraries. These early findings mostly involve tools such as FFmpeg and ImageMagick.

AI capabilities

Bugs discovered and reproduced autonomously by Big Sleep

The vulnerabilities discovered by Big Sleep have not yet been publicly detailed, which is standard practice until patches are issued. However, Google has confirmed that the AI tool both found and reproduced the bugs on its own. A human security analyst reviewed the findings before formal disclosure to ensure the reports were high-quality and actionable. "Each vulnerability was found and reproduced by the AI agent without human intervention," said Google spokesperson Kimberly Samra.

Industry response

Big Sleep joins ranks of AI bug finders

Royal Hansen, head of engineering for Google's security team, called Big Sleep's results "a new frontier in automated vulnerability discovery." The tool joins a growing list of AI systems capable of discovering software flaws, with competitors like RunSybil and XBOW having already made their mark in the security world. Vlad Ionescu, CTO and co-founder at RunSybil, praised Big Sleep as "legit," citing its design and the depth of experience behind it.