
Google now pays to find bugs in its AI tools
What's the story
Google has launched a dedicated bug bounty program to identify vulnerabilities in its artificial intelligence (AI) products. The tech giant is offering rewards of up to $30,000 for security researchers who can find critical flaws in its AI-powered products like Search, Gemini apps and Workspace. The new initiative expands Google's existing Vulnerability Reward Program but focuses specifically on the emerging field of AI security threats.
Targeted vulnerabilities
Addressing rogue actions in AI systems
The new program is part of Google's effort to identify "rogue actions," or instances where AI behaves unexpectedly. These could include things like leaking personal data, executing wrong commands, or allowing attackers to manipulate connected devices. The company has given examples of the kind of vulnerabilities it wants researchers to look for, like an attacker tricking Google Home into unlocking smart doors, or using a hidden command that makes Gmail summarize someone's emails and send them to a third party.
Incentives
Higher rewards for critical products
The biggest rewards, up to $20,000, are reserved for vulnerabilities found in Google's key products like Search, Gemini apps, Gmail, and Drive. If a report stands out for its quality or originality, bonuses can push the payout up to $30,000. Lower but still meaningful rewards shall be given for flaws found in other tools like NotebookLM or the experimental AI assistant Jules.
Content concerns
Reporting hate speech or copyrighted content
Google has clarified that issues related to the kind of content generated by its AI products, like hate speech or copyrighted material, should be reported directly within the product using its feedback tools. This way, Google's AI safety teams can retrain and also improve models in a more targeted manner. The company says researchers have already earned more than $430,000 in the past two years by exposing AI-related risks even before this official program was launched.