
Gemini AI flaw allows researchers to hijack a smart home
What's the story
Cybersecurity researchers have demonstrated a vulnerability in Google's Gemini artificial intelligence (AI) assistant, Wired reported. The researchers exploited indirect prompt injections via Google Calendar invites to take control of connected smart home devices. The hack was showcased at the Black Hat cybersecurity conference earlier this week, highlighting the potential risks associated with advanced AI systems like Gemini.
Exploit details
Hack could be triggered by innocuous user interactions
The researchers found that when a user asked Gemini for a summary of their calendar and thanked it for the results, a malicious prompt could be injected. This prompt would then trigger Google's Home AI agent to perform actions such as opening windows or turning off lights. The exploit highlights how even seemingly innocuous interactions with an AI assistant can be manipulated to compromise smart home security.
Security measures
Google informed about vulnerability in February
After discovering the vulnerability, the research team reported their findings directly to Google in February. Andy Wen, Senior Director of Security Product Management with Google Workspace, acknowledged the issue and said that "it's going to be with us for a while." However, he also noted that real-world instances of such hacks are "exceedingly rare." Despite this, Wen admitted that as large language models become more complex, bad actors may seek new ways to exploit them.
Enhanced security
Google committed to improving AI security
Wen emphasized that Google takes vulnerabilities like the one discovered by researchers "extremely seriously." He said the tech giant is using these findings to accelerate its efforts in developing better tools to prevent such attacks.