
New patent suggests Gemini could activate when phone nears your face
What's the story
Google is working on new technology to summon its AI assistant, Gemini, without hotwords or button presses. The development was reported by Android Headlines, citing a recent patent filed by the tech giant. The proposed system would use hardware already present in smartphones, the touchscreen's capacitive sensor grid, to detect when a user's face is near and trigger Gemini automatically.
Technical details
How the new method works
The capacitive sensor grid in modern smartphones detects touches by measuring changes in an electric field. According to the patent, these same sensors can also detect nearby objects, such as a user's face or hand, without any physical contact. When the phone is brought close to a user's mouth or face, the screen's sensor grid registers a distinct change in the field's pattern. This pattern is interpreted as "the user's face is near," and Gemini is triggered automatically.
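The patent does not publish an algorithm, but the idea can be sketched roughly: a fingertip touch produces a strong, localized spike in the capacitance grid, while a hovering face produces a weak signal spread across many cells. The thresholds and classification rule below are purely illustrative assumptions, not Google's method.

```python
def classify_frame(grid, touch_level=50, hover_level=5):
    """Classify one frame of capacitance deltas (arbitrary units).

    Assumption: a touch shows as a strong, localized peak; a nearby
    face shows as a weak signal covering most of the grid.
    """
    cells = [v for row in grid for v in row]
    peak = max(cells)
    active = sum(1 for v in cells if v >= hover_level)
    if peak >= touch_level:
        return "touch"
    # Weak signal across most of the grid -> likely a face hovering close,
    # which in the patented system would wake the assistant.
    if active >= 0.6 * len(cells):
        return "face_near"
    return "idle"

# Broad, weak pattern across the whole grid (face hovering):
face_frame = [[8, 9, 10], [9, 11, 10], [8, 9, 9]]
# Strong, localized spike (fingertip touch):
touch_frame = [[0, 0, 0], [0, 80, 2], [0, 1, 0]]

print(classify_frame(face_frame))   # face_near
print(classify_frame(touch_frame))  # touch
```

A real implementation would work on raw sensor frames in firmware and likely use a trained model rather than fixed thresholds; this sketch only shows why the two signatures are distinguishable at all.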
User benefits
Benefits of the new approach
This approach to activating Gemini would be especially useful where traditional methods fall short, such as when a user is wearing a mask or is in a noisy environment. It would also be efficient, since it relies on low-power capacitive sensors that wouldn't drain the phone's battery much. The report further suggests this activation method could become smarter and more accurate over time.
Future prospects
No word on when it will be available
Despite the promising nature of this technology, there is no word on when it might roll out to Android devices. The report is based solely on a patent, which offers little clarity about whether or when the feature will appear in future smartphones. For now, many phones rely on a dedicated AI key or customizable button to trigger Gemini manually.