Google Unveils New AI Features in Gemini Live
Innovative integration of AI allows real-time screen reading and live video interactions.
Introduction to New AI Capabilities
Google has begun rolling out new artificial intelligence features in its Gemini Live platform, enabling users to interact with the AI in real time through their devices. Google spokesperson Alex Joseph confirmed the rollout to The Verge.
Features Overview
Screen Reading Ability
One of the standout features allows Gemini to interpret and respond to information displayed on a user’s screen. According to a recent report, the functionality first appeared on a Xiaomi smartphone, with a user sharing a demonstration video of Gemini’s screen-reading ability in action.
Live Video Interaction
Another significant feature being introduced is live video interpretation, which lets Gemini analyze a smartphone’s camera feed and answer questions about what it sees. In one recent demonstration, a user asked Gemini for help choosing a paint color for pottery, highlighting the technology’s practical applications.
Availability
These features are currently rolling out to Gemini Advanced subscribers as part of the Google One AI Premium plan and are expected to be fully available to subscribers soon.
Conclusion
With these new features, Google continues to push its AI efforts forward, aiming to improve user engagement and accessibility through real-time assistance. The capabilities showcased in Gemini Live may pave the way for broader applications of AI in everyday scenarios.