At TED 2025, Google took the stage with a demonstration that felt like it was pulled straight from a sci-fi film. In a presentation led by Shahram Izadi, Google’s Head of Augmented and Extended Reality, the company unveiled a pair of futuristic Android XR smart glasses with a groundbreaking feature: AI-powered memory recall. The glasses don’t just see the world — they remember it.
This isn’t your average wearable tech. These glasses promise to bring a new level of intelligence and practicality to everyday life, from helping users find lost items to potentially transforming the way we interact with our environments.
The demonstration’s highlight was simple yet astonishing. Google product manager Nishtha Bhatia asked Gemini — Google’s advanced AI assistant — where she had left her hotel room key card. The glasses, powered by Android XR and integrated with Gemini, responded instantly: “The hotel key card is to the left of the music record,” referring to a specific spot on the shelf behind her.
This feature essentially gives users a photographic memory: not their own, but the AI's. By continuously processing visual input from the glasses' cameras, the assistant remembers where objects were placed and can answer questions about them in real time.
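Google hasn't said how this recall works under the hood, but conceptually it behaves like a rolling index of each object's most recent sighting. The Python sketch below is a toy illustration of that idea only; the names (`Sighting`, `ObjectMemory`) and the plain-text location descriptions are assumptions, not anything Google has published.

```python
from dataclasses import dataclass
from typing import Optional

# Toy illustration of "last seen" object recall. A real system would derive
# sightings from a vision model running over the camera feed; here they are
# hand-fed. All names are hypothetical, not Google's implementation.

@dataclass
class Sighting:
    label: str        # e.g. "hotel key card"
    timestamp: float  # seconds since the session started
    location: str     # natural-language description of where it was seen

class ObjectMemory:
    def __init__(self) -> None:
        self._last_seen: dict[str, Sighting] = {}

    def observe(self, sighting: Sighting) -> None:
        # Keep only the most recent sighting per object label.
        self._last_seen[sighting.label] = sighting

    def recall(self, label: str) -> Optional[str]:
        sighting = self._last_seen.get(label)
        if sighting is None:
            return None
        return f"The {sighting.label} is {sighting.location}."

memory = ObjectMemory()
memory.observe(Sighting("hotel key card", 12.5, "to the left of the music record"))
print(memory.recall("hotel key card"))
# -> The hotel key card is to the left of the music record.
```

A production system would also have to decide how long such memories persist and how to handle objects that move out of frame, questions Google hasn't publicly answered.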
What Is Android XR?
Android XR is Google’s new platform designed to power eXtended Reality devices — a broad category that includes both Augmented Reality (AR) and Virtual Reality (VR). Much like how Android Auto powers in-car displays and Wear OS supports smartwatches, Android XR is meant to unify Google’s efforts across the XR space.
These smart glasses, which could be part of the rumored Samsung HAEAN collaboration, represent the first real consumer-facing Android XR device. While the product name and release date are still under wraps, the live demo at TED 2025 made it clear that development is well underway.
Lightweight Hardware, Powerful Software
One of the most impressive aspects of the smart glasses is how they balance functionality with form. Izadi emphasized that the glasses are designed to be lightweight, with much of the processing offloaded to a paired smartphone. This approach ensures longer battery life while still delivering responsive and intelligent features.
“These glasses work with your phone, streaming back and forth, allowing the glasses to be very lightweight and access all of your phone apps,” Izadi explained. This connectivity is essential for the seamless experience Google is aiming to create — one that blends real-time AI processing with natural, everyday use.
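Google hasn't detailed the glasses-to-phone protocol, but the division of labor Izadi describes is a familiar split-compute pattern: the lightweight wearable captures frames and renders results, while the paired phone runs the heavy AI inference. The Python sketch below is purely illustrative, with two threads standing in for the two devices; every name in it is hypothetical.

```python
import queue
import threading

# Illustrative split-compute loop: the "glasses" stream frames to the "phone",
# which does the expensive processing and streams results back for display.
# This is a stand-in for an unpublished protocol, not Google's actual design.

frames_to_phone = queue.Queue()      # glasses -> phone
results_to_glasses = queue.Queue()   # phone -> glasses

def phone_side() -> None:
    # The phone: receive each frame, run "inference", send the result back.
    while True:
        frame = frames_to_phone.get()
        if frame is None:  # sentinel: the glasses have stopped streaming
            break
        results_to_glasses.put(f"objects recognized in {frame}")

def glasses_side(num_frames: int) -> None:
    # The glasses: stream frames out, then render whatever comes back in-lens.
    for i in range(num_frames):
        frames_to_phone.put(f"frame-{i}")
    for _ in range(num_frames):
        print("display:", results_to_glasses.get())
    frames_to_phone.put(None)  # shut the phone worker down

phone = threading.Thread(target=phone_side)
phone.start()
glasses_side(3)
phone.join()
```

The trade-off is the one Izadi highlights: keeping inference off the glasses keeps them light and stretches battery life, at the cost of depending on a fast, reliable link to the phone.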
More Than Just Memory: Live Translation and Display
While the memory feature stole the show, the glasses have other tricks up their sleeve. During the presentation, Google briefly demonstrated a live translation feature, with translated words displayed directly on one of the lenses. This confirms that the glasses include at least one built-in display, making real-time, contextual information readily visible without needing to pull out a phone.
Google has reportedly been developing multiple smart glasses prototypes: some with a single display and others with a display in each lens. The version shown at TED 2025 appears to be the former, offering a sleek and unobtrusive design.
The AI smarts behind the glasses are powered by Project Astra — Google’s ambitious AI vision system first introduced during Google I/O 2024. At that event, Astra was demoed on a phone, showing off its ability to identify objects, understand spatial context, and recall information — like where items were last seen.
The memory functionality in the new glasses appears to be a direct evolution of Astra, now working in tandem with wearable hardware. With Astra and Gemini working together, Google is building a powerful assistant capable of understanding and remembering your world in a way no device has before.
The possibilities are vast. For the average user, these glasses could mean never losing their keys again. But the real magic lies in what they could do for people with memory challenges. “I think of my own grandparents who suffered from dementia,” one attendee shared. “If something like this had existed earlier, it could have improved their quality of life dramatically.”
By tracking and recalling spatial information, these glasses could act as a supportive tool for elderly users, patients with cognitive decline, or anyone managing a busy, chaotic lifestyle.
Google’s announcement comes amid growing competition in the smart glasses space. Meta is expected to unveil a next-gen Ray-Ban-branded pair this year with a single display and a wristband for phoneless interaction. In contrast, Google appears to be leaning fully into voice and AI interaction via Gemini, bypassing the need for additional hardware like wristbands.
While Meta is exploring gesture-based input and standalone functionality, Google’s strategy is to make the glasses as light and sleek as possible by leveraging the power of the smartphone.
Google hasn’t confirmed whether the glasses shown at TED 2025 will be available this year or if they’re simply a preview of what’s to come. Still, with Project Astra expected to roll out more broadly soon and Android XR now officially part of Google’s ecosystem, it’s clear that smart glasses are no longer a far-off dream.
If anything, TED 2025 marked the moment when AR glasses stopped being a niche tech curiosity and became a compelling, practical tool with AI-granted superpowers.