Meta Glasses Now Understand What You’re Looking At: New AI Mode Analyzes Visual Stimuli in Real Time

Meta is taking the next step in wearable AI interfaces: its smart glasses, built in partnership with Ray-Ban, will soon receive an update that lets them not just capture the surrounding scene, but interpret what the user is looking at, in real time and with contextual depth.
At the core of this feature is the integration of Meta AI’s large language model with on-device computer vision. The glasses will be able to recognize objects, text, buildings, artwork, people, and logos — instantly providing explanations, related facts, personalized tips, or broader context.
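Conceptually, the flow the article describes looks something like the minimal sketch below: an on-device vision step turns the camera frame into recognized labels and text, and a language-model step turns those into a contextual answer. This is an illustrative sketch only, not Meta's actual API; every name in it (Detection, run_vision_model, run_language_model, answer_for_gaze) is hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str   # e.g. "book cover", "landmark", "QR code"
    text: str    # any text read from the object; may be empty

def run_vision_model(frame: bytes) -> list[Detection]:
    # Stand-in for the on-device computer-vision step (object detection + OCR).
    return [Detection(label="book cover", text="The Pragmatic Programmer")]

def run_language_model(prompt: str) -> str:
    # Stand-in for the large-language-model step that produces the answer.
    return f"(model answer for prompt: {prompt!r})"

def answer_for_gaze(frame: bytes, question: Optional[str] = None) -> str:
    # 1. Turn the camera frame into structured detections.
    detections = run_vision_model(frame)
    # 2. Build a textual context from what the user is looking at.
    parts = [f"{d.label} ({d.text})" if d.text else d.label for d in detections]
    context = "; ".join(parts)
    # 3. Ask the language model for an explanation, fact, or suggestion.
    prompt = f"The user is looking at: {context}. " + (
        question or "Give a short, useful explanation."
    )
    return run_language_model(prompt)

if __name__ == "__main__":
    print(answer_for_gaze(frame=b"<camera frame bytes>"))

The sketch only shows the data flow the article describes, with on-device vision feeding a language model; how and where Meta actually runs each stage is not specified in the announcement.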

For example: look at a book cover and the glasses surface the author, a summary, and ratings. Glance at a landmark and they recount its history and architectural details. Focus on a QR code and you get a translation, a summary, or options to act on it directly. It is a personal guide that recognizes the world in real time.
The feature is currently in closed testing and is available to a limited number of users in the US. But according to the company, a global rollout is planned in the coming months.
This isn’t just “visual hints” — it’s a mode of cognitive augmentation where artificial intelligence becomes your visual memory, reference system, and guide.
Meta aims to turn its glasses into an invisible yet active interface layer that understands what you see — without distractions, screens, or hands. Just a glance — and an answer.
