Google launches AI Edge Gallery: local models now run directly on smartphones

Google has taken an important step toward democratizing AI: the AI Edge Gallery app lets local language models run directly on smartphones, without a constant server connection. Particularly notable is the Gemma 3n model series, which now runs fully on-device, offering greater autonomy, privacy, and response speed.

Among the innovations is Audio Scribe mode, which can transcribe and translate up to 30 seconds of audio directly on the phone. The models’ context window has been expanded to 4K tokens, allowing Gemma 3n to process more information in a single session — useful for complex conversations, documents, and long queries. A new Gemma 3n 4B version — the most powerful in the series — is also available, optimized for local operation on mobile hardware.
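A 4K-token context window means an on-device assistant must budget how much conversation history it feeds the model in each session. The sketch below illustrates that constraint with a crude whitespace-based token estimate; real deployments would use the model's own subword tokenizer, and the function name and turn format here are illustrative assumptions, not part of the AI Edge Gallery API.

```python
# Sketch: enforcing a fixed context-window budget, as a model with a
# 4K-token window (like Gemma 3n) requires. Whitespace splitting is a
# rough stand-in for the model's real subword tokenizer.

CONTEXT_WINDOW = 4096  # tokens

def fit_to_context(turns, budget=CONTEXT_WINDOW):
    """Keep the most recent conversation turns whose combined
    (approximate) token count fits within the budget."""
    kept = []
    used = 0
    for turn in reversed(turns):          # newest turns first
        n = len(turn.split())             # crude token estimate
        if used + n > budget:
            break                         # oldest turns get dropped
        kept.append(turn)
        used += n
    return list(reversed(kept))           # restore chronological order

# Example: 20 turns of ~302 "tokens" each; only the most recent
# turns that fit under the 4096-token budget are kept.
history = [f"turn {i}: " + "word " * 300 for i in range(20)]
trimmed = fit_to_context(history)
```

Dropping the oldest turns first is the simplest policy; a production assistant might instead summarize older history to preserve long-range context within the same budget.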

This update could change how we use AI assistants on the go, in meetings, and in offline scenarios: functions that once required the cloud now run directly on the device. Privacy is enhanced and network latency disappears, another step toward making AI models part of everyday experience rather than tools operating "somewhere out there in the cloud."
