Google's AI Model Gemma Can Now Run Locally on Your Phone

Alongside Gemma 3n, Google introduced MedGemma, designed for health-related text and image analysis


Google is expanding its family of open AI models, Gemma, with new releases announced at Google I/O 2025.

The company unveiled Gemma 3n, a model optimized to run smoothly on phones, laptops, and tablets, now available in preview.

Gemma 3n handles audio, text, images, and video, and can operate on devices with less than 2GB of RAM, offering efficient performance without relying on cloud computing.
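For developers curious what "running locally" looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The library choice and the model identifier "google/gemma-3n-E2B-it" are assumptions, not details from Google's announcement; the exact preview identifiers should be taken from Google's official Gemma pages.

```python
# Minimal sketch of local, on-device inference with a Gemma model.
# The model id below is an assumption; replace it with the identifier
# Google publishes for the Gemma 3n preview.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # assumed id, not confirmed by the article
    device_map="auto",               # uses a local GPU if present, else the CPU
)

output = generator(
    "Explain in two sentences why on-device AI models matter for privacy.",
    max_new_tokens=128,
)
print(output[0]["generated_text"])
```

Because the model weights are downloaded once and inference runs entirely on the local machine, no prompt data leaves the device, which is the main appeal of the on-device approach the article describes.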

Alongside Gemma 3n, Google introduced MedGemma, designed for health-related text and image analysis, available through its Health AI Developer Foundations programme.

MedGemma is intended to help developers build health applications by providing multimodal understanding of medical text and images.

Google also previewed SignGemma, a model focused on translating sign language—especially American Sign Language—into spoken-language text, enhancing accessibility for deaf and hard-of-hearing users.

While Gemma models have been downloaded millions of times, some developers remain cautious because the models are released under Google's custom licensing terms rather than a standard open-source license, which can complicate commercial use.