Google DeepMind Unveils AI that Understands Sign Language

SignGemma supports multiple sign languages but is optimised for American Sign Language (ASL) and English.


Google DeepMind unveiled SignGemma, its most advanced model for translating sign language into spoken text.

"We're thrilled to announce SignGemma, our most capable model for translating sign language into spoken text," Google said in a X post.

Set to join the Gemma family later this year, the model supports multiple sign languages but is optimised for American Sign Language (ASL) and English.

Other models in the Gemma family include DolphinGemma, which analyses dolphin vocalisations, and MedGemma, which is focused on medical AI.

These innovations aim to enhance accessibility, helping sign language users communicate more easily in education, work, and social settings.

At Google I/O 2025, the company unveiled Gemma 3n, a model optimised to run smoothly on phones, laptops, and tablets, now available in preview.

Gemma 3n supports audio, text, image, and video inputs and can run on devices with less than 2GB of RAM, delivering efficient performance without relying on cloud computing.
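For readers who want to experiment with an on-device Gemma model, the sketch below shows one plausible way to run local text generation with the Hugging Face transformers library. It is a minimal sketch only: the model identifier google/gemma-3n-E2B-it, and the assumption that the preview checkpoint works with the standard text-generation pipeline, are not details confirmed by this article.

```python
# Minimal sketch: running a small Gemma-family model locally with
# Hugging Face transformers. The model id below is an assumption;
# check huggingface.co for the actual Gemma 3n preview checkpoints,
# which may require accepting a licence before download.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # assumed id for the Gemma 3n preview
)

prompt = "Explain in one sentence what a sign language translation model does."
output = generator(prompt, max_new_tokens=64)
print(output[0]["generated_text"])
```

Because the model is designed for devices with limited memory, this kind of local pipeline is the intended usage pattern: inference runs entirely on the phone, laptop, or tablet rather than in the cloud.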