Google DeepMind Unveils AI that Understands Sign Language
SignGemma supports multiple sign languages but is optimised for American Sign Language (ASL) and English.

Google DeepMind unveiled SignGemma, its most advanced model for translating sign language into spoken text.
"We're thrilled to announce SignGemma, our most capable model for translating sign language into spoken text," Google said in a X post.
Set to join the Gemma family later this year, SignGemma supports multiple sign languages but is optimised for American Sign Language (ASL) and English.
Other models in the family include DolphinGemma, which analyses dolphin vocalisations, and MedGemma, which focuses on medical AI.
These innovations aim to enhance accessibility, helping sign language users communicate more easily in education, work, and social settings.
At Google I/O 2025, the company unveiled Gemma 3n, a model optimised to run smoothly on phones, laptops, and tablets, now available in preview.
Gemma 3n supports audio, text, image, and video inputs and can operate on devices with less than 2GB of RAM, offering efficient performance without relying on cloud computing.
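For readers who want to experiment, below is a minimal sketch of loading a Gemma-family checkpoint with the Hugging Face transformers pipeline; the model ID, prompt, and generation settings are assumptions for illustration, not an official SignGemma or Gemma 3n recipe, since those checkpoints may ship under different names.

```python
# Minimal sketch: running a Gemma-family model locally via Hugging Face transformers.
# The model ID below is an assumed, publicly available Gemma checkpoint; swap in the
# SignGemma or Gemma 3n ID once those models are released.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2b-it",  # assumed checkpoint, chosen for small on-device footprint
    device_map="auto",           # place the model on GPU or CPU, whichever is available
)

output = generator(
    "Explain how sign language translation models can improve accessibility.",
    max_new_tokens=64,
)
print(output[0]["generated_text"])
```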