Google Unveils FunctionGemma to Power Efficient On-Device AI Actions
FunctionGemma is built on the Gemma 3 270M architecture and fine-tuned specifically for function calling.
Google today announced FunctionGemma, a new specialised AI model designed to translate natural language into executable actions, marking a significant step in bringing smarter AI agents to edge devices like smartphones and embedded systems.
The launch reflects a broader shift from chat-only language models to agents capable of real-world tasks on-device.
We just release 2 new open-weight Gemma models. FunctionGemma and T5Gemma optimized for on-device agentic actions and multimodal applications.
— Philipp Schmid (@_philschmid) December 19, 2025
FunctionGemma
🤖 270M parameter built for on-device tool use.
📂 32K token context window.
📱 85% accuracy on mobile system call… pic.twitter.com/zPcbAQhpo8
Earlier this year, Google unveiled Gemma 3n, a model optimised to run smoothly on phones, laptops, and tablets, which is now available in preview.
FunctionGemma is built on the Gemma 3 270M architecture and fine-tuned specifically for function calling, enabling it to convert user prompts into structured API calls or commands that interact with software tools directly.
It combines conversational ability with the power to trigger actions, from simple commands like setting reminders to more complex workflows, without relying on cloud connectivity.
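In practice, a function-calling model emits a structured call (typically a function name plus JSON arguments) that the host app then dispatches to real code. The sketch below illustrates that dispatch step under stated assumptions: the tool name (`set_reminder`) and the JSON output format are hypothetical, not FunctionGemma's published schema.

```python
import json

# Hypothetical registry of on-device tools the app exposes to the model.
TOOLS = {}

def tool(fn):
    """Register a Python function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def set_reminder(text: str, time: str) -> str:
    return f"Reminder set: {text!r} at {time}"

def dispatch(model_output: str) -> str:
    """Parse a structured call emitted by the model and run the matching tool.

    Assumes the model returns JSON shaped like:
      {"name": "set_reminder", "arguments": {"text": "...", "time": "..."}}
    """
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output for the prompt "remind me to call mom at 6pm".
print(dispatch('{"name": "set_reminder", "arguments": {"text": "call mom", "time": "18:00"}}'))
```

The key point is that the model never executes anything itself; it only produces a structured request, and the application decides which registered function to run.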
Google says the model can serve as an independent, private agent for offline tasks or act as a “traffic controller,” handling routine instructions at the edge and routing heavier tasks to larger models such as Gemma 3 27B. This layered approach boosts efficiency while preserving privacy and speed.
"FunctionGemma acts as a fully independent agent for private, offline tasks, or as an intelligent traffic controller for larger connected systems. In this role, it can handle common commands instantly at the edge, while routing more complex tasks to models like Gemma 3 27B," Google said in a blog post.
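The "traffic controller" role described above amounts to a simple router: recognised, lightweight commands are handled by the small local model, and anything else is escalated to a larger connected model. A minimal sketch, assuming a toy keyword-based intent table (the real model would predict the intent itself):

```python
# Hypothetical keyword -> intent table for commands the small model handles locally.
LOCAL_INTENTS = {
    "timer": "set_timer",
    "wifi": "toggle_wifi",
    "reminder": "set_reminder",
}

def route(prompt: str) -> str:
    """Run simple commands at the edge; escalate everything else.

    Returns a tag showing where the request was handled.
    """
    for keyword, intent in LOCAL_INTENTS.items():
        if keyword in prompt.lower():
            return f"edge:{intent}"   # handled instantly, offline
    return "cloud:escalated"          # e.g. forwarded to a model like Gemma 3 27B

print(route("set a timer for 10 minutes"))     # stays on device
print(route("summarise this 40-page report"))  # escalated
```

The layering pays off because most everyday requests fall into the small, fast local set, so the larger model is only invoked when genuinely needed.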
Built to be highly customisable, FunctionGemma supports fine-tuning, allowing developers to tailor the model to domain-specific functions and workflows. Early evaluations show that fine-tuning significantly improves reliability, making it a viable foundation for production-ready, on-device agents.
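Fine-tuning for function calling typically means supervised training on prompt-to-structured-call pairs. The record format below is an illustrative assumption of what one JSONL dataset row could look like, not Google's published training schema:

```python
import json

def make_example(user_prompt: str, fn_name: str, arguments: dict) -> str:
    """Build one JSONL training row pairing a prompt with its target call."""
    return json.dumps({
        "prompt": user_prompt,
        "completion": {"name": fn_name, "arguments": arguments},
    })

# Hypothetical domain-specific example for a smart-home deployment.
row = make_example(
    "turn on do not disturb until 9am",
    "set_dnd",
    {"enabled": True, "until": "09:00"},
)
print(row)
```

Collecting a few hundred such rows for the app's own functions, then fine-tuning on them, is the usual route to the reliability gains the evaluations describe.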
FunctionGemma is designed to run on resource-constrained hardware like mobile phones and edge devices, and is supported across key developer tools and deployment platforms, including Hugging Face and Vertex AI, opening new possibilities for lightweight, privacy-focused AI applications.