IBM Launches Granite 4.0 Hybrid AI Models
Granite 4.0 is the first open model family to earn ISO 42001 certification for AI governance, privacy, and accountability.

IBM today unveiled Granite 4.0, its next-generation suite of hybrid-architecture language models built for enterprise-scale AI. The models combine transformer and Mamba-style layers to sharply reduce memory use and cost while maintaining strong performance, even on extended contexts.
Granite 4.0 is released under the open Apache 2.0 license and is the first open model family to earn ISO 42001 certification for AI governance, privacy, and accountability.
Availability spans IBM watsonx.ai, Dell, Docker, Hugging Face, and other AI platforms, with support planned for Amazon SageMaker and Azure AI Foundry.
The Granite 4.0 lineup includes models of different sizes and architectures:
- Granite-4.0-H-Small (32B total parameters, 9B active) for heavy agentic tasks
- Granite-4.0-H-Tiny (7B total, 1B active) and H-Micro (3B) aimed at low-latency and edge use
- A dense Granite-4.0-Micro variant for environments that do not yet support hybrid architectures
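For teams pulling the checkpoints from Hugging Face, the smaller variants can be loaded with the standard transformers API. The sketch below is illustrative rather than IBM's documented quick start: the model ID is an assumption (check the ibm-granite organization on Hugging Face for exact names), and the hybrid variants may require a recent transformers release.

```python
# Minimal sketch: loading a small Granite 4.0 checkpoint from Hugging Face.
# The model ID is assumed for illustration; the dense Micro variant is used here
# because it does not depend on hybrid-architecture support in older runtimes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-micro"  # assumed ID, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt and generate a short completion.
messages = [
    {"role": "user", "content": "Summarize the benefits of hybrid Mamba/transformer models."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```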
Benchmarks show that even the smaller Granite 4.0 models outperform Granite 3.3 8B while requiring over 70% less RAM for long-context workloads and concurrent sessions. This efficiency lets organizations run high-performance AI on more modest hardware, lowering the barrier to deployment.
IBM emphasized security and trust: all model checkpoints are cryptographically signed, and a new bug bounty program with HackerOne supports ongoing vulnerability discovery. The company also offers uncapped indemnity against third-party IP claims for clients using Granite models on watsonx.ai.
With its hybrid architecture, open licensing, and enterprise safeguards, Granite 4.0 positions IBM as a provider of efficient, transparent foundation models for real-world business applications.