After Sarvam AI, Govt Selects 3 More Startups to Build Foundational AI Models
The government received 506 proposals following a nationwide call for proposals.

The Indian government has selected three more AI startups—Soket AI, Gnani.ai, and Gan.ai—to develop large-scale foundational models as part of its ambitious IndiaAI Mission.
This initiative aims to create sovereign AI systems rooted in India’s linguistic and sectoral landscape.
Soket AI will build a 120-billion-parameter model emphasising India’s linguistic diversity and targeting key sectors such as defence, healthcare, and education.
Gnani.ai, known for its work in voice technology, has been tasked with developing a 14-billion-parameter Voice AI model for real-time, multilingual speech processing with enhanced reasoning capabilities.
Gan.ai will focus on a 70-billion-parameter multilingual model tailored for high-quality text-to-speech generation, aiming to rival global benchmarks in generative voice tech.
“These foundational models will be key to democratising AI and making it accessible across India’s many languages and communities. We believe AI must speak the language of the people it serves,” Ganesh Gopalan, CEO of Gnani.ai, said.
This marks the second major round of selections after Sarvam AI was picked in April to build India’s first LLM. Notably, Sarvam AI’s model will not be open source.
Sarvam is expected to receive support equivalent to ₹220 crore, primarily in the form of free access to around 4,000 Nvidia H100 GPUs for six months—essential hardware for training advanced AI models.
This support is not a grant, however. A government body, most likely the Digital India Corporation (DIC), will take an equity stake in Sarvam AI in return.
“A govt body will take equity in Sarvam for the compute we receive,” Sarvam AI co-founder Pratyush Kumar said.
To support these projects, the central government is also scaling its AI compute infrastructure.
Union IT Minister Ashwini Vaishnaw recently announced the addition of 15,916 GPUs to the existing pool of 18,417, taking the total to 34,333.
The GPUs will be made available through a cloud platform for model training and inference.
Seven companies—including Yotta Data Services, Sify Digital Services, and Netmagic—have submitted bids to provide this GPU capacity across different compute categories.