Operant AI Launches AI Infrastructure Security Programme Amid India’s AI Boom
The programme is open to infrastructure providers and AI platform vendors, offering integrated security capabilities as enterprises increasingly demand safe, scalable AI deployment.
San Francisco-based Operant AI has announced the launch of its AI Infrastructure Ecosystem Partnership Programme, aimed at embedding real-time security directly into AI inference systems as enterprises scale adoption of autonomous and agent-driven technologies.
Operant AI’s platform focuses on securing the inference layer—the stage where AI models process inputs and generate outputs. Its solutions, including GPU-accelerated AI Gatekeeper and MCP Gateway, provide real-time monitoring and threat detection without slowing performance. These tools are designed to counter emerging risks such as prompt injection, zero-day exploits and vulnerabilities in agent-based systems.
“As Indian enterprises deploy AI models and agents across financial services, healthcare, and public sector environments, the inference layer is where security must be enforced. Our GPU-accelerated AI Gatekeeper and MCP Gateway, combined with the GPU Ecosystem Programme, deliver the speed and protection that India’s production AI systems demand.
“As autonomous agents become more sophisticated and models take on increasingly critical roles, securing the inference layer is no longer optional. It is the foundation on which safe, agentic systems must be built. We're transforming models and agents from vulnerable systems into trustworthy, production-ready intelligence that organisations can deploy with confidence. Our goal is to ensure that the AI momentum is not only powerful, but also secure,” said Vrajesh Bhavsar, Co-founder and CEO of Operant AI.
The programme is open to infrastructure providers and AI platform vendors, offering integrated security capabilities as enterprises increasingly demand safe, scalable AI deployment.
The initiative also includes partnerships with infrastructure players such as Tenstorrent, integrating high-performance inference with real-time monitoring.
“As AI moves to always-on agents, the bar for infrastructure gets higher: it has to be performant and open by design. Partnering with Operant AI lets Tenstorrent customers pair our high-throughput, Tensix-based inference platforms with real-time visibility of agents at the inference runtime layer, enabling them to scale their AI initiatives with confidence,” said Aniket Saha, VP of Product Strategy at Tenstorrent.