JFrog Unveils MCP Server to Bring AI-Native DevOps to the Enterprise
The JFrog MCP Server supports seamless communication between large language models (LLMs) and software metadata.

JFrog has introduced its new MCP Server, a foundational layer designed to bring AI-native capabilities to DevOps workflows by allowing AI agents to interact natively with software supply chain data.
Built using the Model Context Protocol (MCP), the server bridges the gap between AI systems and development environments, enabling automated, intelligent decision-making throughout the software lifecycle.
“AI agents can now interact with your software supply chain data as easily as humans do. This changes everything about how you build, test, secure, and deliver software,” JFrog said in its official blog post.
The JFrog MCP Server supports seamless communication between large language models (LLMs) and software metadata by exposing software bills of materials (SBOMs), build information, vulnerability data, and more in a machine-readable format. This empowers AI agents to query environments, recommend remediations, and even generate pull requests, all in real time.
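For context on what "machine-readable" means here: MCP messages follow JSON-RPC 2.0, and an agent invokes a server-exposed capability with a `tools/call` request. As a rough, hypothetical sketch (the tool name `get_vulnerabilities` and its arguments are illustrative assumptions, not JFrog's published API), such a query might look like:

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message.

    MCP messages follow JSON-RPC 2.0; 'tools/call' is the protocol's
    standard method for invoking a tool exposed by a server.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_name,       # hypothetical tool name
            "arguments": arguments,  # tool-specific inputs
        },
    })

# Hypothetical example: ask an MCP server for vulnerability data on a
# specific artifact (tool name and argument keys are illustrative).
request = build_tool_call(
    request_id=1,
    tool_name="get_vulnerabilities",
    arguments={"artifact": "libs-release/app/1.2.3/app-1.2.3.jar"},
)
```

The server's reply (also JSON-RPC) would carry the structured metadata, which is what lets an agent reason over SBOM or vulnerability data without screen-scraping a UI.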
The server offers a secure, isolated schema, ensuring AI agents operate within strict access controls. JFrog says the MCP Server integrates easily with existing JFrog platforms and works with both private and public LLMs.
“Until recently, connecting AI agents to diverse enterprise systems created development bottlenecks, with each integration requiring custom code and ongoing maintenance, which was costly and time-consuming for internal teams to manage,” Yoav Landman, founder and CTO at JFrog, said.