New Relic Adds Support for Model Context Protocol to Enable True End-to-End Observability of AI Applications
MCP servers are critical for enabling AI agents to interact with various tools and services.

New Relic has announced groundbreaking support for the Model Context Protocol (MCP) within its comprehensive AI Monitoring solution, fully integrated with New Relic’s best-in-class Application Performance Monitoring (APM).
Developers building agents that use MCP, and the teams providing MCP services, can now access deep, actionable insights that let them quickly pinpoint and resolve issues with AI applications, reducing the need for manual effort and custom instrumentation and lowering operational costs.
“Since it was released last year, MCP has quickly become the standard protocol for agentic AI. Once again meeting our customers where and how they work, our new MCP integration is a game-changer for anyone building or operating AI systems that rely on this protocol.
“We’ve moved beyond siloed LLM monitoring to demystify MCP, connecting insights from AI interactions directly with the performance of the entire application stack for a holistic view. All this is offered as an integral part of our industry-leading APM technology,” said New Relic Chief Technology Officer Siva Padisetty.
MCP servers are critical for enabling AI agents to interact with various tools and services, but often operate as “black boxes.”
This leads to a lack of visibility into which tools are being used by AI agents and how they perform. It’s also difficult for MCP providers to understand tool usage, identify performance bottlenecks or pinpoint error sources within their service.
Gaining any insight often requires complex, manual custom instrumentation.
New Relic’s support for MCP solves these challenges, giving agent developers and MCP providers the following capabilities:
- Instant MCP tracing visibility: Automatically uncover usage patterns across the entire lifecycle of an MCP request, including invoked tools, call sequences, and execution durations, with clear waterfall diagrams.
- Proactive MCP optimization: Quickly analyze which tools agents select for specific prompts, evaluate tool choices and effectiveness, and track usage patterns, latency, errors, and performance to optimize MCP services and demonstrate value.
- Intelligent AI monitoring context: Seamlessly correlate MCP performance with the entire application ecosystem – including databases, microservices and queues – eliminating screen-swiveling between monitoring tools.
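To make the tracing capability above concrete, here is a minimal, hand-rolled sketch of the kind of data MCP tracing captures for each tool call: the invoked tool, its position in the call sequence, and its execution duration. The names used here (`McpTrace`, `trace_tool_call`, `ToolSpan`) are hypothetical illustrations, not New Relic's actual API; real instrumentation would come from New Relic's agents rather than code like this.

```python
# Illustrative sketch only: a toy tracer recording what MCP request tracing
# surfaces (tool name, call order, duration). All names are hypothetical
# and do NOT represent New Relic's instrumentation API.
import time
from contextlib import contextmanager
from dataclasses import dataclass, field


@dataclass
class ToolSpan:
    tool: str            # MCP tool the agent invoked
    sequence: int        # position in the call sequence
    duration_ms: float   # execution time in milliseconds


@dataclass
class McpTrace:
    spans: list = field(default_factory=list)

    @contextmanager
    def trace_tool_call(self, tool: str):
        # Time one tool invocation and append a span when it finishes,
        # even if the call raises an error.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.spans.append(ToolSpan(
                tool=tool,
                sequence=len(self.spans) + 1,
                duration_ms=(time.perf_counter() - start) * 1000,
            ))


trace = McpTrace()
with trace.trace_tool_call("search_docs"):
    pass  # the agent's first MCP tool call would run here
with trace.trace_tool_call("create_ticket"):
    pass  # the second tool call

for s in trace.spans:
    print(f"{s.sequence}. {s.tool}: {s.duration_ms:.1f} ms")
```

A real APM agent would additionally attach these spans to the surrounding distributed trace, which is what lets MCP performance be correlated with databases, microservices, and queues as described above.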