There’s been a lot of noise about the Model Context Protocol (MCP), an open standard designed to help AI agents and LLM-powered applications access data and tools. MCP gives client applications built on large language models (LLMs) a consistent way to access real data and take real action across a user’s stack.
In this post, I’ll unpack what MCP is, where it shows promise, where it falls short, and how we’re thinking about it at Tray.
What is the Model Context Protocol (MCP)?
Anthropic released the first draft of the MCP specification in November 2024, but it didn’t get much traction until March 2025. That’s when momentum picked up, and leaders like Sam Altman and Sundar Pichai publicly expressed support. Suddenly, the AI ecosystem was paying attention.
At its core, MCP is a communication standard, an attempt to bring order to how LLM-based clients interact with external systems. Think of it like USB-C for AI assistants: a consistent, universal interface that makes it easier to connect to third-party tools and data.
The goal is straightforward: If you’re using an LLM-powered client like Claude Desktop or Cursor, MCP gives that client a standard way to reach outside of itself to access data, trigger workflows, and interact with the rest of your systems.
This isn’t a new idea. OpenAI launched ChatGPT plugins (now known as "actions") years ago. But MCP is the first real attempt to formalize that concept into an open, community-driven protocol.
What the Model Context Protocol gets right
- A narrow, practical focus. MCP is refreshingly focused on a specific outcome: enabling LLMs to call tools and access data in a standardized way.
- It builds on what works. Rather than replacing APIs, it abstracts over them, providing a lightweight wrapper that makes it easier for LLMs to understand how to interact.
- Early traction. The pace of experimentation is high. With active community involvement and some high-profile backing, MCP is evolving quickly.
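The "abstracts over them" point is the key design move: an MCP-style server doesn't replace the underlying API, it describes it, pairing a JSON Schema definition the LLM can read with a handler that calls the existing endpoint. A sketch under that assumption, with an invented `create_ticket` tool (no real helpdesk API is called here):

```python
# A tool entry pairs a JSON Schema description (what the LLM sees)
# with a handler that wraps the existing API call (what actually runs).
def create_ticket_handler(args: dict) -> dict:
    # In a real server this would call the existing helpdesk REST API;
    # here it just echoes, since the endpoint is hypothetical.
    return {"status": "created", "subject": args["subject"]}

TOOLS = {
    "create_ticket": {
        "description": "Create a support ticket in the helpdesk.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "subject": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "high"]},
            },
            "required": ["subject"],
        },
        "handler": create_ticket_handler,
    },
}

def list_tools() -> list[dict]:
    """The kind of payload a tools/list response carries: names and schemas only."""
    return [{"name": name, "description": spec["description"],
             "inputSchema": spec["inputSchema"]} for name, spec in TOOLS.items()]
```

The underlying API never changes; the server just publishes a machine-readable description of it.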
A quick history lesson
In the early 2000s, Tim Berners-Lee envisioned a Semantic Web—a globally connected network of structured, machine-readable data. But getting everyone to agree on ontologies and RDF standards proved difficult. The web became optimized for people to read, not for machines to act on.
I was working in this space at the time, as part of the Web Science Research Initiative (WSRI), a program created to advance that vision. We were focused on making the web more structured and machine-readable, with the hope that machines could eventually take meaningful action on that data. But adoption challenges made that vision hard to realize at scale.
Instead, the API economy took hold. But APIs are messy. They’re flexible, inconsistent, and built with humans in mind. MCP narrows the focus. Instead of reinventing how data is structured, it standardizes how LLMs can discover and call external tools—without needing every API to change.
It’s a smart, pragmatic shift. And it’s why MCP is gaining attention.
Where the Model Context Protocol falls short today
While the Model Context Protocol (MCP) has gained significant momentum across the AI ecosystem, many enterprises are still evaluating how to adopt it safely at scale. Here’s why:
1. Security and authentication are underdeveloped.
MCP wasn’t designed with remote, multi-user environments in mind. Authentication and access control were added later, and while the spec has recently introduced improvements with a new authorization standard, adoption is still early. Client implementations vary, and the spec itself is evolving quickly. For enterprises that need stability, governance, and trust, MCP is not yet a foundation to build on.
2. Governance and observability are lacking.
There’s no standard way to monitor activity, enforce access policies, or see who’s using what. For IT teams, this is a non-starter.
3. Server quality is inconsistent.
The MCP ecosystem is evolving rapidly, but quality, security, and operational maturity still vary significantly across servers.
4. Remote execution was an afterthought.
MCP’s early design was heavily influenced by local-first patterns like the Language Server Protocol (LSP), though newer implementations are increasingly built for remote and cloud environments.
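To make the first of these gaps concrete: a remote MCP deployment needs some gate in front of every tool call. The sketch below is a bare-bones bearer-token check against an in-memory token store (all names and scopes are invented for illustration); the spec's newer OAuth-based authorization work standardizes what this hand-rolls:

```python
# Hypothetical token store; a real deployment would verify tokens
# against an identity provider, not a dict.
VALID_TOKENS = {
    "secret-123": {"user": "ada", "scopes": {"tools:read", "tools:call"}},
}

def authorize(headers: dict, required_scope: str) -> dict:
    """Reject a request unless it carries a known bearer token with the right scope."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    session = VALID_TOKENS.get(auth.removeprefix("Bearer "))
    if session is None or required_scope not in session["scopes"]:
        raise PermissionError("invalid token or insufficient scope")
    return session

session = authorize({"Authorization": "Bearer secret-123"}, "tools:call")
```

Local, single-user servers never needed this, which is exactly why it arrived late in the spec.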
Tray and MCP
At Tray, we’ve expanded our support for MCP-style architectures through Agent Gateway, which provides centralized control, governance, and observability for agents interacting with MCP services.
Agent Gateway acts as a secure orchestration layer, helping enterprises safely expose tools and services to agents while maintaining authentication, monitoring, and policy enforcement.
Tray already supports many MCP-style capabilities. Our Connectivity API uses JSON Schema to define inputs and tools, while Merlin Agent Builder enables teams to design and govern agents. With Agent Gateway, organizations can also manage agent access to tools and MCP-style services through a centralized control layer.
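Because tool inputs are defined in JSON Schema, a gateway layer can validate an agent's arguments before anything executes. Below is a hand-rolled checker for illustration only (a production system would use a full JSON Schema validator, and the schema and fields here are invented):

```python
def check_args(schema: dict, args: dict) -> list[str]:
    """Minimal JSON-Schema-style check: required keys present, primitive types match."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required field: {key}")
    py_types = {"string": str, "number": (int, float), "object": dict}
    for key, value in args.items():
        expected = schema.get("properties", {}).get(key, {}).get("type")
        if expected in py_types and not isinstance(value, py_types[expected]):
            errors.append(f"{key}: expected {expected}")
    return errors

# An invented tool-input schema, in the shape MCP tools advertise.
contact_schema = {
    "type": "object",
    "properties": {"email": {"type": "string"}},
    "required": ["email"],
}
```

Rejecting malformed calls at the gateway, before they reach a downstream system, is one of the simplest wins of a centralized control layer.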
Over the past year, MCP has gained significant adoption across AI tooling ecosystems, and many vendors are now experimenting with MCP-compatible services and servers.
FAQ
What is the Model Context Protocol (MCP)?
A communication protocol that allows AI agents and LLM-based applications to discover and interact with external tools and services.
Is MCP ready for enterprise use?
Many enterprises are experimenting with MCP, but most still rely on orchestration layers and governance controls for production deployments.