Model Context Protocol: Game changer or just hype?

Ali Russell

CTO @ Tray.ai


There’s been a lot of noise in the past few months about the Model Context Protocol (MCP)—an emerging standard from Anthropic that aims to help AI copilots and assistants do more than just respond to prompts. MCP proposes a way for client applications powered by large language models (LLMs) to access real data and take real action across a user’s stack.

In this post, I’ll unpack what MCP is, where it shows promise, where it falls short—and how we’re thinking about it at Tray.

What is MCP?

Anthropic released the first draft of the MCP specification in December 2024, but it didn’t get much traction until March 2025. That’s when momentum picked up—leaders like Sam Altman and Sundar Pichai publicly expressed support, and suddenly, the AI ecosystem was paying attention.

At its core, MCP is a communication standard—an attempt to bring order to how LLM-based clients interact with external systems. Think of it like USB-C for AI assistants: a consistent, universal interface that makes it easier to connect to third-party tools and data.

The goal is straightforward: If you’re using an LLM-powered client like Claude Desktop or Cursor, MCP gives that client a standard way to reach outside of itself—to access data, trigger workflows, and interact with the rest of your systems.
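To make "a standard way to reach outside of itself" concrete: MCP messages are JSON-RPC 2.0, and a client invokes a server-exposed tool with a `tools/call` request. Here's a minimal sketch of what that looks like on the wire—the tool name and arguments are invented for illustration, not part of any real server:

```python
import json

# MCP is built on JSON-RPC 2.0. A client asking an MCP server to run a
# tool sends a "tools/call" request. The tool name ("search_tickets")
# and its arguments are hypothetical, purely for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",  # a tool the server has advertised
        "arguments": {"query": "refund", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```

The point of the standard is that every MCP client builds this same envelope, regardless of which server or underlying API sits on the other side.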

This isn’t a new idea. OpenAI launched ChatGPT plugins (now known as "actions") years ago. But MCP is the first real attempt to formalize that concept into an open, community-driven protocol.

What MCP gets right

  • A narrow, practical focus. MCP is refreshingly focused on a specific outcome: enabling LLMs to call tools and access data in a standardized way.

  • It builds on what works. Rather than replacing APIs, it abstracts over them—providing a lightweight wrapper that makes it easier for LLMs to understand how to interact.

  • Early traction. The pace of experimentation is high. With active community involvement and some high-profile backing, MCP is evolving quickly.
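The "abstracts over them" point is easiest to see in a tool definition. An MCP server advertises each tool with a name, a description, and a JSON Schema for its inputs (the `inputSchema` field, per the spec); an LLM client reads that schema to figure out how to call the underlying API without bespoke integration code. A sketch, with invented names:

```python
import json

# Sketch of how an MCP server might describe one tool in its
# "tools/list" response. The underlying API doesn't change; MCP wraps
# it in a machine-readable JSON Schema. All names here are invented.
tool_definition = {
    "name": "create_invoice",
    "description": "Create an invoice in the billing system.",
    "inputSchema": {  # standard JSON Schema, as the MCP spec requires
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 0},
        },
        "required": ["customer_id", "amount_cents"],
    },
}

print(json.dumps(tool_definition, indent=2))
```

This is the lightweight wrapper in practice: the billing API stays as-is, and the schema alone tells the model what a valid call looks like.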

A quick history lesson

In the early 2000s, Tim Berners-Lee envisioned a Semantic Web—a globally connected network of structured, machine-readable data. But getting everyone to agree on ontologies and RDF standards proved difficult. The web became optimized for people to read, not for machines to act on.

I was working in this space at the time, as part of the Web Science Research Initiative (WSRI), a program created to advance that vision. We were focused on making the web more structured and machine-readable, with the hope that machines could eventually take meaningful action on that data. But adoption challenges made that vision hard to realize at scale.

Instead, the API economy took hold. But APIs are messy. They’re flexible, inconsistent, and built with humans in mind. MCP narrows the focus. Instead of reinventing how data is structured, it standardizes how LLMs can discover and call external tools—without needing every API to change.

It’s a smart, pragmatic shift. And it’s why MCP is gaining attention.

Where MCP falls short today

Despite its promise, MCP isn’t ready for enterprise adoption. Here’s why:

1. Security and authentication are underdeveloped.

MCP wasn’t designed with remote, multi-user environments in mind. Authentication and access control were added later—and while the spec has recently introduced improvements with a new authorization standard, adoption is still early. Client implementations vary, and the spec itself is evolving quickly. For enterprises that need stability, governance, and trust, MCP is not yet a foundation to build on.

2. Governance and observability are lacking.

There’s no standard way to monitor activity, enforce access policies, or see who’s using what. For IT teams, this is a non-starter.

3. Server quality is inconsistent.

The open-source community is moving fast, but that speed comes with tradeoffs. Quality, trust, and security vary significantly between servers.

4. Remote execution was an afterthought.

MCP’s design draws on local-first protocols like the Language Server Protocol (LSP). That doesn’t scale to cloud-native, enterprise-wide deployments.

Tray.ai’s perspective

At Tray, we see the potential. Standards like MCP could help bring consistency to an increasingly fragmented space. But right now, MCP is not ready for enterprise use.

We’re watching it closely. We're exploring lightweight compatibility options and contributing to the conversation. But we’re not waiting around for a spec to mature before helping customers solve real problems.

Tray already supports many of the same capabilities. Our Connectivity API uses JSON schema to define inputs and tools. Our Merlin Agent Builder lets you design, govern, and monitor agent behavior—safely, at scale.

We’re also keeping a close eye on other efforts, like Google’s recently announced A2A (Agent-to-Agent) protocol. There’s real energy around creating interoperable agent standards. But just like with MCP, it’s too early to know which will stick.

If and when these protocols become enterprise-ready, we’ll be ready too.
