Sitemap

Everything That Is Wrong with Model Context Protocol

15 min readJun 28, 2025

Anthropic’s Model Context Protocol (MCP) started out as a beautifully simple on-machine solution. Early on, you could just run an MCP server locally and connect it to an AI assistant via STDIO (standard input/output), with no networking or auth hassles — it “just worked” for a single developer environment. However, this honeymoon didn’t last. As teams tried to share tools and deploy AI assistants at scale, the limitations of a local STDIO setup became painfully obvious. You can’t realistically share a local process (and its sensitive credentials) across an organization, and you certainly can’t rely on STDIO when your AI agents and tools live in different cloud containers. MCP had to grow up fast and move beyond the desktop. To enable remote use, MCP adopted Server-Sent Events (SSE) and an HTTP-based transport. This shift opened the door for cloud-hosted MCP servers and multiple concurrent connections. But it also introduced new complexity. SSE is a one-way channel (server to client), so MCP ended up needing a persistent open connection for streaming responses and separate HTTP calls for the client’s requests — essentially juggling two connections for a “bi-directional” convo. Anthropic’s newer streamable HTTP update partially addresses this by keeping a single web connection open for real-time two-way data flow, yet the underlying issue remains: MCP’s design moved from a stateless, on-demand call model to a stateful session model that’s more cumbersome in distributed environments. Maintaining these open pipes is a pain. It conflicts with the stateless nature of typical web APIs and microservices, forcing developers to manage session state and socket lifecycles manually. In practice, MCP’s stateful protocol can wreak havoc on scaling and load balancing — persistent connections gobble up server resources and can make the whole system more brittle and less resilient. In short, what started as a plug-and-play local solution turned into a cloud architecture headache, where the simple act of keeping an AI connected to a tool isn’t so simple anymore.

Context Window Bloat and Performance Hits

Another gnarly issue with MCP is how it blows up an LLM’s context window. Every tool you hook up via MCP isn’t just “out there” waiting to be called — it’s actually described in the model’s prompt context so the AI knows what it can do. Load up a bunch of MCP servers and you load up your poor model’s short-term memory with tons of tool definitions and metadata. The result? Bloated context that can choke your model’s performance. Multiple active MCP connections “could consume significant tokens in the LLM’s context window”, slowing down responses and making it harder for the model to focus on what really matters. Essentially, the model is busy remembering the API schema of your CRM and file system and Slack and whatever else, leaving fewer brain cells (tokens) for the actual user query. If you go overboard, you can even hit the context size limit and start truncating information, meaning some data or instructions get clipped out entirely. That’s a recipe for the AI to start forgetting or mixing up details. Overloading the context doesn’t just impact speed and capacity — it can mess with the AI’s decision-making. When there’s a glut of tools and data in the prompt, the model can get confused about which tool to use or what info is relevant, leading to erratic behavior. Researchers have noted that with too many MCP-provided options, the AI might under-select (ignore a tool that it should have used) or over-select (invoke tools that aren’t actually helpful) because it’s essentially lost in an overstocked toolbox (source). One suggested workaround has been to dynamically limit or filter the available tools (even using retrieval techniques to pick relevant tools on the fly), rather than dumping every possible tool into the context. The bottom line: MCP’s “more is more” approach to context can backfire — more tools and data can actually make your supposedly smart assistant feel dumber and slower when it’s overwhelmed with extraneous info.

Security and Exploit Risks

For all its promises of connecting AI to powerful tools, MCP also opens a Pandora’s box of security concerns. By design, MCP gives your LLM the keys to external systems — it can execute functions, retrieve data, even modify things on your behalf. That’s powerful and dangerous. The community quickly identified a slew of potential exploits leveraging this capability. Top of the list is the classic prompt injection attack: malicious instructions hidden in tool schema description trick the LLM into executing unintended actions via MCP. For example, an attacker could embed a sneaky command in a document that the AI is asked to summarize; if the AI isn’t careful, it might obediently call an MCP file-system tool to exfiltrate data or modify files based on that hidden prompt. And it doesn’t stop at prompt text — the tools themselves can be attacked. There’s been talk of tool poisoning, where someone tampers with an MCP server’s advertised functions (especially in community-contributed servers) to include nasty surprises (source). Or the even more insidious tool shadowing: a bad actor spins up a fake MCP server with the same tool name as a popular one, hoping your AI calls the malicious twin instead of the real deal. In an open ecosystem of plugins and servers, these kinds of name-squatting attacks are a real concern. All these vectors mean a compromised MCP setup could allow anything from data leaks to remote code execution under the guise of an AI agent trying to be helpful. Perhaps the scariest part is that MCP has very little in the way of built-in security or permissions management to mitigate these threats. The protocol itself doesn’t enforce authentication, authorization, or sandboxing; it basically trusts whatever tool endpoints you give it. Early versions of MCP didn’t even define a clear auth mechanism, leaving it up to developers to bolt on their own security. This absence of intrinsic security means if you’re not extremely careful, an MCP server running with high privileges could be a juicy target. A breached MCP server could escalate into a full-on data breach or system compromise, since the AI will dutifully execute whatever that server says it can do. Observers have bluntly noted that MCP “lacks inherent security enforcement mechanisms”, instead relying on external measures that were “not initially well-defined” (source). In other words, out-of-the-box MCP is not secure — it’s powerful, but you’re on your own to lock it down. The situation is made worse by the fact that MCP is still very new and evolving. Many implementations and community servers are essentially beta-quality, and “many of the safety mechanisms users might expect simply aren’t there yet.” (source) For instance, proper validation of tool permissions is still a work in progress in many clients. It’s disturbingly common for an AI interface to ask the user once for permission to use a tool and then never ask again, even if later the tool is invoked in a more dangerous way. That lack of consistent, granular permission checking can lead to privilege escalation without the user realizing it. Attackers have demonstrated creative exploits chaining MCP tools together: one example combined a cloud storage tool and a web request tool to leak files off a system, all via the AI agent following what it thought were legitimate instructions. And because there’s little standardization in how tools request approval or how an AI should warn users, it’s easy for malicious actions to slip under the radar. The upshot is that using MCP in production today demands a lot of caution and probably some additional guardrails. The protocol might be advertised as “secure, two-way connections” by Anthropic’s marketing, but in practice MCP is far from plug-and-play secure. Without bolting on your own authentication, user approval flows, and monitoring, you could be introducing significant vulnerabilities into your application.

Identity and Permission Headaches

MCP’s free-for-all connectivity doesn’t just raise red flags for external attackers — it also creates confusion around intent and identity in normal use. One awkward question is “Who is actually doing an action when an AI agent uses a tool?” Is it the end-user operating the AI? Is it the AI itself as an autonomous entity? Or some system account on behalf of the application? MCP doesn’t clearly define this, and that ambiguity becomes a nightmare for auditing and access control. For example, if an AI via MCP deletes a bunch of files or sends an email, how do we log that action usefully? Right now it’s blurry. Experts have pointed out that determining whether requests originate from the end user, the AI agent, or a shared service account is not standardized, making it hard to attribute actions and enforce accountability. In enterprise scenarios, this is a huge issue: you need to know who to blame or alert if something goes wrong. Without clear identity tagging in MCP’s design, tracing an action back to a responsible party can be like chasing a ghost. This also complicates permissioning — e.g., should an AI be allowed to do everything its user can, or are there separate limits? MCP doesn’t offer answers; it leaves it to developers to implement their own identity and permissions model on top, which many have yet to do. Even on the user-facing side, permission management in MCP is currently a UX minefield. Different AI clients handle it differently, and few do it well. Some implementations (like early Claude Desktop builds) would pop up a permission dialog the first time a tool was used, and then never bother the user again for that tool. That sounds convenient — until you realize a crafty attacker could abuse that trust. If the AI asks for access to, say, your “Documents” folder for a benign reason and you approve, it might never ask again, even if later a malicious prompt tries to use that access for something nasty. In one analysis, researchers warned that an attacker could trick a user into granting a harmless request and then follow up with hidden malicious requests that piggyback on the same permission, “leaving the user oblivious to the attack.” (source) This “ask once, use forever” approach is a glaring security risk. On the flip side, some systems that do ask permission every time can lead to “prompt fatigue,” where the user gets so many pop-ups they just mash “Allow All” to get it over with. Neither scenario is great. The need for a balanced, clear permission flow is evident, but MCP’s current spec doesn’t mandate how to handle it — it’s wild west territory for now. Anthropic even acknowledges some actions might be so sensitive that a human should stay in the loop: MCP has a feature where a server can ask the AI to generate a text (a “sampling request”), and Anthropic’s docs suggest such requests should require human approval before the AI proceeds. That hints at the gravity of the trust issues here — even the protocol’s creators are basically saying “um, you might not want to let the AI and tools do everything unchecked.” All told, until MCP matures, developers and users are stuck with a tricky balancing act: how to empower AI with tools without handing it the crown jewels unguarded.

Incomplete Standards and Error Handling Gaps

Another sore point with MCP is that it still feels half-baked as a standard — there are gaps and rough edges in the spec that cause interoperability and reliability issues. A prominent example is error handling (or the lack thereof). Yes, MCP defines some basic error codes for failed tool calls, but it “does not yet enforce a comprehensive error-handling standard”, nor does it cover important aspects like tool versioning or lifecycle management. This means if something goes wrong in a tool invocation, each MCP server might handle (or report) it differently: one might throw a generic error, another might hang, another might produce a verbose stack trace in the AI’s output. From the AI agent’s perspective, it’s hard to reliably know what happened or how to proceed after a failure. The lack of a defined lifecycle also means there’s no official concept of tool upgrades, deprecations, or governance in MCP. If a tool’s API changes or it needs user re-consent, MCP doesn’t have a built-in way to signal that. Essentially, the protocol covers discovery and invocation of tools, but not much beyond. This incompleteness can lead to inconsistent implementations and brittle integrations. An AI agent using tools from multiple vendors might encounter different conventions or edge-case behaviors for each one, because MCP leaves a lot of behavior unspecified. That’s not a recipe for confidence when you’re building critical workflows on top of this. Until the standard fills in these blanks (or the ecosystem converges on best practices), developers have to anticipate and handle a lot on their own. It’s a bit like the early days of web browsers — write your code to the “standard,” then spend time testing each browser (or in this case, each MCP server) to see how it really behaves. Trust and reliability suffer when everyone’s effectively winging it beyond the happy path.

Adoption and Ecosystem Growing Pains

MCP might be all the rage on tech Twitter and among AI startups, but let’s inject some reality: the ecosystem is still immature, and that creates practical limitations. Despite the hype and a flurry of GitHub repos, MCP has yet to achieve broad industry support. Many widely used software tools and SaaS platforms still don’t have official MCP servers available. So unless you build it yourself, your AI agent might not be able to talk to, say, your specific CRM or internal knowledge base via MCP at all. Companies like OpenAI, Microsoft, and Google have announced support, and there are dozens of community-made servers for popular apps, but there are thousands of apps out there in the real world. In mid-2025, we’re far from full coverage. This means early adopters often face the grind of writing custom MCP servers or waiting for someone else to do it. It’s the classic chicken-and-egg of a new standard: the protocol is only as useful as the integrations available. Right now, MCP’s utility can be hit-or-miss depending on the tools you care about. On top of that, developer experience with MCP has some rough spots. Documentation for the protocol and its implementations is improving but still lacking in places – you might run into “TBD” sections or sparse examples, given how fast things are moving. The community, while enthusiastic, is relatively small and nascent. There aren’t years of Stack Overflow answers or blog troubleshooting posts to fall back on when you hit a weird bug. This can make building with MCP feel like navigating uncharted territory. It’s exciting but also time-consuming. Engineers used to mature ecosystems (like REST/GraphQL APIs with robust docs, SDKs, and forums) can find the MCP world a bit wild. Moreover, integrating MCP often isn’t just drag-and-drop; it can require significant system changes and a learning curve. You have to think about running these MCP servers (potentially as separate processes or services), securing them, updating your AI agent code to handle streaming responses, etc. It’s a commitment. All these factors mean the barrier to entry for MCP is non-trivial, and some teams might delay adopting it until it stabilizes further. In the meantime, simpler or more established integration methods might win out for pragmatic reasons.

Not a Silver Bullet: Overkill and Misconceptions

With all the buzz around MCP, it’s easy to get the impression that it’s the ultimate solution for tool integration — the “USB-C for AI”, as Anthropic cheekily nicknamed it. But reality check: MCP is not a cure-all, and in some cases it’s simply not the right tool for the job. In fact, some skeptics argue that MCP is basically reinventing the wheel. They see it as “yet another API description language” dressed up for AI, noting that we already have a slew of standards (like OpenAPI/Swagger) and techniques for integrating software (source). If an AI agent is truly smart, do we really need to hand-hold it with a special protocol? These critics suggest an advanced LLM could just read traditional API docs or schemas and figure out how to call an API on its own, without this new layer of indirection. From that perspective, MCP might be an over-engineered detour — another abstraction that developers have to learn, while possibly adding latency or complexity compared to direct API calls. This view may be extreme, but it highlights a valid point: if your problem is simple, MCP might be overkill. Even MCP’s proponents admit that for “simple, straightforward projects,” a direct integration or API call could work just fine without introducing the MCP layer (source). There’s no need to force every AI use-case into the MCP mold if a basic script or a single SDK will do. Sometimes, good old hardcoding wins for simplicity and reliability. Another common misconception to dispel is the idea that MCP magically handles all your AI’s knowledge and retrieval needs. Yes, MCP can funnel data from various sources to your model, but it is not a replacement for Retrieval-Augmented Generation (RAG) or other information-retrieval strategies. Your LLM still has a finite context window and limited knowledge. If you need to give it access to, say, a huge database or a ton of documents, you’ll still have to implement search, summarization, or vector retrieval outside of MCP. MCP will dutifully deliver the data through a tool, but figuring out which data the model needs at query time — that’s the retrieval problem that MCP doesn’t solve. We’ve already seen that dumping too much info via MCP can overload the model. So, smart retrieval (perhaps using RAG pipelines) works in tandem with MCP; it isn’t rendered obsolete by it. The protocol is just a pipe; you still have to decide what to send through that pipe. Finally, there’s an architectural pitfall lurking in how easy MCP makes it to wire up powerful actions: it tempts developers to offload way too much logic onto the LLM. Since MCP lets an AI perform operations (write files, send emails, etc.) just by “deciding” to call a tool, one could build an app that basically says, “Let the AI figure everything out and do all the work.” That might sound appealing — until the AI does something dumb or dangerous or simply fails in a way a traditional app wouldn’t. Thoughtful voices in the industry warn that while “MCP makes this easy,” treating the AI as a replacement for all your application logic is a damaging anti-pattern. You could end up with an unreliable system that’s hard to debug and maintain, because the real logic is implicit in the AI’s prompts and there’s less deterministic control flow. Also, if your product just becomes a thin wrapper around an LLM orchestrating tools, you might be giving away your “special sauce” — the AI (from Anthropic, OpenAI, etc.) is doing the heavy lifting, not your code, which can erode your ownership of the core value. In short, MCP should be used judiciously; it’s a means to an end, not a blank check to let the AI run wild or to abdicate designing solid software.

Future Uncertainty and Lock-In Worries

Last but not least, one cannot ignore the uncertainty that comes with a nascent standard like MCP. The AI tools space is moving at breakneck speed, and MCP itself is evolving rapidly. There’s a non-zero chance that the protocol could change so much in upcoming versions that today’s MCP integrations break or become outdated. Even more plausibly, a new competing standard could emerge that gains traction and leaves MCP in the dust. In a fast-paced landscape, betting on the wrong horse is a real concern. Industry observers note that MCP’s evolving nature introduces the risk that previous development could be rendered obsolete, or that an entirely different approach to agent-tool interfacing could overtake it. It wouldn’t be the first time a promising technology got leapfrogged by another. This is not to say MCP is doomed — it has strong momentum right now — but anyone adopting it should keep an eye on the horizon. Flexibility and contingency plans (like modularizing your tool interfaces) could save you a lot of refactoring later if things shift. Another angle to the uncertainty is the question of governance and potential vendor lock-in. MCP came out of Anthropic, and naturally Anthropic’s own Claude model and products were first to embrace it. That led to some early skepticism: Is MCP truly open, or will Anthropic control the roadmap in a self-serving way? Will we see fragmentation if every AI company pushes its own tweak to the protocol? These concerns are valid in any “standard” that starts under a single company’s wing. The good news is Anthropic has taken steps to alleviate the fear of lock-in. They open-sourced the spec and invited other major players to help steer it. By mid-2025, the MCP Steering Committee expanded to include heavyweights like Microsoft (GitHub), OpenAI, and even Google, indicating a more collaborative future. This broader ownership greatly reduces the chance that MCP remains an Anthropic-only affair or that it forks into incompatible versions. In fact, OpenAI and others publicly backing MCP is a strong signal that the industry might converge here. Still, there’s a healthy caution in order: until MCP is fully standardized by an independent body or widely adopted across the board, early adopters carry some risk. If Anthropic’s priorities change or if one of the big contributors has a change of heart, the community could be left holding the bag. For now, though, the trend is positive on governance — it looks like MCP is moving toward a truly open standard rather than a proprietary cul-de-sac. Just go in with your eyes open: in the wild west of AI protocols, today’s gold rush could be tomorrow’s ghost town if you’re not careful. In summary, Anthropic’s Model Context Protocol is a bold and exciting step toward more capable AI agents, but it’s not all sunshine and roses. From technical headaches with stateful connections and context limits, to serious security pitfalls and growing-pains in adoption, MCP has a lot of shortcomings that users must reckon with. It’s a tool with tremendous potential, yet burdened by the baggage of its youth — a double-edged sword that demands both enthusiasm and skepticism. As one might say in IT slang, MCP is awesome, but it ain’t bulletproof. Being aware of these deficiencies is the first step to using the protocol responsibly, or deciding when to hold off until it matures. The MCP train is leaving the station fast, but make sure you know the risks before you jump on board.

--

--

Dmitry Degtyarev
Dmitry Degtyarev

Written by Dmitry Degtyarev

Tips and Tricks for AWS CUR, Savings Plans and Reserved Instances

No responses yet