The Model Context Protocol (MCP) is currently being positioned by numerous consulting firms as a comprehensive solution for integration and process challenges in companies. However, MCP is not suitable for every application and has some limitations—as is so often the case, the optimal solution lies in the use of different technologies. In this blog article, we explain the weaknesses our customers have identified when using MCP and what better solutions might look like.
What MCP is – and what it is not
What is the Model Context Protocol? MCP is a standard that allows AI models such as Claude or ChatGPT to access tools and data sources in a more structured way – for example, tickets, documents, or internal APIs. It defines how tools are described and how an agent can call these tools, e.g., to perform a search or modify data (actions).
It is important to note the distinction: MCP is a transport and integration protocol, but not a search system, an enterprise graph, or a relevance ranking. It does not answer the question of which data is truly relevant, up-to-date and trustworthy for a specific task – this is precisely where most problems arise in practice with AI agents.
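To make this concrete: an MCP tool is essentially a named operation with a JSON-Schema description of its inputs. The following is a simplified Python sketch of that pattern – the "search_tickets" tool and its fields are illustrative, not taken from a real MCP server:

```python
# Simplified sketch of how an MCP server might describe a tool.
# The field layout (name, description, JSON-Schema input) follows the
# MCP tool-description pattern; the tool itself is hypothetical.
ticket_search_tool = {
    "name": "search_tickets",
    "description": "Full-text search over the internal ticket system.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    """Check that a call supplies every required argument of the tool."""
    required = tool["inputSchema"].get("required", [])
    return all(key in arguments for key in required)
```

Because the description is machine-readable, any MCP-aware model can discover and call the tool in the same uniform way – which is precisely the transport problem MCP solves, and nothing more.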
Why MCP is currently trending
MCP is currently trending because it solves an acute problem: the complex, expensive point-to-point integration of AI agents with many systems. As an open standard, MCP promises a kind of “USB-C for AI,” i.e., a uniform plug-in system that allows different models to access tools and data in the same way – significantly reducing integration effort and time-to-market. Added to this is a rapidly growing ecosystem: in 2025, thousands of MCP servers were created in a short period of time, and a significant proportion of large companies are already testing or using MCP productively, reinforcing the impression that you can’t afford to miss the boat. At the same time, consultancies and tool manufacturers position MCP as a lever to make agents productive faster and reduce integration costs by 25–40% – an argument that contributes significantly to the current hype but often obscures the fact that MCP does not solve relevance, context quality, or governance on its own.
Why good AI agents fail because of context
Why is context so crucial? Large language models (LLMs) have a fixed context window – they can only keep and process a limited number of tokens “in their heads” at any given time. If too much information is pushed into it, the model’s attention becomes diluted: relevant facts compete with noise, older documents with newer ones, and contradictory sources with each other.
This phenomenon is often described as “context rot”: the larger the data set, the more difficult it is for the model to reliably find the truly relevant information and link it correctly across multiple steps. Current benchmarks show that systems with a strong enterprise context layer – i.e., good search, graphs, and signals – deliver the correct answer significantly more often than generic assistants, even when using the same base models.
So companies need two things:
- strong models for reasoning,
- and a context layer that ensures that only the right information ends up in the context at the right moment.
To find the right context, a retrieval layer is usually used; many are familiar with this principle from retrieval-augmented generation (RAG).
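The retrieval step can be illustrated with a deliberately minimal sketch: rank candidate documents against the query and admit only the best ones that still fit into a fixed token budget. Real systems would use embeddings or BM25 instead of the word-overlap score assumed here:

```python
# Minimal retrieval sketch: score documents against a query and keep only
# the best ones that fit into a fixed token budget. The word-overlap score
# is a stand-in for a real embedding or BM25 ranker.
def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def select_context(query: str, docs: list[str], token_budget: int = 50) -> list[str]:
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    chosen, used = [], 0
    for doc in ranked:
        cost = len(doc.split())  # crude token estimate via word count
        if used + cost <= token_budget:
            chosen.append(doc)
            used += cost
    return chosen
```

The key property is the hard budget: no matter how many documents match, only the top-ranked ones that fit are passed to the model, which is exactly the filtering MCP itself does not provide.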
The limits of MCP in enterprise use
Where does MCP reach its practical limits? On paper, MCP seems like a plug-and-play approach: register tools, define schema, done. In real enterprise environments, however, it quickly becomes apparent that the protocol has several structural weaknesses – especially when companies use it as the sole integration layer for AI.
Typical limitations:
- No built-in relevance management
MCP defines how a tool is called, but not which tool is suitable for which question or how results should be prioritized. Companies must either build this logic themselves – for each tool landscape and use case – or purchase it (more information to follow at a later date).
- Inconsistent context from different tools
MCP-based tools deliver very different amounts and formats of data: a long Slack thread here, a few brief tickets there. In practice, a loud source (e.g., chat logs) can dominate the context and crowd out more important, structured systems (e.g., CRM, DMS). Results therefore have to be prioritized explicitly.
- Overuse of tools and context windows
Many agents compensate for weak search capabilities by simply calling more tools and loading more data – either very “deep” with many sequential calls or very “broad” with parallel queries. This drives up costs, clogs the context window, and increases the error rate without actually producing better answers.
- Security and governance issues
Research shows that MCP creates additional attack surfaces, e.g., through manipulated tool descriptions, insufficiently tested third-party tools, or a lack of central governance. Companies need additional gateways, review processes, and policies here – MCP itself does not provide this “out of the box.”
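The first limitation above – relevance management – is exactly the logic the integrator has to supply on top of MCP. A minimal, hypothetical sketch of routing a question to the best-matching tool (tool names and descriptions are invented):

```python
# Hedged sketch of the relevance logic MCP leaves to the integrator:
# given several registered tools, pick the one whose description best
# matches the question. All tool names and descriptions are hypothetical.
TOOLS = {
    "search_tickets": "support tickets incidents outages customer complaints",
    "search_wiki": "policies guidelines onboarding process documentation",
    "search_crm": "customers accounts contracts sales opportunities",
}

def route(question: str) -> str:
    words = set(question.lower().split())
    def overlap(desc: str) -> int:
        return len(words & set(desc.split()))
    # Pick the tool with the largest keyword overlap with the question.
    return max(TOOLS, key=lambda name: overlap(TOOLS[name]))
```

In production, this naive keyword match would be replaced by learned signals from an enterprise index – but even this toy version shows that the routing decision lives outside the protocol.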
MCP is therefore a useful protocol – but as the sole answer to enterprise integration and context problems, it is overrated.
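One way to narrow the governance gap mentioned above is a thin gateway in front of the MCP tools: an allowlist of reviewed tools whose descriptions are pinned by hash, so a silently changed (“poisoned”) description is rejected. A hedged sketch with invented tool data:

```python
# Sketch of a minimal governance gateway in front of MCP tools: only
# allow-listed tools may be called, and each tool description is pinned
# by hash so a silently changed ("poisoned") description is rejected.
import hashlib

def pin(description: str) -> str:
    return hashlib.sha256(description.encode()).hexdigest()

APPROVED = {  # tool name -> hash of the reviewed description (hypothetical)
    "search_tickets": pin("Full-text search over the internal ticket system."),
}

def gate(tool_name: str, description: str) -> bool:
    """Permit the call only if the tool is approved and unchanged."""
    return APPROVED.get(tool_name) == pin(description)
```

This is only one of the additional safeguards (alongside review processes and policies) that companies have to add themselves, since MCP does not ship them out of the box.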
Learn about different approaches now. In our white paper MCP vs. Index-Based approaches, we explain the technical functions in detail. You can download the white paper here free of charge:
Why a dedicated context layer is essential
How can the context problem be better solved? Experience from enterprise search projects in recent years shows that the key is a dedicated context layer (or index) that stores and connects internal knowledge as a “corporate memory” – regardless of whether MCP, direct API connection, or other integration methods are used later on. This context layer is exactly what we have been developing with amber since 2020.
Such a context layer typically includes:
- Indexed data from many systems
Documents, tickets, chats, code, CRM objects, and much more are indexed in a permission-aware manner, i.e., always taking existing access rights into account.
- Enterprise graph/context graph
Relationships between projects, customers, teams, tickets, and documents are explicitly linked. This enables the system to better understand which entity is meant in an ambiguous query and which sources are relevant to it.
- Enterprise memory
The system learns from agent runs: which tools led to good answers, which parameters were helpful, which paths were inefficient or incorrect. These experiences are incorporated into future tool decisions – regardless of whether the tools are connected via MCP or other means.
This context layer acts as both a filter and an amplifier: it reduces noise and enriches answers with the resources that are actually relevant. MCP can then build on this layer instead of having to replace it.
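A permission-aware lookup – the first building block of such a context layer – might look like this in a deliberately simplified form; the index entries, groups, and the word-overlap ranking are all illustrative stand-ins:

```python
# Sketch of a permission-aware lookup in a context layer: every indexed
# item carries an access-control list, and retrieval filters on the
# requesting user's groups before ranking. Data and groups are invented.
INDEX = [
    {"text": "Q3 sales forecast", "acl": {"sales", "management"}},
    {"text": "VPN setup guide", "acl": {"everyone"}},
    {"text": "Salary bands 2025", "acl": {"hr"}},
]

def search(query: str, user_groups: set[str]) -> list[str]:
    # Filter first: the user never sees items outside their ACLs.
    visible = [item for item in INDEX
               if item["acl"] & (user_groups | {"everyone"})]
    words = set(query.lower().split())
    def rel(item: dict) -> int:
        return len(words & set(item["text"].lower().split()))
    # Then rank only the visible items by (toy) relevance.
    return [item["text"]
            for item in sorted(visible, key=rel, reverse=True)
            if rel(item) > 0]
```

The ordering matters: filtering by permission happens before ranking, so access control is structural rather than an afterthought bolted onto each tool.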
What customers say who have compared MCP with a context layer
At amber, we rely precisely on the context layer described above. Several companies have now compared the two approaches. The lesson learned: The real leverage lies in the context layer, not in the pure MCP approach. In a blind study with 280 realistic enterprise queries, human reviewers preferred answers based on a context layer 1.9 times more often than answers from ChatGPT Enterprise and 1.6 times more often than Claude – despite the same underlying models.
From the customer’s point of view, this is particularly evident in practice: the context layer is significantly more reliable at finding the right document, understands company-specific terms, solves multi-level technical questions, and provides concrete next steps, while MCP-driven approaches often call many tools, add a lot of noise to the context, and still miss the actual “single source of truth.” Companies find that approaches such as those pursued by Claude and ChatGPT try to compensate for search gaps in MCP setups with more tool calls – which increases costs, complicates governance, and exacerbates context overload – while solutions such as amber deliver more stable, traceable answers through deep connectors, specialized indexes, an enterprise graph, and enterprise memory.
Want to try amber? Then sign up for our demo now and experience amber live:
Practical observations: When MCP reaches its limits in companies
What does this look like in real projects? Companies that introduce “MCP-First” often observe similar patterns – regardless of industry or tool stack.
Typical situations:
- Many tool calls, little added value
Agents call dozens of tools, search Slack, Drive, tickets, and wikis, but still deliver incomplete or incorrect answers because the authoritative data source is never clearly identified. The activity looks comprehensive and productive – but ultimately it mostly generates LLM costs.
- Dominance of individual systems
A system with long, rich content – often chat or email – crowds structured, authoritative sources such as CRM, HR systems, or specialist wikis out of the context. This leads to answers that are based more on conversation history than on guidelines or policies, because the amount of context a source contributes is not tied to its relevance.
- Difficulties in changing course
Once the agent has “decided” that, for example, Slack can answer a question, it is difficult for it to switch to other tools. As a result, important documents or systems are ignored, even though they would be the better source.
- Costs and limits
Every tool call and every additional document consumes tokens and sometimes API quotas. Companies report that broad MCP setups quickly become expensive and hit the rate limits of individual systems (causing requests to be rejected) without a corresponding increase in response quality.
None of these are fundamental arguments against MCP – but they clearly show that without a strong context layer, MCP is just another integration mechanism, not the solution to the context problem.
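The dominance problem described above can be mitigated mechanically by capping each source’s share of the context window. A minimal sketch (token counts are crudely estimated by word count, and the source names are invented):

```python
# Sketch: cap each source's share of the context window so a verbose
# system (e.g. chat) cannot crowd out terser, structured sources.
def balance_context(results: dict[str, list[str]], per_source_cap: int = 20) -> list[str]:
    """results maps source name -> snippets; keep snippets from each
    source only until that source's (crudely estimated) token cap is hit."""
    kept = []
    for source, snippets in results.items():
        used = 0
        for snippet in snippets:
            cost = len(snippet.split())
            if used + cost > per_source_cap:
                break  # this source has spent its budget
            kept.append(f"[{source}] {snippet}")
            used += cost
    return kept
```

A per-source cap is a blunt instrument – a real context layer would weight sources by authority and relevance signals – but even this simple version prevents one loud system from monopolizing the window.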
Target vision: This is what a sensible architecture with MCP looks like
How can MCP be meaningfully integrated into an enterprise architecture? A goal-oriented picture consists of three levels:
1. Context foundation: Enterprise search
- Indexing of all relevant systems (docs, tickets, CRM, code, chats), including permissions.
- Creation of an enterprise or context graph that brings together entities across systems.
- Signal-based relevance (usage, timeliness, ownership, links).
2. Agent and AI layer
- LLM-based assistants and agents use this context layer via search and retrieval APIs.
- Enterprise memory optimizes tool usage and paths for typical tasks in the long term.
3. Integration layer: MCP & Co.
- MCP serves as a standardized interface to bring this curated context to various clients and platforms – such as IDEs, chat interfaces, or other agent frameworks.
- External tools can use MCP to access the same enterprise context that employees already use in native search.
In this architecture, MCP is a transport and integration layer, while relevance, security, compliance, and context quality are anchored in the enterprise search and graph layers. Workflow builder tools such as n8n, Make, Zapier & Co. can then always base their actions and decisions on the latest knowledge and, with the help of MCP, trigger the right actions instead of making wrong decisions based on wrong information.
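The three-level split can be sketched in a few lines: the context layer owns retrieval and ranking, while the MCP layer is a thin adapter that only transports the curated result. All class and tool names here are illustrative, and the ranking is a toy word-overlap stand-in:

```python
# Sketch of the layered target architecture: a context layer does
# retrieval and ranking; the MCP layer is a thin adapter that merely
# transports the curated result to clients. All names are illustrative.
class ContextLayer:
    """Owns relevance: indexing, permissions, ranking (stubbed here)."""
    def __init__(self, documents: list[str]):
        self.documents = documents

    def retrieve(self, query: str, top_k: int = 2) -> list[str]:
        words = set(query.lower().split())
        ranked = sorted(self.documents,
                        key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return ranked[:top_k]

class McpAdapter:
    """Transport only: exposes the context layer as a single tool."""
    def __init__(self, context: ContextLayer):
        self.context = context

    def call_tool(self, name: str, arguments: dict) -> list[str]:
        if name != "enterprise_search":
            raise ValueError(f"unknown tool: {name}")
        return self.context.retrieve(arguments["query"])
```

The point of the split is that swapping MCP for another integration mechanism leaves the relevance logic untouched – and conversely, improving the context layer benefits every client connected via MCP.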
Decision guide: When MCP makes sense – and when it doesn’t
When is it worthwhile to use MCP in a targeted manner – and when should companies invest in their context layer first?
MCP makes sense if:
- You only work with small amounts of data.
- The main focus is on “action tools,” such as creating tickets or triggering workflows.
- A powerful enterprise search or context system already exists that is connected via MCP.
Companies should first build a dedicated context layer if:
- Many heterogeneous systems are involved (drives, M365, DMS, CRM, HR, code, tickets, chats).
- Accuracy and traceability of responses are business-critical.
- The focus is on complex, multi-step tasks, such as analyses, decisions, and technical troubleshooting flows.
- Security and compliance requirements (GDPR, role-based access, hosting location, ISO certifications) play a major role.
The most important question is: “Have I solved my context problem – or am I trying to conceal it with more protocol logic?” If the context layer is missing, MCP will only shift the symptoms, not eliminate the cause.
Conclusion and concrete next steps
MCP is a useful component in the modern AI tech stack, but it is no substitute for a robust enterprise context. Companies that rely solely on MCP are optimizing in the wrong place: they standardize access to data and tools without ensuring that the context provided is truly relevant and secure.
If you want to introduce MCP or AI agents into your company, you can start like this:
- Take inventory of your most important systems and data sources.
- Identify 3–5 critical use cases where accuracy and completeness are crucial.
- Build or evaluate an enterprise search and context platform such as amber, which already neatly maps permissions, graphs, and signals.
- Use MCP specifically to bring this context into your preferred AI tools and agents – not the other way around.
If you would like to see what such an architecture looks like in practice and how you can bundle your existing stack (e.g., Microsoft 365, Confluence, Jira, Salesforce, Slack) into an enterprise search platform before deploying MCP, please feel free to contact us. We’ll show you in a demo how to turn your existing data landscape into a real context layer – and use MCP where it really adds value: