Introduction:
In the world of AI-powered applications, it's easy to assume that exposing your existing APIs to an LLM magically turns them into Model Context Protocol (MCP) servers. After all, both provide endpoints and functions, right?
If you’ve been experimenting with building MCP servers or auto-generating them from existing APIs, you’ve probably realized that it doesn’t just “work out of the box.” The truth is simple: your API is not an MCP. In this post, we will explore why this is the case, what an MCP really is, and how to design one properly.
How Not to Build an MCP Server
The fastest way to fail at building an MCP server is to treat it like an API wrapper generator. Many developers start by pointing an MCP SDK or code generator at their REST API and expect it to produce a fully functional MCP server, one that LLMs can reason with and use effectively.
The result? A bloated MCP server with hundreds or thousands of auto-generated tools that an LLM can’t meaningfully navigate. Even with recent advancements in large context windows and multi-million-token capacities, LLMs still perform best with smaller, well-structured contexts.
Feeding an LLM thousands of similar API endpoints as “tools” only creates confusion. Instead of becoming more capable, it becomes more uncertain about which tool to use and when.
So, before you turn your API catalog into an MCP server, let’s take a step back and understand what an MCP actually is.
What Is an MCP?
Model Context Protocol (MCP) defines how an AI model (LLM) interacts with external systems in a structured, safe, and explainable way.
It standardizes how tools, resources, and prompts are exposed to LLMs so they can act intelligently within well-defined boundaries.
An MCP server isn’t just a bundle of endpoints. It’s a contextual ecosystem that connects an LLM to data and actions with intent and clarity.
What Makes Up an MCP Server
| Component | What It Is | Examples |
| --- | --- | --- |
| Tools | Executable actions or functions that the AI can invoke | API calls, running scripts, or multi-step workflows |
| Resources | Data sources the AI can access and load into its context | Local files, database queries, cloud documents |
| Prompts | Reusable templates or structured guides for LLM interactions | Code review prompts, bug triage templates, onboarding flows |
An effective MCP server organizes these elements carefully, giving the LLM exactly what it needs to achieve a goal, not everything it could possibly do.
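To make those three building blocks concrete, here is a minimal sketch assuming the official Python MCP SDK's FastMCP interface; the server name, tool, resource, and prompt are all invented for illustration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-assistant")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the current status of a customer order by its ID."""
    # A real server would call your order system here.
    return f"Order {order_id}: shipped"

@mcp.resource("tickets://{ticket_id}")
def read_ticket(ticket_id: str) -> str:
    """Load a support ticket into the model's context."""
    return f"Ticket {ticket_id}: customer reports a login failure."

@mcp.prompt()
def triage_ticket(ticket_id: str) -> str:
    """A reusable template that guides the model through bug triage."""
    return f"Read ticket {ticket_id}, classify its severity, and suggest next steps."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Notice that each element carries a description aimed at the model, not just a route and a payload schema.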
Why Your API Doesn’t Make a Good MCP
1. Too Many Tools, Too Little Focus
Let’s imagine you auto-generate MCP tools for every API endpoint you have, maybe hundreds or even thousands.
LLMs are notoriously bad at choosing from too many options. Giving an LLM 1,000 tools to pick from is like asking someone to find one specific screw in a bucket of bolts.
Even with “infinite context” or million-token windows, more context doesn’t mean better reasoning. The model’s ability to make accurate, confident choices actually drops when overloaded with irrelevant or redundant information.
Don’t overwhelm the LLM with tools – curate them.
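To make curation concrete, here is a minimal sketch; the operation names and the in-memory catalog are invented stand-ins for whatever your generator actually produces:

```python
# Invented stand-in for the tool metadata an API-to-MCP generator emits.
auto_generated = [
    {"name": "listUsers"}, {"name": "getUserById"}, {"name": "patchUserFlags"},
    {"name": "searchOrders"}, {"name": "createReturn"}, {"name": "rotateApiKey"},
    # ...hundreds more in a real catalog
]

# Keep only the operations that map to genuine user goals.
CURATED = {"searchOrders", "createReturn"}

exposed = [op for op in auto_generated if op["name"] in CURATED]
print([op["name"] for op in exposed])  # ['searchOrders', 'createReturn']
```

The mechanics are trivial; the hard part is deciding which handful of operations actually represent user goals.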
2. APIs Aren’t Written for LLMs
Most APIs were written for developers — not for language models.
A typical API description might look perfectly clear to a human engineer, but to an LLM, it’s often vague and incomplete. LLMs rely on rich examples, consistent naming, and contextual hints to a greater extent than humans do.
When we build MCP tools, we don’t just describe them — we teach the model how and when to use them.
That’s why tool descriptions in MCP servers are often verbose, structured in XML or JSON, and accompanied by examples. We also test them using evals — automated evaluations that check whether the LLM is using the right tool for the right purpose.
So, if your API documentation isn’t written with the same clarity, structure, and examples that an LLM needs, it’s not ready to be an MCP.
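As an illustration, here is what an LLM-oriented tool description might look like, again assuming the FastMCP decorator API; the tool, its examples, and the `get_customer` tool it mentions are all hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-lookup")

@mcp.tool()
def find_active_customer(query: str) -> str:
    """Find an active (non-churned) customer by name or email.

    When to use: the user asks about a customer or account by name or
    email and you need their profile before taking any other action.
    When NOT to use: the user already has a customer ID; use the
    (hypothetical) get_customer tool instead.

    Examples:
      query="jane@acme.com"  -> matches on email
      query="Jane Doe"       -> fuzzy-matches on full name
    """
    # Stub result; a real implementation would query your CRM.
    return f"Best match for '{query}': Jane Doe (id=c_123, status=active)"
```

Compare that to a typical OpenAPI summary like "GET /customers - returns customers" and the gap becomes obvious.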
3. APIs Focus on Resources, LLMs Focus on Goals
Most business APIs are designed around resource management — creating users, updating records, and managing assets.
LLMs, on the other hand, don’t think in terms of “resources.” They think in terms of goals.
An LLM doesn’t care about POSTing to /createOrder — it cares about completing a purchase. It doesn’t want to “fetch user data” — it wants to help you find an active customer.
The OpenAPI schemas that define modern APIs were never designed with this kind of intent-driven reasoning in mind. That’s why simply mapping your endpoints to MCP tools often produces disappointing results.
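Here is a sketch of that shift in practice: one goal-oriented tool that wraps three low-level calls. The underlying functions are stubs standing in for your real endpoints:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("checkout")

# Stubs standing in for POST /createOrder, POST /inventory/reserve,
# and POST /payments in a real system.
def _create_order(customer_id: str, sku: str) -> str:
    return "ord_42"

def _reserve_inventory(sku: str) -> bool:
    return True

def _charge_payment(customer_id: str, order_id: str) -> bool:
    return True

@mcp.tool()
def complete_purchase(customer_id: str, sku: str) -> str:
    """Complete a purchase for a customer in one step: create the order,
    reserve stock, and take payment. Prefer this over calling order,
    inventory, or payment endpoints individually."""
    order_id = _create_order(customer_id, sku)
    if not _reserve_inventory(sku):
        return f"Order {order_id} failed: item {sku} is out of stock."
    if not _charge_payment(customer_id, order_id):
        return f"Order {order_id} failed: payment was declined."
    return f"Purchase complete: order {order_id} confirmed."
```

The LLM now reasons about one outcome ("complete a purchase") instead of orchestrating three resource-level endpoints itself.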
The Hybrid Solution
So how do you go from a developer-oriented API to an LLM-ready MCP server?
The answer lies in a hybrid approach — one that combines automation with thoughtful curation.
Here’s a roadmap:
- Auto-generate your MCP server code. Use existing SDKs or generators to get started quickly; for more detail, see my previous article on converting your API to an MCP server.
- Remove unnecessary tools. Don't expose every endpoint, only those that represent meaningful user goals or actions.
- Evaluate and rewrite tool descriptions. Ensure each tool has a rich, example-driven description that makes sense to an LLM.
- Add new, higher-level tools. Create goal-oriented tools that abstract low-level API calls into outcomes the LLM can reason about.
- Write your evals. Continuously test whether the LLM invokes the right tools for the right scenarios; see the sketch after this list.
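For that last step, an eval can start as nothing more than a table of scenarios and expected tool choices. In this sketch, `ask_model_for_tool_choice` is a placeholder for however you actually drive your model; a trivial stub keeps it runnable:

```python
EVAL_CASES = [
    {"request": "Buy two of SKU-991 for customer c_123", "expected": "complete_purchase"},
    {"request": "Which account belongs to jane@acme.com?", "expected": "find_active_customer"},
]

def ask_model_for_tool_choice(request: str) -> str:
    # Placeholder: a real eval would send the request plus your server's
    # tool list to the LLM and parse which tool it calls.
    return "complete_purchase" if "buy" in request.lower() else "find_active_customer"

def run_evals() -> None:
    for case in EVAL_CASES:
        chosen = ask_model_for_tool_choice(case["request"])
        status = "PASS" if chosen == case["expected"] else "FAIL"
        print(f"{status}: {case['request']!r} -> {chosen} (expected {case['expected']})")

if __name__ == "__main__":
    run_evals()
```

As your server grows, these cases become the regression suite that tells you when a new or reworded tool starts confusing the model.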
By following this hybrid approach, you’ll bridge the gap between traditional API design and the intent-driven, reasoning-oriented world of MCP.
Summary:
Building an MCP server isn’t just about wiring APIs to an AI model — it’s about creating a thoughtful interface between human goals and machine reasoning.
APIs are for developers. MCPs are for models.
If you take the time to understand this distinction and design your MCP server with purpose, your LLM will not only perform better but also appear more intelligent in how it uses your systems.