MCP: the basics to master
How MCP is one of the AI developments changing API integration (and it's not only for local usage)
“The future of integrations isn’t about adding APIs — it’s about connecting intelligence.”
If, like me, you've been around the tech world for a while, you've probably witnessed how REST, GraphQL, and later SDK-based integrations changed the way we connect services. But now, something much bigger is happening: MCP (Model Context Protocol).
MCP is not "just another API spec." It's a protocol designed for AI-native integrations, making it possible for models (like GPT, Claude, etc.) to talk to external systems safely and contextually, through a standardized interface. But be careful: like everything related to AI, it comes with a huge amount of unnecessary buzz and hype, alongside a lot of genuinely good things, like mcpservers.org.
That said, let’s jump in step by step…
🧩 What is MCP?
MCP stands for Model Context Protocol. It defines how language models can discover, list, and call external tools, databases, and APIs — securely and in a structured way.
Think of it as OpenAPI meets LangChain, but standardized.
It allows a model to:
- Discover available tools and resources (list_resources)
- Execute them (call_tool)
- Understand their schema and context dynamically
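Under the hood, these operations travel as JSON-RPC 2.0 messages between the client and an MCP server. Here's a simplified sketch of that exchange (the method names follow the public MCP spec; the tool name and arguments are purely illustrative):

```
// (illustrative) client → server: list the available tools
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// (illustrative) client → server: call one tool with structured arguments
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call",
  "params": { "name": "weather.get_forecast",
              "arguments": { "city": "Berlin", "days": 3 } } }
```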
And the best part: it’s not tied to a local environment. MCP can work:
- Locally (like in Cursor IDE or Claude Code)
- Over a network (e.g., via REST, gRPC, or WebSocket)
- Even remotely, exposing your API endpoints to AI safely
⚙️ Why it Matters
Before MCP, most AI integrations were ad hoc. Developers hardcoded functions or built brittle bridges between APIs and LLMs.
Now, MCP makes integrations:
- Discoverable: The model can ask what tools exist.
- Standardized: Any compliant model can use them.
- Context-aware: MCP can expose not only endpoints but also metadata, descriptions, and examples.
- Secure: You can scope what’s visible and executable.
It’s the missing layer between AI and real-world systems.
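To make the "context-aware" point concrete: a tool definition exposed over MCP typically carries a name, a human-readable description, and a JSON Schema for its inputs, and that metadata is exactly what the model reads to decide when and how to call the tool. A minimal sketch (the field names follow the MCP tool format; the weather tool itself is just an illustration):

```json
{
  "name": "weather.get_forecast",
  "description": "Fetch the weather forecast for a city over the next N days",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string", "description": "City name, e.g. Berlin" },
      "days": { "type": "number", "description": "Number of days to forecast" }
    },
    "required": ["city"]
  }
}
```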
🚀 The Basics: Listing and Calling MCP Tools
Here's the core idea, illustrated in Node.js-like pseudocode:
```js
import { MCPClient } from "mcp-sdk";

const mcp = new MCPClient({
  baseUrl: "https://api.example.com/mcp",
  token: process.env.API_TOKEN
});

// 1️⃣ List all resources
const tools = await mcp.listResources();
console.log(tools);

// 2️⃣ Call a specific tool
const result = await mcp.callTool("weather.get_forecast", {
  city: "Berlin",
  days: 3
});
console.log(result);
```
That’s the simplicity of MCP — once your model or environment supports it, you can discover and use tools just like this.
🧠 Example 1: MCP with a Weather API
Let’s expose a simple Weather API through MCP.
manifest.json
```json
{
  "name": "Weather API",
  "description": "Fetch current and forecasted weather data",
  "resources": [
    {
      "path": "weather.get_forecast",
      "args": {
        "city": "string",
        "days": "number"
      },
      "returns": "Forecast data for the given city and period"
    }
  ]
}
```
Backend (Node.js)
```js
import express from "express";

const app = express();
app.use(express.json()); // needed so req.body is parsed as JSON

app.post("/weather.get_forecast", async (req, res) => {
  const { city, days } = req.body;
  // getWeather is assumed to wrap your actual weather provider
  const forecast = await getWeather(city, days);
  res.json({ city, forecast });
});

app.listen(8080);
```
Now your MCP endpoint can be discovered by any model that speaks MCP. No custom SDKs. No fine-tuning.
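To close the loop, here's how the hypothetical client from the first snippet could discover and call this tool once the manifest is exposed (same mcp-sdk pseudocode as before):

```js
// Reusing the hypothetical `mcp` client from the first example
const tools = await mcp.listResources(); // now includes "weather.get_forecast"

const forecast = await mcp.callTool("weather.get_forecast", {
  city: "Lisbon",
  days: 5
});
console.log(forecast); // { city: "Lisbon", forecast: ... }
```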
🧾 Example 2: MCP for a Database Query
Expose your PostgreSQL as a contextual MCP resource.
```js
// Assumes the same Express app with express.json(), and `db` as a connected
// PostgreSQL client or pool (e.g. from the `pg` package)
app.post("/db.query", async (req, res) => {
  const { sql } = req.body;
  const result = await db.query(sql);
  res.json(result.rows);
});
```
Manifest:
```json
{
  "path": "db.query",
  "args": { "sql": "string" },
  "returns": "Query result rows"
}
```
Now, an AI model can literally do:
“List the last 5 customers who joined from Brazil.”
And the MCP layer will handle the structured call. With the right scoping (a read-only database role, allow-listed tables, query limits), it becomes a guarded SQL bridge between AI and your data; without that scoping, it's just raw SQL execution.
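A minimal hardening sketch of that idea, assuming a hypothetical readOnlyPool (a pg Pool connected with a read-only role) and that SELECT-only access is enough for this tool:

```js
app.post("/db.query", async (req, res) => {
  const { sql } = req.body;

  // Very rough guard: allow only a single SELECT statement
  if (!/^\s*select\b/i.test(sql) || sql.includes(";")) {
    return res.status(400).json({ error: "Only single SELECT statements are allowed" });
  }

  try {
    // readOnlyPool is a hypothetical pg Pool using a read-only database role
    const result = await readOnlyPool.query(sql);
    res.json(result.rows);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});
```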
🧮 Example 3: MCP for GitHub Automation
Let’s automate GitHub issues directly through MCP.
```json
{
  "path": "github.create_issue",
  "args": {
    "repo": "string",
    "title": "string",
    "body": "string"
  },
  "returns": "GitHub issue link"
}
```
```js
app.post("/github.create_issue", async (req, res) => {
  const { repo, title, body } = req.body;
  const issue = await createIssueOnGitHub(repo, title, body);
  res.json({ issue_url: issue.html_url });
});
```
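The snippet above doesn't show createIssueOnGitHub; a minimal sketch against the GitHub REST API could look like this (it assumes a GITHUB_TOKEN environment variable, repo in "owner/name" form, and Node 18+ for the built-in fetch):

```js
// Hypothetical helper for the endpoint above, using GitHub's documented
// "create an issue" endpoint: POST /repos/{owner}/{repo}/issues
async function createIssueOnGitHub(repo, title, body) {
  const response = await fetch(`https://api.github.com/repos/${repo}/issues`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ title, body })
  });

  if (!response.ok) {
    throw new Error(`GitHub API responded with ${response.status}`);
  }
  return response.json(); // includes html_url, the issue link the manifest promises
}
```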
Now, AI can create issues naturally:
“Open a GitHub issue in the weddingpassport.io repo about CORS errors.”
This is hands-free ops done right.
But… there are a lot of flaws…
💀 The “Useless but Allegedly Useful” MCP Example
Not all integrations that look smart actually are. A perfect example is an MCP for Git commands — something like this:
```json
{
  "path": "git.commit",
  "description": "Create a Git commit with a message",
  "args": {
    "message": "string"
  },
  "returns": "Result of the commit command"
}
```
And the corresponding endpoint:
```js
app.post("/git.commit", async (req, res) => {
  const { message } = req.body;
  const { exec } = await import("child_process");
  // Note: interpolating `message` straight into a shell command also opens
  // the door to command injection, which only reinforces the point below
  exec(`git commit -am "${message}"`, (error, stdout, stderr) => {
    if (error) return res.status(500).json({ error: stderr });
    res.json({ result: stdout });
  });
});
```
At first glance, it feels cool — “Wow, the AI can commit code directly!” But think for a second.
🚫 Why it’s useless:
- AI already “knows” Git — Large Language Models can generate git commit, git push, and even multi-step shell workflows without any MCP layer.
- No real context benefit — You’re not giving the AI access to semantic or private knowledge. It’s just running a command that’s already in its pretraining.
- High risk, low reward — Allowing automated commits is dangerous without human review.
- Adds friction instead of removing it — Instead of “commit this change,” you now have to structure a JSON call.
This is the type of MCP trap many developers will fall into: rebuilding what the AI already masters natively, mistaking execution for integration.
💡 If your MCP doesn’t expose unique data, actions, or permissions — it’s probably not worth creating.
🌍 MCP Beyond Local
Most people think MCP only works inside tools like Cursor or Claude Code, but it can live anywhere:
- Inside your backend microservices
- As an API gateway for AI agents
- Deployed remotely to expose internal knowledge bases securely
Here’s a quick config for a remote MCP registry:
```bash
# MCP Gateway example (illustrative image name; adjust to your actual gateway)
docker run -p 9090:9090 \
  -v "$(pwd)/manifest.json:/app/manifest.json" \
  -e MCP_CONFIG=/app/manifest.json \
  ghcr.io/mcp/mcp-gateway:latest
```
Boom. You just made your service MCP-compatible and remotely discoverable.
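Once the gateway is up, the same hypothetical mcp-sdk client from the first example could point at it instead of a local process:

```js
const remote = new MCPClient({
  baseUrl: "http://localhost:9090", // the gateway started by the docker command above
  token: process.env.API_TOKEN
});

const tools = await remote.listResources();
console.log(tools); // the Weather API resources from manifest.json
```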
🧭 Conclusion
MCP is the missing protocol that finally unifies AI and system integration. It brings discoverability, structure, and safety to how models interact with the world — from weather APIs to corporate databases.
But remember:
“Not every API deserves to be an MCP. Use it where intelligence meets action.”
🧑‍💻 TL;DR
| Concept | Description |
|---|---|
| What | Model Context Protocol – a standard for AI to discover and use APIs/tools |
| Why | Enables structured, safe, discoverable integrations |
| Use cases | Data queries, automations, domain-specific tools |
| Avoid | Contextless or redundant MCPs |
| Scope | Local or remote — not limited to IDEs |