What Is MCP (Model Context Protocol)? A Developer's Guide
A practical guide to the Model Context Protocol (MCP) -- Anthropic's open standard for connecting AI models to external tools and data. Architecture, TypeScript server examples, and security best practices.

AI Models Are Powerful -- But They Can't Do Anything Without Context
Large language models can reason, write code, and answer complex questions -- but they can't read your database, call your internal APIs, or access files on your machine. Every useful AI integration has required bespoke glue code: custom plugins, one-off API wrappers, hand-rolled tool definitions. If you've built integrations between an LLM and external systems, you've written this code. And you've written it again every time you switched models or hosts.
The Model Context Protocol (MCP) is Anthropic's answer to this fragmentation. It's an open standard that defines how AI applications connect to external data sources and tools through a single, uniform interface. Think of it as USB-C for AI -- one protocol, any data source, any model. I've been building MCP servers since the protocol launched, and the productivity gain is real. You write a server once, and it works with Claude Desktop, Claude Code, Cursor, and any other MCP-compatible client.
What Is MCP (Model Context Protocol)?
Definition: The Model Context Protocol (MCP) is an open standard created by Anthropic that defines a client-server architecture for connecting AI models to external tools, data sources, and services. MCP provides a universal interface so that any compliant AI application (client) can communicate with any compliant integration (server) without custom glue code.
Before MCP, every AI integration was a snowflake. Want Claude to query your database? Write a custom tool. Want it to read files? Write another tool. Want it to call the GitHub API? Another one. Now multiply that by every AI application you use -- Claude Desktop, your IDE assistant, your internal chatbot. Each one needs its own integration layer.
MCP collapses this N-times-M problem into N-plus-M. Build one MCP server for PostgreSQL, and every MCP client can use it. Build one MCP client, and it can talk to every MCP server. The ecosystem compounds.
MCP Architecture: Clients, Servers, and the Host
MCP follows a client-server architecture with three distinct roles. Understanding these roles is essential before writing any code.
```
+---------------------------------------------+
|                    HOST                     |
|     (Claude Desktop, IDE, Custom App)       |
|                                             |
|  +----------+  +----------+  +----------+   |
|  |   MCP    |  |   MCP    |  |   MCP    |   |
|  | Client 1 |  | Client 2 |  | Client 3 |   |
|  +----+-----+  +----+-----+  +----+-----+   |
|       |             |             |         |
+---------------------------------------------+
        |             |             |
   +----+-----+  +----+-----+  +----+-----+
   |   MCP    |  |   MCP    |  |   MCP    |
   |  Server  |  |  Server  |  |  Server  |
   |(Postgres)|  | (GitHub) |  | (Files)  |
   +----------+  +----------+  +----------+
```
- Host -- the AI application the user interacts with (Claude Desktop, Claude Code, Cursor, or your own app). The host manages one or more MCP clients.
- Client -- lives inside the host. Each client maintains a 1:1 connection to a single MCP server. The client handles protocol negotiation, capability exchange, and message routing.
- Server -- a lightweight process that exposes specific capabilities (tools, resources, prompts) to the client. Servers are where your integration logic lives.
This separation matters. A single host like Claude Desktop can run multiple MCP clients simultaneously -- one connected to a PostgreSQL server, another to GitHub, another to your file system. Each server is isolated, with its own permissions and lifecycle.
The Three Primitives: Tools, Resources, and Prompts
MCP servers expose capabilities through three primitives. Each serves a different purpose, and choosing the right one determines how naturally the AI model interacts with your integration.
Tools -- Callable Functions
Tools are functions the model can invoke. They take structured input, perform an action, and return a result. This is the most commonly used primitive. Examples include executing a database query, creating a GitHub issue, sending a Slack message, or running a shell command.
Tools are model-controlled -- the AI decides when and how to call them based on the user's request and the tool's description. Good tool descriptions are critical; a vague description leads to the model calling the wrong tool or not calling it at all.
Resources -- Readable Data
Resources are data the client can read, similar to GET endpoints in a REST API. They're identified by URIs and can represent files, database records, API responses, or any structured data. Unlike tools, resources are application-controlled -- the host application decides when to fetch them, not the model.
Resources support subscriptions, so a client can watch for changes. Think of resources as the "read" side of MCP -- the model gets context from resources and takes action through tools.
Prompts -- Reusable Templates
Prompts are parameterized message templates that guide the model toward specific workflows. A server might expose a "review-code" prompt that structures a code review request with the right context and instructions. Prompts are user-controlled -- the user explicitly selects a prompt, and the client fills in the parameters.
| Primitive | Controlled By | Analogy | Example |
|---|---|---|---|
| Tool | Model | POST endpoint | Execute SQL query, create Jira ticket |
| Resource | Application | GET endpoint | Read file contents, fetch DB schema |
| Prompt | User | Stored procedure template | Code review template, debug workflow |
Transport: How Clients and Servers Communicate
MCP defines two transport mechanisms. The right choice depends on where your server runs.
stdio -- Local Servers
The client spawns the server as a child process and communicates over standard input/output. This is the simplest transport and the default for local development. The server starts when the client needs it and stops when the client disconnects. No networking, no ports, no auth -- the operating system handles isolation.
Use stdio when the server runs on the same machine as the host. This covers most development workflows: file system access, local database connections, CLI wrappers.
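Under the hood, the stdio transport exchanges newline-delimited JSON-RPC messages: one message per line on the child process's stdin/stdout. The SDK handles this framing for you; the hypothetical helper below just makes the wire format concrete:

```typescript
// What a stdio MCP client writes to the server's stdin: a single
// JSON-RPC 2.0 message terminated by a newline. Illustrative only --
// the SDK's StdioServerTransport/StdioClientTransport do this for you.
export function frameStdioMessage(method: string, params: unknown, id: number): string {
  const msg = { jsonrpc: "2.0", id, method, params };
  return JSON.stringify(msg) + "\n";
}

// e.g. ask the server to list its tools
const frame = frameStdioMessage("tools/list", {}, 1);
process.stdout.write(frame);
```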
HTTP + SSE -- Remote Servers
For servers that run on a different machine or serve multiple clients, MCP uses HTTP with Server-Sent Events (SSE). The client sends requests via HTTP POST and receives responses and notifications via an SSE stream. This transport supports authentication, load balancing, and horizontal scaling.
Use HTTP+SSE when you need shared infrastructure -- a centralized MCP server that multiple team members connect to, or a server that accesses resources not available on the client machine.
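The SSE side of this transport is plain text streamed over a long-lived HTTP response. The SDK's transport classes handle the formatting, but as a sketch, a single event frame looks like this (`formatSseEvent` is a hypothetical illustration, not SDK API):

```typescript
// A single Server-Sent Events frame as an MCP server would stream it:
// an "event:" line, a "data:" line with the JSON-RPC payload, and a
// blank line terminating the frame. Illustrative only.
export function formatSseEvent(data: object, eventName = "message"): string {
  return `event: ${eventName}\ndata: ${JSON.stringify(data)}\n\n`;
}

const event = formatSseEvent({ jsonrpc: "2.0", id: 1, result: { tools: [] } });
```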
Building an MCP Server in TypeScript
Let's build a practical MCP server. This server exposes a PostgreSQL database as an MCP tool, allowing an AI model to run read-only SQL queries. This is one of the most useful MCP servers you can build -- it gives your AI assistant direct access to your data.
Project Setup
mkdir mcp-postgres && cd mcp-postgres
npm init -y
npm install @modelcontextprotocol/sdk pg zod
npm install -D typescript @types/node @types/pg
Server Implementation
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { Pool } from "pg";

// Connection pool -- configure via environment variables
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5,
});

const server = new McpServer({
  name: "postgres-query",
  version: "1.0.0",
});

// Tool: execute a read-only SQL query
server.tool(
  "query",
  "Execute a read-only SQL query against the PostgreSQL database. " +
    "Returns rows as JSON. Only SELECT statements are allowed.",
  {
    sql: z.string().describe("The SQL SELECT query to execute"),
  },
  async ({ sql }) => {
    // Validate: only allow SELECT statements. A prefix check is a
    // convenience guard, not a security boundary -- pair it with a
    // read-only database role (see Security Considerations below).
    const normalized = sql.trim().toUpperCase();
    if (!normalized.startsWith("SELECT")) {
      return {
        content: [
          {
            type: "text",
            text: "Error: Only SELECT queries are allowed.",
          },
        ],
        isError: true,
      };
    }
    try {
      const result = await pool.query(sql);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(result.rows, null, 2),
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Query error: ${(error as Error).message}`,
          },
        ],
        isError: true,
      };
    }
  }
);

// Resource: expose database schema for context
server.resource(
  "schema",
  "db://schema",
  { description: "Database table schemas" },
  async () => {
    const result = await pool.query(`
      SELECT table_name, column_name, data_type, is_nullable
      FROM information_schema.columns
      WHERE table_schema = 'public'
      ORDER BY table_name, ordinal_position
    `);
    return {
      contents: [
        {
          uri: "db://schema",
          mimeType: "application/json",
          text: JSON.stringify(result.rows, null, 2),
        },
      ],
    };
  }
);

// Start the server with stdio transport. Log to stderr -- stdout is
// reserved for protocol messages.
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("PostgreSQL MCP server running on stdio");
}

main().catch(console.error);
That's a complete, functional MCP server in under 100 lines. The @modelcontextprotocol/sdk package handles all the protocol negotiation, capability exchange, and message framing. You just define tools and resources.
Connecting to Claude Desktop
To use this server with Claude Desktop, add it to claude_desktop_config.json -- located at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-postgres/index.ts"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    }
  }
}
After restarting Claude Desktop, you can ask questions like "What are the top 10 customers by revenue?" and the model will use your MCP server to query the database directly.
Real-World MCP Server Examples
The MCP ecosystem is growing fast. Here are high-value server patterns I've seen in production.
File System Server
Expose a directory tree as MCP resources with tools for reading, writing, and searching files. Anthropic provides an official reference implementation. This is the backbone of AI-assisted coding in Claude Code -- the model reads your project files through MCP.
API Integration Servers (GitHub, Jira, Slack)
Wrap external APIs as MCP tools. A GitHub MCP server might expose tools like create-issue, list-pull-requests, merge-pr, and resources like repo://owner/name/readme. Jira and Slack follow the same pattern. The key insight: you define the tools once, and any MCP client can use them.
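As a sketch of what such a tool's handler does internally -- the endpoint and fields below follow GitHub's public REST API for creating issues, while `buildCreateIssueRequest` and the token handling are illustrative:

```typescript
// Build the HTTP request a create-issue tool would send. Separated
// into a pure function so the request shape is easy to inspect and test.
// The endpoint and headers follow GitHub's REST API docs; the token is
// a placeholder for whatever credential your server injects.
export function buildCreateIssueRequest(
  owner: string,
  repo: string,
  title: string,
  body: string,
  token: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/issues`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: "application/vnd.github+json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ title, body }),
    },
  };
}

// Inside the MCP tool handler, you would then do:
//   const { url, init } = buildCreateIssueRequest(owner, repo, title, body, token);
//   const res = await fetch(url, init);
```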
Database Servers (PostgreSQL, SQLite, MongoDB)
Give AI models structured access to your data. The PostgreSQL example above is a starting point. Production versions add query validation, result pagination, schema-aware auto-completion hints, and audit logging.
Web Scraping and Search Servers
Expose web search and content extraction as tools. The model can retrieve up-to-date information, check documentation, or pull data from websites. Brave Search and Fetch are popular reference servers in this category.
MCP in Claude Code and Cursor
MCP isn't theoretical -- it's the integration layer powering the most advanced AI coding tools available today.
Claude Code uses MCP extensively. When Claude Code reads your files, runs terminal commands, or interacts with your git repository, it's going through MCP servers. You can extend Claude Code by adding your own MCP servers -- a database server for querying production data, an API server for deploying code, or a monitoring server for checking system health.
Configure MCP servers for Claude Code in your project's .mcp.json file:
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}
Cursor also supports MCP, allowing you to connect external tools and data sources to its AI assistant. The same MCP servers work in both environments -- that's the point of a protocol standard.
Security Considerations
MCP servers are powerful, which makes security non-negotiable. A poorly secured MCP server is an AI-powered attack surface. Here are the practices that matter.
Input Validation and Sanitization
Every tool input must be validated. SQL queries should be restricted to read-only operations at minimum. File paths must be sandboxed to prevent directory traversal. API calls should use least-privilege tokens. The AI model's inputs are untrusted -- treat them the same way you'd treat user input in a web application.
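For example, a file-serving tool can normalize every requested path and reject anything that resolves outside its allowed root. A minimal sketch -- `isInsideRoot` is a hypothetical helper, and production code should also resolve symlinks (fs.realpath):

```typescript
import * as path from "node:path";

// Reject any path that escapes the allowed root after resolution.
// This blocks "../" traversal and absolute-path escapes.
export function isInsideRoot(root: string, candidate: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(resolvedRoot, candidate);
  return (
    resolved === resolvedRoot ||
    resolved.startsWith(resolvedRoot + path.sep)
  );
}
```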
Sandboxing and Isolation
Run MCP servers with minimal permissions. Use dedicated database users with read-only access. Mount file systems read-only where possible. Run servers in containers with restricted capabilities. The stdio transport provides process-level isolation by default, but you still need to limit what the process can access.
Authentication for Remote Servers
HTTP+SSE servers must implement authentication. OAuth 2.0 is the recommended approach. Never expose an unauthenticated MCP server on a network -- it would allow anyone to execute tools through your AI infrastructure.
Audit Logging
Log every tool invocation with its inputs, outputs, and the requesting client. This is critical for debugging, compliance, and detecting misuse. If an AI model executes an unexpected query against your database, you need a trail.
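One lightweight way to do this is to wrap each tool handler in a logging decorator before registering it. A sketch, with a hypothetical `withAuditLog` helper and an injectable sink (stderr by default, since stdout is reserved for the protocol on stdio servers):

```typescript
// A minimal audit wrapper: takes a tool handler and returns one that
// records every invocation with its inputs and outcome. The sink is
// injected so logs can go to stderr, a file, or a structured logger.
type ToolHandler<A, R> = (args: A) => Promise<R>;

export function withAuditLog<A, R>(
  toolName: string,
  handler: ToolHandler<A, R>,
  sink: (entry: object) => void = (e) => console.error(JSON.stringify(e))
): ToolHandler<A, R> {
  return async (args: A) => {
    const startedAt = new Date().toISOString();
    try {
      const result = await handler(args);
      sink({ tool: toolName, startedAt, args, ok: true });
      return result;
    } catch (err) {
      sink({ tool: toolName, startedAt, args, ok: false, error: String(err) });
      throw err;
    }
  };
}
```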
The MCP Ecosystem
Anthropic maintains a set of reference MCP servers covering common integrations: filesystem, GitHub, GitLab, Google Drive, PostgreSQL, SQLite, Slack, Brave Search, Puppeteer, and more. These are open source and serve as both production-ready tools and implementation references.
The community ecosystem is expanding rapidly. Independent developers have built MCP servers for Jira, Confluence, Notion, AWS services, Kubernetes, Docker, monitoring systems, and dozens of other tools. The MCP GitHub organization and the official documentation site are the best starting points for discovering available servers.
The protocol specification itself is versioned and evolving. Key areas under active development include streamable HTTP transport (replacing SSE for better proxy compatibility), enhanced authorization flows, and multi-modal content support for images and audio.
Frequently Asked Questions
What is MCP and why was it created?
MCP (Model Context Protocol) is an open standard by Anthropic that provides a universal way for AI applications to connect to external tools and data sources. It was created to solve the fragmentation problem -- before MCP, every AI integration required custom code for each combination of AI app and external service. MCP standardizes this into a single protocol that any client and server can implement.
How is MCP different from function calling or tool use?
Function calling is a feature of individual LLM APIs -- you define tools in your prompt, and the model outputs structured calls. MCP is a protocol layer that sits between the AI application and external services. Function calling tells the model what tools exist. MCP defines how the application discovers, connects to, and invokes those tools across process boundaries. An MCP client typically translates MCP tools into function-calling format for the model.
Can I use MCP with models other than Claude?
Yes. MCP is an open protocol, not locked to Claude. Any AI application can implement an MCP client. Cursor (which supports multiple models) already uses MCP. The protocol is model-agnostic by design -- the server doesn't know or care which model the client is using. Community clients exist for OpenAI-based applications and open-source model hosts.
What programming languages can I use to build MCP servers?
Anthropic provides official SDKs for TypeScript and Python, which are the most mature options. Community SDKs exist for Go, Rust, Java, C#, and Ruby. The protocol is JSON-RPC based, so you can implement a server in any language that can read and write JSON over stdio or HTTP.
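For a sense of what that looks like on the wire, here is a tools/call request as a JSON-RPC 2.0 message -- the method and params structure follow the MCP specification, while the tool name and arguments are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT count(*) FROM users" }
  }
}
```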
Is MCP secure for production use?
MCP itself is a protocol -- security depends on your implementation. The stdio transport inherits OS-level process isolation. HTTP+SSE transport requires you to implement authentication (OAuth 2.0 recommended) and TLS. Always validate tool inputs, use least-privilege credentials, sandbox file system access, and log all tool invocations. Treat MCP server inputs as untrusted, just like user input in a web application.
What is the difference between stdio and HTTP+SSE transport?
Stdio transport runs the server as a local child process, communicating over standard input/output. It's simple, requires no networking, and is ideal for single-user local development. HTTP+SSE transport runs the server as a network service, accepting HTTP POST requests and streaming responses via Server-Sent Events. Use HTTP+SSE for shared servers, remote access, and multi-client scenarios.
How do I add an MCP server to Claude Code or Claude Desktop?
For Claude Code, create a .mcp.json file in your project root with the server configuration. For Claude Desktop, add server entries to claude_desktop_config.json (under ~/Library/Application Support/Claude/ on macOS, or %APPDATA%\Claude\ on Windows). Both use the same format: a server name, the command to run, optional arguments, and environment variables. After configuration, restart the application and the MCP tools become available automatically.
One Protocol, Infinite Integrations
MCP is still young, but the trajectory is clear. The protocol solves a real problem -- connecting AI models to the outside world in a standardized way -- and adoption is accelerating. If you're building AI-powered tools, investing in MCP servers now pays off immediately (they work with today's clients) and compounds over time (every new MCP client gets your integrations for free). Start with a single server for the tool you use most, get comfortable with the primitives, and expand from there.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.