Model Context Protocol (MCP): The USB-C Port for AI Applications

2026-03-14

It seems like every developer in the world is getting down with MCP right now. Model Context Protocol is the hot new way to build APIs, and if you don't know what it is — you're ngmi.

People are doing wild things with it. One developer got Claude to design 3D art in Blender, powered entirely on vibes. And just a few days ago, MCP gained official support in the OpenAI Agents SDK.


What Even Is MCP?

If you're an OG developer, you probably know what a REST API is. You might even know about GraphQL, gRPC, or maybe years ago you used SOAP. Well, now there's a new architecture in town.

Model Context Protocol is basically a new standard for connecting LLMs to external systems. You can think of it as a USB-C port for AI applications: one standard interface for many different connections.

It was designed by Anthropic (the team behind Claude) and provides a standard way to give large language models context — access to data and tools they didn't have before.

Anthropic is so bullish on this technology that their CEO expects virtually all code to be written by AI by the end of the year. Whether you believe that or not, MCP is quickly becoming the standard for how LLMs interact with external systems.


How MCP Works: Clients and Servers

Like other API architectures, MCP has a client and a server.

  • The client is something like Claude Desktop, Cursor, or Windsurf.
  • The server is what you build — it maintains a connection with the client so they can pass information back and forth via a transport layer.
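Under the hood, the messages passing over that transport layer are JSON-RPC 2.0. Here's an illustrative sketch of the `initialize` request a client sends when it first connects (the exact field values, like the protocol version and client name, are just examples):

```typescript
// Sketch of the first JSON-RPC message a client sends when connecting.
// Field values are illustrative, not taken from a real session.
const initializeRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05", // version string negotiated with the server
    capabilities: {},              // features this client supports
    clientInfo: { name: "claude-desktop", version: "1.0.0" },
  },
};

// The server replies with its own capabilities, and from then on the two
// sides exchange requests and notifications over the same connection.
console.log(initializeRequest.method);
```

The key point: you never write these messages by hand. The SDK handles the handshake; you only define what your server exposes.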

In a REST API, you have a bunch of different HTTP verbs (GET, POST, PUT, DELETE) that you send requests to via different URLs. In MCP, we're really only concerned with two main primitives:

Resources

A resource might be a file, a database query, or some other information the model can use for context. Conceptually, you can think of it like a GET request in REST. Resources are read-only — no side effects.

Tools

A tool is an action that can be performed — like writing something to a database or uploading a file. That's more like a POST request in REST. Tools can have side effects and perform computations.

What we do as developers is define tools and resources on the server so the LLM can automatically identify and use them when it encounters a prompt that needs them.
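To make that discovery concrete: the client asks the server what tools exist with a `tools/list` request, and when the model decides to use one, the client sends a `tools/call` request naming the tool and its arguments. A sketch of both messages (IDs, names, and argument values are illustrative, borrowing the `create-match` tool defined later in this post):

```typescript
// Sketch of the two JSON-RPC methods behind tool discovery and invocation.
// All values here are illustrative.
const listRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/list", // "what tools does this server offer?"
};

const callRequest = {
  jsonrpc: "2.0" as const,
  id: 3,
  method: "tools/call", // "run this tool with these arguments"
  params: {
    name: "create-match",
    arguments: {
      horse1Id: "h1",
      horse2Id: "h2",
      dateTime: "2026-03-14T19:00:00Z",
    },
  },
};

console.log(callRequest.params.name);
```

Resources work the same way with `resources/list` and `resources/read`. Again, the SDK generates all of this for you.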


Building an MCP Server

Here's the basic anatomy of an MCP server in TypeScript using the official SDK:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

Adding a Resource

A resource fetches data for the model to use as context:

server.resource("horses-looking-for-love", "horses://profiles", async (uri) => {
  // `db` is assumed to be an existing database client (e.g. node-postgres)
  const result = await db.query("SELECT * FROM horses WHERE status = 'single'");
  return {
    contents: [
      {
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(result.rows),
      },
    ],
  };
});

Adding a Tool

A tool lets the LLM perform actions with side effects:

server.tool(
  "create-match",
  {
    horse1Id: z.string().describe("ID of the first horse"),
    horse2Id: z.string().describe("ID of the second horse"),
    dateTime: z.string().describe("ISO datetime for the date"),
  },
  async ({ horse1Id, horse2Id, dateTime }) => {
    // `db` is assumed to be the same database client as above
    await db.query(
      "INSERT INTO matches (horse1, horse2, date) VALUES ($1, $2, $3)",
      [horse1Id, horse2Id, dateTime]
    );
    return { content: [{ type: "text", text: "Match created successfully!" }] };
  }
);

Notice how Zod is used to validate the shape of the data going into the function. This catches hallucinated or malformed arguments from the LLM before they ever reach your database. Providing data types along with descriptions also makes your MCP server far more reliable, because those descriptions are surfaced to the model when it decides how to call the tool.
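To see why this matters, here's a hand-rolled sketch of the kind of check a schema performs before your handler runs. This is not the SDK's actual code (the SDK uses your Zod schema directly); it's a simplified illustration of the failure modes being caught:

```typescript
// Simplified illustration of schema validation: reject missing, mistyped,
// or unexpected arguments before the tool handler ever runs.
type ArgType = "string" | "number";
type Schema = Record<string, ArgType>;

function validateArgs(schema: Schema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, type] of Object.entries(schema)) {
    if (!(key in args)) {
      errors.push(`missing argument: ${key}`);
    } else if (typeof args[key] !== type) {
      errors.push(`wrong type for ${key}: expected ${type}`);
    }
  }
  for (const key of Object.keys(args)) {
    // An LLM inventing extra arguments is a classic hallucination
    if (!(key in schema)) errors.push(`unexpected argument: ${key}`);
  }
  return errors;
}

// A well-formed call passes; a hallucinated extra argument does not
console.log(validateArgs({ horse1Id: "string" }, { horse1Id: "h1" }));
console.log(validateArgs({ horse1Id: "string" }, { horse1Id: "h1", mood: "spicy" }));
```

If validation fails, the error goes back to the model, which can correct itself and retry instead of corrupting your data.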

Running the Server

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await server.connect(transport);

You can use Standard I/O as the transport layer for local use, or an HTTP-based transport (such as Server-Sent Events) if you deploy to the cloud.
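The stdio transport is simpler than it sounds: each JSON-RPC message is serialized as a single line of JSON, terminated by a newline, written to stdout and read from stdin. A minimal sketch of that framing (not the SDK's actual implementation, just the idea):

```typescript
// Sketch of newline-delimited JSON framing, as used by the stdio transport.
// One JSON-RPC message per line; no message may contain a raw newline.
function frame(message: object): string {
  return JSON.stringify(message) + "\n";
}

function parseFrames(buffer: string): object[] {
  return buffer
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Two messages round-trip through the framing intact
const wire = frame({ jsonrpc: "2.0", id: 1, method: "ping" }) +
             frame({ jsonrpc: "2.0", id: 2, method: "tools/list" });
console.log(parseFrames(wire).length);
```

This is why stdio works so well locally: the client just spawns your server as a subprocess and talks to it over pipes, with no ports or networking involved.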


Connecting to a Client

To use your MCP server, you need a client that supports the protocol. Claude Desktop is the most common, but Cursor, Windsurf, and others work too. You can even build your own client.

In Claude Desktop, go to Settings → Developer → Edit Config, which drops you into a config file (claude_desktop_config.json):

{
  "mcpServers": {
    "horse-tender": {
      "command": "deno",
      "args": ["run", "--allow-all", "main.ts"]
    }
  }
}

After restarting Claude, your MCP server should be running. Then you can:

  1. Attach a resource to fetch context (e.g., horse profiles from your database)
  2. Prompt Claude about things specific to your application
  3. Use tools to let Claude write back to your database or perform actions

Because Claude is multimodal, you can also add PDFs, images, or anything else to the context — like all the horse images in your storage bucket.


Why MCP Matters

You might be thinking: "This is just an API for APIs — that sounds like dumb over-engineering."

Fair point. But having a protocol like this makes it a lot easier to plug and play between different models. It makes LLM applications more reliable in general. The same MCP server works with Claude, GPT, and any other model that supports the protocol.

The ecosystem is already growing fast. People are using MCP for:

  • Automated trading (stocks and crypto)
  • Industrial-scale web scraping
  • Cloud infrastructure management (Kubernetes clusters)
  • Database operations
  • File and storage management

Check out the awesome-mcp-servers repo to see what people are building.


The Elephant in the Room

Anthropic's CEO has said that 90% of coding will be done entirely by AI within 6 months, and nearly all code will be AI-generated within a year.

I'm going to press X to doubt on that one.

It's only a matter of time before some AI agent accidentally wipes out billions of dollars in customer data — or becomes self-aware and just deletes it for fun.

That being said, MCP is a genuinely useful protocol. It standardizes how LLMs interact with external tools and data, making it easier to build reliable AI-powered applications. Just make sure to vibe code responsibly.


Key Takeaways

  • MCP is a standard protocol for giving LLMs access to external data (resources) and actions (tools).
  • Think of it like a USB-C port for AI — one standard interface for many different connections.
  • Resources = read-only data (like GET requests). Tools = actions with side effects (like POST requests).
  • Use Zod or similar validation to prevent LLM hallucinations in tool arguments.
  • The ecosystem is growing fast, with support in Claude Desktop, Cursor, Windsurf, and the OpenAI Agents SDK.
  • MCP servers are essentially APIs for APIs — and that's actually a good thing for interoperability.