
What MCP is and why it matters for email deliverability

Model Context Protocol (MCP) is Anthropic's open protocol for giving LLMs tools. In plain English: it lets Claude or Cursor run real actions instead of describing them. Here is why that matters for email.

Every week a new "AI agent" product ships, and half of them are wrappers around an LLM that still can't do anything except generate text. The difference between a chatbot and an agent is a protocol: something that lets the model reach out of the chat window and run real actions in the real world. For Anthropic's models that protocol is called MCP, and for email deliverability work it changes what is possible to automate.

The short version

MCP is a standard way to give an LLM tools. A tool server advertises a handful of functions; the model decides when to call them. With an email-deliverability MCP server your agent can run placement tests, check DNS, look up blacklists and parse headers — without you writing orchestration code for any of it.

MCP in one paragraph

Model Context Protocol is an open standard published by Anthropic in late 2024. It defines a JSON-RPC conversation between an LLM client (Claude Desktop, Cursor, Claude Code, Zed, and anyone else who adopts it) and a tool server (a small local process you run). The server advertises a list of tools with descriptions and input schemas. The client forwards those to the model. When the user asks a question, the model picks a tool, fills in the arguments, and the client executes the call. Responses flow back to the model, which then either calls more tools or writes a reply.

How an MCP server actually works

Nothing about MCP is magic. A server is just a Node, Python or Go process that speaks a specific JSON-RPC dialect over stdio. It has two responsibilities:

  1. On startup, respond to tools/list with a catalogue of tools it supports. Each tool has a name, a human-readable description and a JSON Schema for its arguments.
  2. When the client sends tools/call, execute the corresponding function and return the result — a string, a structured object, or an error.
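Those two responsibilities fit in a few dozen lines. Here is a minimal sketch in Python of the dispatch logic — illustrative, not the full MCP spec (real servers also handle initialization and message framing), with the tool schema abbreviated and the REST call stubbed out:

```python
import json

# Tool catalogue returned for tools/list. The description is what the
# model reads when deciding whether to call the tool.
TOOLS = [
    {
        "name": "start_test",
        "description": "Run an inbox placement test across 20+ seed mailboxes.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "from": {"type": "string"},
                "subject": {"type": "string"},
                "html": {"type": "string"},
            },
            "required": ["from", "subject", "html"],
        },
    },
]

def run_tool(name: str, args: dict) -> str:
    if name == "start_test":
        # A real server would call the REST API here and return the test ID.
        return json.dumps({"id": "t_abc123"})
    raise ValueError(f"unknown tool: {name}")

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to the matching handler."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        params = request["params"]
        text = run_tool(params["name"], params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

# A real server loops over stdio, one framed JSON-RPC message at a time:
#   for line in sys.stdin:
#       print(json.dumps(handle(json.loads(line))), flush=True)
```

The point is how little machinery there is: a catalogue, a dispatcher, and whatever your tools actually do.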

The model reads the tool descriptions the same way it reads a user prompt. If you describe start_test as "run an inbox placement test across 20+ seed mailboxes" the model will pick it when the user says "check where my email lands". Description quality is load-bearing.
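To make that concrete, compare two descriptions for the same hypothetical tool. The model only ever sees these strings, so specificity directly determines when the tool gets picked:

```python
# Two descriptions for the same tool. The vague one gives the model
# nothing to match against a user request like "check where my email lands".
vague = {
    "name": "start_test",
    "description": "Starts a test.",
}
specific = {
    "name": "start_test",
    "description": (
        "Run an inbox placement test across 20+ seed mailboxes "
        "(Gmail, Outlook, Yahoo and others). Returns a test ID; "
        "poll get_test for the verdict."
    ),
}
```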

Why a chat-only model is limited for deliverability

Ask a vanilla LLM why your email is going to spam and it will tell you — in generic terms — about SPF, DKIM, and DMARC. It can't look at your actual DNS. It can't look at your actual headers. It can't run a test. So its answer is always a list of possibilities, never a diagnosis. That's useful for learning, useless for debugging a live outage.

Give the same model a deliverability MCP server and the conversation changes. "My cold email is landing in Gmail spam" is no longer a question the model has to guess at. It can run start_test, read the verdict, call check_auth on the sending domain, and come back with "your DKIM selector s1 resolves but signs with the wrong domain — align it to the From domain and retest". Concrete, actionable, fixable.

The four things an email MCP unlocks

1. Run placement tests on demand

The model can kick off a test with start_test, wait, then fetch the full verdict with get_test. No dashboard, no copy-paste. The model summarises inbox/spam/missing counts and per-provider breakdowns in the same turn.

2. Audit authentication (SPF / DKIM / DMARC)

check_auth resolves the three relevant records and returns a structured verdict. The model can compare it against best practice and flag specific misconfigurations, not generic ones.
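The internals of check_auth aren't published here, but the kind of structured verdict it produces can be sketched. This hypothetical helper parses a DMARC TXT record into tags and classifies the policy — the record strings are examples, not real lookups:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record ("v=DMARC1; p=reject; ...") into tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

def dmarc_verdict(record: str) -> str:
    """Turn a raw record into the kind of specific flag an agent can act on."""
    tags = parse_dmarc(record)
    if tags.get("v") != "DMARC1":
        return "invalid: missing v=DMARC1"
    policy = tags.get("p", "none")
    if policy == "none":
        return "monitoring only: p=none enforces nothing"
    return f"enforcing: p={policy}"
```

So `dmarc_verdict("v=DMARC1; p=reject; rua=mailto:reports@acme.io")` classifies the domain as enforcing, while a `p=none` record gets flagged as monitoring-only — a specific misconfiguration, not a generic one.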

3. Check DNSBL listings

check_blacklist looks up a domain or IP across Spamhaus, SORBS, SURBL and similar lists. If you ask "is my sending IP clean?" the agent has a real answer, not a suggestion to go check manually.
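Under the hood, DNSBL lookups are plain DNS: you reverse the IPv4 octets, append the list's zone, and query for an A record. A sketch of the query-name construction (the zone default is illustrative; an answer in 127.0.0.0/8 typically means the IP is listed):

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name to query for a DNSBL lookup.
    203.0.113.7 against zen.spamhaus.org becomes 7.113.0.203.zen.spamhaus.org;
    an A-record answer means the IP is listed, NXDOMAIN means it is clean."""
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() for o in octets):
        raise ValueError(f"expected an IPv4 address, got {ip!r}")
    return ".".join(reversed(octets)) + "." + zone
```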

4. Know which providers matter

list_providers returns the current list of seed mailboxes. The model uses it to decide which providers are worth highlighting in a report — Gmail always, Outlook usually, Mail.ru if your audience is Russian-speaking, ProtonMail if privacy matters to your buyers.

MCP vs function calling

If you've used OpenAI's function-calling API you already understand half of MCP. Function calling happens inside a single API request: you send a list of functions to the model, the model returns a call, your code executes it, you send the result back. It's per-integration, per-app.

MCP is the same idea lifted into a portable protocol. The tool definitions don't live in your app code — they live in a reusable server you can plug into any MCP-compatible client. You write one ldm-inbox-check-mcp server; Claude Desktop, Cursor, Claude Code and Zed users can all install it the same day without you writing a plugin for each.

MCP vs REST API

An MCP server is typically a thin wrapper around an existing REST API, and the REST API is still the right answer for most things:

  • CI/CD deploys that run a placement test on every release.
  • Scheduled monitors that email you when your inbox rate drops below 90%.
  • Multi-tenant SaaS where each of your customers has their own key.

MCP is the right answer for interactive work: debugging a live problem, preparing a fix ticket, reviewing a template before you send it. The rule of thumb — if a human is in the loop reading the output, use MCP. If a cron job is in the loop, use REST.

An example round trip

Here is what actually happens behind the scenes when you ask the agent to test a domain. The user types:

Test deliverability for sender news@acme.io,
subject "Weekly product update", body is the
HTML I just pasted. Tell me what to fix.

The client forwards the prompt plus the list of available tools (from tools/list) to the model. The model responds with a tool call:

{
  "tool": "start_test",
  "arguments": {
    "from": "news@acme.io",
    "subject": "Weekly product update",
    "html": "<html>..."
  }
}

The client executes it. The MCP server calls the Inbox Check REST API, gets back a test ID, returns { id: "t_abc123" }. The model then calls get_test in a loop until status is complete, reads the verdict, and writes a reply:
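That "calls get_test in a loop" step is ordinary polling; the model just does it across turns instead of in code. If you scripted the same round trip yourself, it would look something like this sketch (the `status` field and verdict shape are assumptions about the tool's output):

```python
import time

def wait_for_verdict(call_tool, test_id: str,
                     timeout: float = 300.0, interval: float = 10.0) -> dict:
    """Poll get_test until the test completes, mirroring what the agent does.
    call_tool is whatever executes an MCP tool call and returns its result."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        verdict = call_tool("get_test", {"id": test_id})
        if verdict.get("status") == "complete":
            return verdict
        time.sleep(interval)
    raise TimeoutError(f"test {test_id} did not complete within {timeout}s")
```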

Your test landed 14/20 in inbox, 4/20 in spam,
2/20 missing. SPF passes, DKIM passes, DMARC is
aligned. The 4 spam hits are all at Outlook.
SpamAssassin triggered HTML_IMAGE_ONLY_32 and
HTML_SHORT_LINK_IMG_3 — your template is 90%
image, 10% text. Add a plain-text alternative
and a 200-word text body and retest.

Notice how much of that is diagnosis, not description. That's the agent pattern.

Authentication note

The MCP server reads your API key from an environment variable at startup and keeps it local. The LLM sees tool outputs (test IDs, verdicts) but never the key itself. That boundary comes from the architecture: the protocol only carries tool definitions, calls and results, so the key has no path into the conversation — good to know when your security team asks.
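In code, that boundary is a few lines at startup. A sketch (the variable name is illustrative — check the server's README for the real one):

```python
import os

def load_api_key(var: str = "INBOX_CHECK_API_KEY") -> str:
    """Read the key once at startup. It stays in the server process and is
    attached to outgoing REST calls; it is never echoed into tool results,
    so the model never sees it."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"set {var} before starting the MCP server")
    return key
```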

Frequently asked questions

Is MCP Anthropic-only?

The protocol itself is open. Anthropic's clients (Claude Desktop, Claude Code) ship with first-class support. Cursor, Zed, Continue.dev and a growing list of third-party hosts also speak it. OpenAI has a different but conceptually similar spec; the ecosystem is still consolidating.

Do I need to write an MCP server to use one?

No. For email work, ldm-inbox-check-mcp is already published on npm. You add one JSON block to your client's config, restart, and the tools are available.
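For Claude Desktop, that JSON block follows the standard `mcpServers` shape. Something like the following, assuming the package exposes a binary runnable via npx and reads its key from an environment variable (variable name illustrative — check the package README):

```json
{
  "mcpServers": {
    "inbox-check": {
      "command": "npx",
      "args": ["-y", "ldm-inbox-check-mcp"],
      "env": { "INBOX_CHECK_API_KEY": "your-key-here" }
    }
  }
}
```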

Does MCP send my data to Anthropic?

The tool outputs go to the model the same way your chat messages do. If you use Claude, Anthropic processes them. The MCP server itself runs locally, and your API key never leaves your machine.

Can a non-technical user set this up?

Claude Desktop config is a single JSON file; Cursor has a GUI panel. If you can edit a settings file, you can install an MCP server. Five minutes, one restart.