Running a placement test used to mean: open the dashboard, paste the HTML, wait, screenshot the results, copy the verdicts into a ticket. With the Inbox Check MCP server you give an AI agent the ability to do all of that in one turn. You type a sentence, the agent fires the test, reads the verdict, and tells you which DNS record to fix. This article shows how to wire it up and what to say to the agent.
You'll need Node.js 20+, an Inbox Check API key (ic_live_...), and either Claude Desktop or Cursor. Total setup time is about five minutes. If you have never used MCP before, read the next section first.
What MCP is, in one paragraph
MCP (Model Context Protocol) is Anthropic's open protocol for letting LLM clients talk to tool servers over a standard JSON-RPC interface. A tool server publishes a list of functions it can run (e.g. start_test, get_test), a description of each, and a schema for the inputs. The LLM decides when to call which tool based on the user's prompt. That means the user asks a question in plain English, and the agent picks the right tool chain without being told.
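To make that concrete, when the agent decides to run a test, the client sends the tool server a JSON-RPC request along these lines. The `tools/call` envelope comes from the MCP spec; the argument names and values are illustrative, not this server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "start_test",
    "arguments": {
      "sender_domain": "news.mybrand.com",
      "subject": "March newsletter"
    }
  }
}
```

The server replies with a result payload the client feeds back into the model's context, which is what lets the agent chain a `start_test` call into a later `get_test` call on its own.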
Why this beats a plain REST integration
A REST client needs you to hand-write the sequence: call this endpoint, parse this response, decide what to do next. With MCP, the agent chains calls itself. Ask "test my latest campaign and tell me what to fix" and the agent runs the placement test, waits for results, parses the verdict, looks up your DNS records, and proposes concrete fixes — all without you writing a line of orchestration code.
REST is still the right answer for CI/CD, cron jobs, and anything that has to run unattended. MCP is the right answer for interactive debugging, "why did this campaign fail" conversations, and fix-ticket preparation.
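To make the contrast concrete, here is the orchestration loop you would hand-write in the REST case, sketched in TypeScript. The two callbacks stand in for HTTP calls to hypothetical start/poll endpoints; the real Inbox Check routes and response shapes may differ:

```typescript
// What "a line of orchestration code" looks like when you own it:
// start a test, then poll until the verdict is ready.
// startTest/getTest are placeholders for fetch() calls to the API.
type Verdict = { status: "pending" | "complete"; inboxRate?: number };

async function runPlacementTest(
  startTest: () => Promise<{ id: string }>,
  getTest: (id: string) => Promise<Verdict>,
  pollMs = 1000,
): Promise<Verdict> {
  const { id } = await startTest();      // 1. kick off the test
  for (;;) {
    const verdict = await getTest(id);   // 2. poll for the verdict
    if (verdict.status === "complete") return verdict;
    await new Promise((r) => setTimeout(r, pollMs));
  }
}
```

With MCP, this loop lives inside the agent: you state the goal and it decides to call the start tool, then the get tool, until it has a verdict to summarise.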
Install the ldm-inbox-check-mcp server
The server is published to npm. Two ways to use it:
Option A: global install
npm install -g ldm-inbox-check-mcp
# the binary is now on your PATH as ldm-inbox-check-mcp

Option B: npx (no install)
Skip the install and let the client run it on demand with npx -y ldm-inbox-check-mcp. Slower on first run, simpler to keep up to date.
Configure Claude Desktop
Claude Desktop reads claude_desktop_config.json on start-up. The file lives at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add an mcpServers block:
{
"mcpServers": {
"inbox-check": {
"command": "npx",
"args": ["-y", "ldm-inbox-check-mcp"],
"env": { "INBOX_CHECK_API_KEY": "ic_live_..." }
}
}
}

Fully quit Claude Desktop (Cmd+Q, not just close the window) and relaunch. The hammer icon in the chat composer should show the Inbox Check tools available.
Configure Cursor
Cursor's MCP settings live in Settings → MCP. Click Add new MCP server and use:
Name: inbox-check
Command: npx
Args: -y ldm-inbox-check-mcp
Env: INBOX_CHECK_API_KEY=ic_live_...

Restart Cursor. Open a composer window and check that @inbox-check shows up as an available tool source.
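If you prefer config files to the settings UI, recent Cursor builds can also read the server list from an mcp.json file (project-level .cursor/mcp.json or global ~/.cursor/mcp.json; check the docs for your version). The shape mirrors the Claude Desktop block:

```json
{
  "mcpServers": {
    "inbox-check": {
      "command": "npx",
      "args": ["-y", "ldm-inbox-check-mcp"],
      "env": { "INBOX_CHECK_API_KEY": "ic_live_..." }
    }
  }
}
```

A project-level file is handy for teams: the server definition ships with the repo, and only the API key varies per developer.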
Available tools
The server exposes a small, focused set:
- start_test — kicks off a new placement test. Arguments: sender domain, subject, and either raw HTML or a reference to a previously uploaded template.
- list_test — lists recent tests with their status and inbox rates, so the agent can pull context from history.
- list_providers — returns the current list of seed mailboxes (Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail, iCloud and more), useful when the agent needs to decide which providers to highlight.
- get_test — fetches the full verdict for a given test ID: inbox/spam/missing counts, authentication results, SpamAssassin score, per-provider folder placement.
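To give a feel for what the agent reads back, a get_test verdict might look like the following. The field names here are illustrative, not the documented response schema:

```json
{
  "id": "abc123",
  "status": "complete",
  "counts": { "inbox": 14, "spam": 3, "missing": 1 },
  "authentication": { "spf": "pass", "dkim": "fail", "dmarc": "pass" },
  "spamassassin_score": 4.2,
  "placement": { "gmail": "inbox", "outlook": "spam", "yahoo": "inbox" }
}
```

Everything the agent says about your test is grounded in a payload like this, which is why its summaries can cite exact counts and name the failing authentication mechanism.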
Example prompts
Once the server is wired up, these prompts all work out of the box:
- "Test my latest cold-email template (paste HTML) and tell me what to fix before I send it."
- "Run a placement test from news.mybrand.com and summarise where Gmail routed it."
- "Compare the last three tests I ran — is my inbox rate trending down?"
- "What SPF include do I need to add based on the failing authentication result in test abc123?"
A three-step agent workflow
The most useful loop is "run → read verdict → fix DNS". Step-by-step:
- Run. You paste an HTML template into Claude Desktop and say "test this from domain X and tell me what's broken". The agent calls start_test, gets a test ID, and polls get_test until the status is complete.
- Read verdict. The agent summarises the numeric result, calls out any failing auth, highlights the providers where placement dropped most, and reads back the SpamAssassin breakdown if triggered.
- Fix DNS. If authentication failed, the agent proposes exact DNS record changes — the SPF include to add, the DKIM selector to publish, the DMARC record to write. You paste those into Cloudflare or your DNS host.
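The proposed fixes are ordinary TXT records. The hostnames, include target, selector, and key value below are placeholders, not values from your account:

```
mybrand.com.                       TXT  "v=spf1 include:_spf.example-esp.com ~all"
selector1._domainkey.mybrand.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-from-your-esp>"
_dmarc.mybrand.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@mybrand.com"
```

Re-run the test after publishing (DNS changes can take minutes to hours to propagate) and the auth results in the next verdict confirm whether the fix landed.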
Cursor adds one extra thing: the agent can also see your Terraform or Pulumi definitions for Cloudflare, so when it proposes a DNS fix it can open a PR directly against your infra repo rather than asking you to paste records somewhere. This is a good pattern for teams that keep DNS in code.
Rate limits and API keys
The free tier gives you a few dozen tests a month; paid tiers scale up to monitoring-level volume. API keys do not expire, so rotate yours from account settings if it ever leaks. The MCP server reads INBOX_CHECK_API_KEY on startup and never sends it back to the LLM; the key stays in your local environment, which is exactly what you want.
Rate limits: roughly 5 tests per minute per key on the standard plan. The agent will retry on 429, but a user loop that fires ten tests in one prompt will slow itself down.
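If you script against the key yourself, the usual answer to a 429 is exponential backoff. A minimal sketch, assuming a 1-second base and a 30-second cap (the server does not document a specific retry schedule, so treat these numbers as a starting point):

```typescript
// Exponential backoff for HTTP 429 responses.
// attempt 0 -> 1s, attempt 1 -> 2s, attempt 2 -> 4s, ... capped at 30s.
function backoffMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

At roughly 5 tests per minute per key, even a small fixed delay between calls avoids 429s entirely; the backoff only matters for bursty prompts.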
When not to use MCP
Skip MCP and use the REST API directly when:
- You are running tests from a CI pipeline on every deploy.
- You are building a scheduled monitoring system (cron + Slack alerts).
- You are integrating into a product that has its own users — MCP is designed for one developer + one agent, not for multi-tenant SaaS.
For every one of those cases, the REST API is a few fetch calls. A separate article walks through that setup end-to-end.
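As a sketch of that unattended path: fetch the verdict from the REST API, then gate the deploy on the inbox rate. The verdict shape below is an assumption, mirroring the counts the get_test tool reports:

```typescript
// Hypothetical CI gate: fail the build when the placement test's
// inbox rate drops below a threshold. Verdict fields are assumed,
// not the documented Inbox Check response schema.
type Counts = { inbox: number; spam: number; missing: number };

function ciGate(verdict: { counts: Counts }, minRate = 0.8): boolean {
  const { inbox, spam, missing } = verdict.counts;
  const total = inbox + spam + missing;
  // Treat missing messages as failures: they never reached a seed inbox.
  return total > 0 && inbox / total >= minRate;
}
```

In a pipeline you would fetch the verdict with one HTTP call, pass it through a check like this, and exit non-zero to block the deploy when it returns false.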