Deliverability is a slow bleed. You ship a working setup, your campaigns land in the inbox for two months, and then one Tuesday Gmail quietly moves you to Promotions and your open rate halves. You find out a week later. A $59/mo dashboard solves this by pinging you on change — but so does a 30-line Node script and a Slack webhook, for $0.
This article shows three patterns we use in production for scheduled placement monitoring: a classic cron-plus-Node pipeline, an interactive agent loop via MCP, and a serverless GitHub Actions workflow. Pick whichever fits your team. The ingredients are the same.
The three ingredients
- Our placement test API. Free tier 20 tests/day. Two HTTP calls: submit, poll. Docs in the API comparison article.
- Our MCP server. Free. Lets an AI agent call the API (and auth checks, blacklist checks, header parsing) as tools during a conversation. Good for ad-hoc audits where you want reasoning, not alerts.
- A scheduler. cron, systemd timers, GitHub Actions, Vercel Cron, any of them. Run the script on a timer, have it alert when something breaks.
GlockApps' monitoring tier is exactly these three ingredients in one managed bundle. The free-stack version takes an hour to wire up and has the advantage that it is your code — you decide what thresholds alert, what channels the alerts go to, what post-mortem data you keep.
A pattern for daily monitoring
Before showing the code, here is the shape that works in practice — learned the hard way after a few false-alarm and missed-alarm incidents:
- Run one placement test per day, not per hour. Mailbox seeds are finite and rate limits apply; more tests don't give more signal.
- Run at a consistent time of day (e.g. 09:00 UTC). Providers' filters behave slightly differently by time; consistency matters for trend detection.
- Alert on change, not state. Missing inbox on day 1 with a new setup is expected; inbox → Spam after 30 days of green is a real event.
- Keep 30 days of history. A rolling CSV is enough. You'll reference it when debugging a customer complaint three weeks later.
- Alert to a channel people actually read. A Slack webhook beats a dashboard no one opens.
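The rolling-CSV bullet takes only a few lines of Node to implement. This is a sketch: `appendAndPrune` and the `date,gmail,outlook,yahoo` column order are illustrative names, not part of any API — match them to whatever your monitor writes.

```javascript
// Sketch: append today's row to the rolling CSV and prune it to 30 days.
// appendAndPrune and the column order (date,gmail,outlook,yahoo) are
// illustrative assumptions — adapt them to your monitor's output.
import fs from 'node:fs';

function appendAndPrune(histPath, row, maxDays = 30) {
  const existing = fs.existsSync(histPath)
    ? fs.readFileSync(histPath, 'utf8').split('\n').filter(Boolean)
    : [];
  const rows = [...existing, row].slice(-maxDays); // keep only the newest rows
  fs.writeFileSync(histPath, rows.join('\n') + '\n');
}
```

One row per daily run keeps the file trivially greppable when you need to answer "when did Outlook placement change?" three weeks later.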
Pattern A: cron + Node + Slack webhook
The workhorse. Runs on a $5 VPS, a Raspberry Pi, or any always-on machine. One file, two env vars, cron entry, done:
```javascript
// placement-monitor.js — run daily via cron
// Requires: Node 18+, env PLACEMENT_API_KEY, SLACK_WEBHOOK
import fs from 'node:fs';

const API = 'https://check.live-direct-marketing.online';
const KEY = process.env.PLACEMENT_API_KEY;
const SLACK = process.env.SLACK_WEBHOOK;
const FROM = 'sender@yourdomain.com';
const HIST = '/var/log/placement.csv';

async function run() {
  const start = await fetch(`${API}/api/check`, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ from: FROM, providers: ['gmail', 'outlook', 'yahoo'] }),
  }).then(r => r.json());

  // Your ESP sends the real email to start.seeds here
  // (via SendGrid API, SMTP, or whatever you use)
  await sendToSeeds(start.seeds);

  // Poll up to 5 minutes
  let report;
  for (let i = 0; i < 30; i++) {
    await new Promise(r => setTimeout(r, 10_000));
    report = await fetch(`${API}/api/check/${start.id}`, {
      headers: { 'Authorization': `Bearer ${KEY}` },
    }).then(r => r.json());
    if (report.status === 'complete') break;
  }
  if (!report || report.status !== 'complete') {
    throw new Error(`Test ${start.id} did not complete within 5 minutes`);
  }

  // Write to history
  const { gmail, outlook, yahoo } = report.results;
  const row = `${new Date().toISOString()},${gmail.folder},${outlook.folder},${yahoo.folder}\n`;
  fs.appendFileSync(HIST, row);

  // Alert if anything is not inbox
  const notInbox = Object.entries(report.results)
    .filter(([, v]) => v.folder !== 'inbox')
    .map(([k, v]) => `${k}=${v.folder}`);
  if (notInbox.length > 0) {
    await fetch(SLACK, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `Placement alert for ${FROM}: ${notInbox.join(', ')}`,
      }),
    });
  }
}

async function sendToSeeds(seeds) {
  // Implement against your ESP (SendGrid API, raw SMTP, or an ESP SDK).
}

run().catch(e => { console.error(e); process.exit(1); });
```
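One way to fill in the `sendToSeeds` stub is nodemailer over plain SMTP. This is a sketch under assumptions: `SMTP_HOST`, `SMTP_USER`, and `SMTP_PASS` are illustrative env var names, and nodemailer must be installed (`npm install nodemailer`); swap in your ESP's SDK if you have one.

```javascript
// Sketch: sendToSeeds via nodemailer (npm install nodemailer).
// SMTP_HOST / SMTP_USER / SMTP_PASS are illustrative env var names.
function buildMessage(from, seed) {
  return {
    from,
    to: seed,                                // one seed mailbox per message
    subject: 'Your normal campaign subject', // send real campaign content, not "test"
    html: '<p>Your real campaign body.</p>',
  };
}

async function sendToSeeds(seeds) {
  // Lazy import so the monitor still loads if you swap in an ESP SDK instead
  const { default: nodemailer } = await import('nodemailer');
  const transport = nodemailer.createTransport({
    host: process.env.SMTP_HOST,
    port: 587,
    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
  });
  for (const seed of seeds) {
    await transport.sendMail(buildMessage('sender@yourdomain.com', seed));
  }
}
```

Sending your real campaign content to the seeds matters: a "test" subject line and empty body get filtered differently from what your subscribers actually receive.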
Cron entry (add via `crontab -e`; if you use `/etc/crontab` instead, insert a username field before the command):

```
0 9 * * * node /opt/monitor/placement-monitor.js >> /var/log/placement.log 2>&1
```
That is the complete Pattern A: one file, one cron line, one Slack webhook. You have replaced the monitoring tier.
Pattern B: Claude Desktop + MCP for ad-hoc audits
Cron is for alerting. MCP is for investigation. When Pattern A fires a Slack alert saying Outlook placement dropped to Junk, you want an agent that can look at why — and MCP is built for exactly that.
With ldm-inbox-check-mcp installed in Claude Desktop (see the MCP server setup article), the workflow is:
- Slack alert fires: "Outlook placement = junk".
- Open Claude Desktop. Ask: "Audit sender@example.com. Run check_auth, run a placement test to Outlook only, and tell me what's likely broken."
- Agent calls check_auth (finds DMARC p=quarantine with no DKIM alignment for the outbound mailer), calls start_test, waits, reads the result, and writes a fix-list.
- You act on the fix-list. Typically a one-line DNS change. Pattern A confirms the fix on the next morning's run.
This is the pattern we run ourselves. Pattern A is the watchman. Pattern B is the detective. GlockApps offers the watchman; the detective doesn't exist there.
Pattern C: GitHub Actions scheduled workflow
If you'd rather not run a VPS, GitHub Actions gives you a free scheduler with secrets management and a commit history of every run. Create .github/workflows/placement.yml:
```yaml
name: Daily placement check

on:
  schedule:
    - cron: '0 9 * * *' # 09:00 UTC daily
  workflow_dispatch:

permissions:
  contents: write # the history-commit step pushes back to the repo

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Run placement monitor
        env:
          PLACEMENT_API_KEY: ${{ secrets.PLACEMENT_API_KEY }}
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
          SENDGRID_KEY: ${{ secrets.SENDGRID_KEY }}
        run: node scripts/placement-monitor.js
      - name: Commit history
        run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add history/placement.csv
          git diff --staged --quiet || \
            git commit -m "placement: daily run $(date -u +%F)"
          git push
```
Commit scripts/placement-monitor.js (the Pattern A code, with HIST pointed at history/placement.csv instead of /var/log), set the secrets in repo settings, and the workflow runs daily at 09:00 UTC. Every run leaves a commit, so you get a free audit log. Free-plan scheduling is enough for one domain checked daily.
Thresholds and alerting logic
The naive "alert if not inbox" rule generates false positives. Real placement tests are noisy — one provider going to Promotions once isn't a crisis. Here's the smarter logic we use:
- Red alert: two consecutive days of any provider routing to Spam. This is almost always a real regression.
- Yellow alert: three of the last seven days with a non-inbox verdict. This is trend-level degradation worth investigating.
- Silent (no alert): one-off non-inbox verdict when the previous day was green. Note in history, don't page.
Implement in your monitor script by reading the history CSV before alerting. Twenty lines of logic, done. This is the kind of customisation a managed dashboard doesn't let you tune.
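A minimal sketch of those three rules, assuming the CSV rows have been parsed into objects with per-provider folder fields (`classify`, the row shape, and treating `junk` as spam are all assumptions to adapt):

```javascript
// Sketch of the red / yellow / silent rules over the rolling history.
// classify and the row shape { gmail, outlook, yahoo } are assumptions;
// map them to your CSV columns. Rows are ordered oldest first.
const PROVIDERS = ['gmail', 'outlook', 'yahoo'];
const isSpam = folder => folder === 'spam' || folder === 'junk';

function classify(history) {
  const [prev, today] = history.slice(-2);
  // Red: two consecutive days of spam/junk for any provider
  if (prev && today && PROVIDERS.some(p => isSpam(prev[p]) && isSpam(today[p]))) {
    return 'red';
  }
  // Yellow: three of the last seven days with any non-inbox verdict
  const badDays = history.slice(-7)
    .filter(r => PROVIDERS.some(p => r[p] !== 'inbox')).length;
  if (badDays >= 3) return 'yellow';
  return 'silent'; // one-off blips: log them, don't page anyone
}
```

Checking red before yellow means a sustained spam regression always pages immediately, while slow Promotions drift surfaces as a calmer trend alert.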
When to pay for GlockApps' managed monitoring
There is one clean answer: high-volume senders with no dev team. If you send millions of transactional emails a month and nobody on staff is comfortable maintaining a Node script, pay for the managed product. The hour-a-month of script upkeep adds up, and GlockApps has better seed coverage (80+ mailboxes vs our 20+) which matters when you serve regional ISPs.
For everyone else — startups, SaaS teams, agencies, solo senders — the free stack is not a compromise. It is the better tool. You get AI-agent audits (Pattern B) that no managed product provides, and your alerts go exactly where you want them.
Start with Pattern C (GitHub Actions). It needs no server, it versions your history, and you can read run logs from your phone. Graduate to Pattern A if you want more than once-daily runs or heavier post-processing. Layer Pattern B on top whenever you need a second pair of eyes.