Deliverability is not something you check monthly. When a domain's reputation slips, you have hours, not days, to react. A Slack alert tied to continuous inbox placement tests turns "we heard from a customer that our mail stopped arriving" into "Slack pinged us at 09:12 and we rolled back the template change". Here is how to build that, end to end, in about 40 lines of code.
- Inbox Check API — runs the placement test on a schedule.
- Node.js cron — drives the schedule (or GitHub Actions / Kubernetes CronJob).
- Slack incoming webhook — posts to a channel when the inbox rate crosses a threshold.
Step 1 — Create the Slack webhook
- Go to api.slack.com/apps and create a new app with "From scratch".
- Pick a workspace, then under Features → Incoming Webhooks, toggle it on.
- Click Add New Webhook to Workspace, pick a channel (we use #deliverability-alerts), and copy the URL. It looks like https://hooks.slack.com/services/T0.../B0.../xxx.
- Store that URL in your secret manager as SLACK_ALERT_WEBHOOK.
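A pasted webhook URL with a stray character fails silently at 3 a.m., not at setup time, so it is worth validating the shape before storing it. A minimal sketch (the helper name and the exact path pattern are our own; Slack does not publish a formal grammar for these URLs):

```javascript
// Sanity-check a copied Slack webhook URL before storing it as a secret.
// Slack webhook URLs look like https://hooks.slack.com/services/T.../B.../...
function isSlackWebhookUrl(url) {
  try {
    const u = new URL(url);
    return (
      u.protocol === 'https:' &&
      u.hostname === 'hooks.slack.com' &&
      /^\/services\/T[A-Z0-9]+\/B[A-Z0-9]+\/\w+$/.test(u.pathname)
    );
  } catch {
    return false; // not parseable as a URL at all
  }
}

console.log(isSlackWebhookUrl('https://hooks.slack.com/services/T0AAA/B0BBB/xyz123')); // → true
```

A truncated paste (missing the trailing token, wrong host) comes back false immediately instead of producing 404s from the bot later.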
Step 2 — Grab an Inbox Check API key
Sign in to the dashboard, head to API → Keys, and create a key scoped to tests:read and tests:write. Store it as INBOX_CHECK_KEY.
Step 3 — The bot
Save this as bot.js. It runs a single test, computes the inbox rate, and posts to Slack if below threshold.
// bot.js
import { InboxCheck } from 'inbox-check';

const THRESHOLD = 0.80;
const SENDER = process.env.SENDER;
const client = new InboxCheck({ apiKey: process.env.INBOX_CHECK_KEY });

async function run() {
  const test = await client.tests.create({
    sender: SENDER,
    subject: 'Nightly placement check',
    html: '<p>Placement probe</p>',
    providers: ['gmail', 'outlook', 'yahoo', 'mailru', 'yandex'],
  });

  // Block until the test finishes, up to three minutes.
  const result = await client.tests.waitFor(test.id, { timeoutMs: 180_000 });

  const inbox = result.providers.filter((p) => p.folder === 'inbox').length;
  const rate = inbox / result.providers.length;

  if (rate < THRESHOLD) {
    await postToSlack({
      text: `:rotating_light: Inbox rate ${(rate * 100).toFixed(0)}% for ${SENDER}`,
      rate,
      result,
    });
  }
}

async function postToSlack({ text, rate, result }) {
  const perProvider = result.providers
    .map((p) => `• *${p.name}*: ${p.folder}`)
    .join('\n');

  await fetch(process.env.SLACK_ALERT_WEBHOOK, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text,
      attachments: [{
        // Red below 50%, orange for a softer miss.
        color: rate < 0.5 ? '#d90429' : '#f48c06',
        fields: [
          { title: 'Inbox rate', value: `${(rate * 100).toFixed(0)}%`, short: true },
          { title: 'Sender', value: SENDER, short: true },
          { title: 'Per provider', value: perProvider, short: false },
          { title: 'Test URL', value: result.shareUrl, short: false },
        ],
      }],
    }),
  });
}

run().catch((e) => { console.error(e); process.exit(1); });

Step 4 — Schedule it
Two easy options.
Option A: system cron
# Every 30 minutes
*/30 * * * * cd /opt/inbox-bot && \
SENDER=hello@yourdomain.com \
INBOX_CHECK_KEY=$(cat /etc/secrets/ic-key) \
SLACK_ALERT_WEBHOOK=$(cat /etc/secrets/slack-url) \
node bot.js >> /var/log/inbox-bot.log 2>&1

Option B: GitHub Actions
name: Inbox placement alert

on:
  schedule: [{ cron: '*/30 * * * *' }]
  workflow_dispatch:

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: node bot.js
        env:
          SENDER: hello@yourdomain.com
          INBOX_CHECK_KEY: ${{ secrets.INBOX_CHECK_KEY }}
          SLACK_ALERT_WEBHOOK: ${{ secrets.SLACK_ALERT_WEBHOOK }}

Tuning so you do not get alert fatigue
A 30-minute cadence with a single-test trigger will cry wolf. Two knobs prevent that.
- Require two consecutive failures before alerting. Cache the last run's result (a small JSON file or KV entry is enough) and only post when the current AND previous runs both fell below threshold.
- Run-book link in the alert. Every incident should tell the on-call exactly what to check next: DNS, recent template change, warmup status, complaint spike. Put a link to that doc in the Slack attachment.
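The consecutive-failure gate fits in a few lines. A sketch, assuming a small JSON state file next to the bot (the filename, state shape, and helper names are our own):

```javascript
import { readFileSync, writeFileSync } from 'node:fs';

const STATE_FILE = './last-run.json'; // hypothetical cache location

// Pure decision: alert only when this run AND the previous run fell below
// threshold. A null previous rate (first run, wiped cache) never alerts.
function shouldAlert(previousRate, currentRate, threshold) {
  return previousRate !== null && previousRate < threshold && currentRate < threshold;
}

function loadPreviousRate() {
  try {
    return JSON.parse(readFileSync(STATE_FILE, 'utf8')).rate;
  } catch {
    return null; // no state yet — treat as a single data point
  }
}

function savePreviousRate(rate) {
  writeFileSync(STATE_FILE, JSON.stringify({ rate, at: new Date().toISOString() }));
}
```

In run(), replace the bare `rate < THRESHOLD` check with `shouldAlert(loadPreviousRate(), rate, THRESHOLD)` and call `savePreviousRate(rate)` at the end of every run. Note that on GitHub Actions the runner's disk is ephemeral, so the state file needs to be persisted between runs (actions/cache or an external KV store).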
A bot that pings you four times a day for a single soft failure gets muted within a week. Once muted, it is worse than no bot — because you now believe it is working. Tune aggressively; add the second-failure gate before you ship.
Where to go from here
- Post a daily summary at 09:00 with the 24-hour rolling inbox rate — healthy signal, not an interrupt.
- Include SPF/DKIM/DMARC verdicts in the payload. A sudden DKIM failure is the fastest predictor of a larger drop.
- Extend to per-provider alerting. If inbox rate is 100% everywhere except Outlook, the fix is different from a cross-provider drop.
- Swap Slack for PagerDuty / Opsgenie for out-of-hours coverage. The JSON shape for both is a one-line change.
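The per-provider idea above is mostly a classification problem: decide whether a drop is isolated to one provider or systemic, because the run-book differs. A sketch against the same `result.providers` shape used in bot.js (the function name and categories are our own):

```javascript
// Classify a placement result: 'healthy' (nothing misplaced), 'isolated'
// (exactly one provider misplacing — suspect provider-specific filtering),
// or 'systemic' (two or more — suspect DNS, auth, or content changes).
function classifyDrop(providers) {
  const missed = providers.filter((p) => p.folder !== 'inbox').map((p) => p.name);
  if (missed.length === 0) return { kind: 'healthy', missed };
  if (missed.length === 1) return { kind: 'isolated', missed };
  return { kind: 'systemic', missed };
}

const sample = [
  { name: 'gmail', folder: 'inbox' },
  { name: 'outlook', folder: 'spam' },
  { name: 'yahoo', folder: 'inbox' },
];
console.log(classifyDrop(sample)); // → { kind: 'isolated', missed: ['outlook'] }
```

Feed `result.kind` into the alert text so the on-call sees "isolated: outlook" instead of a bare percentage.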