A head of sales messaged us on a Tuesday morning. “We went from 3.1% reply rate last week to 0.8% yesterday. Same sequences, same list source, same reps. What did we break?” The answer, after fifteen minutes of triage: their email service provider had silently pushed an IP warm-up policy change that bumped all cold-outbound tenants onto a hotter shared pool. Placement on Outlook dropped 30 points across his whole team. Nothing in their stack had changed. Everything in their deliverability had.
Overnight reply-rate collapses follow a small set of patterns. Once you know the patterns, the triage is fast.
1. Check placement, not copy.
2. Check authentication alignment.
3. Check volume shape.
4. Check reputation and blacklist status.
5. Only now consider copy, targeting or external events.

80% of overnight drops are found in the first two steps.
The pattern: what “overnight” usually means
When a metric halves in 24 hours, it is almost never an organic shift. Organic shifts take weeks. A one-day step change points to a discrete event. In email, those events fall into a short list:
- Authentication break. A DNS record was changed, a DKIM key rotated without propagating, SPF includes changed.
- Blacklisting. The sending IP or domain landed on a major list (Spamhaus, SORBS, Barracuda, UCEprotect).
- IP reputation shift. On shared pools, a noisy neighbour triggers a block on the whole /24.
- Provider filter update. Gmail, Outlook or Apple quietly shipped a change that hit your content pattern.
- Volume spike. A new rep, a new sequence, a list upload pushed the domain past its warm-up ceiling.
- Tracking domain burn. Your click/pixel domain got flagged; every email carrying it is penalised.
Note what is not on this list: copy. Copy doesn't decay overnight. Copy has a slow, aggregated decay curve measured in weeks.
The 15-minute triage
Minute 0–5: placement test
Run the current template through a provider seed matrix. Compare to the last known baseline. If Gmail primary fell from 85% to 30%, or Outlook Focused from 70% to 10%, you have confirmed a placement event. The drop pattern tells you which provider is unhappy.
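The comparison against baseline is mechanical enough to script. A minimal sketch, assuming you keep last week's placement percentages on hand (the provider names, numbers and 20-point threshold here are illustrative, not from any real tool):

```python
# Flag providers whose inbox placement fell sharply versus baseline.
# Data shape and threshold are assumptions for illustration.

BASELINE = {"gmail_primary": 85, "outlook_focused": 70, "yahoo_inbox": 75}

def placement_drops(current: dict, baseline: dict = BASELINE,
                    threshold: float = 20.0) -> dict:
    """Return {provider: drop_in_points} for every provider that fell
    more than `threshold` points below its baseline."""
    drops = {}
    for provider, base in baseline.items():
        now = current.get(provider)
        if now is not None and base - now > threshold:
            drops[provider] = base - now
    return drops

# After a seed test, feed in today's numbers:
print(placement_drops({"gmail_primary": 30, "outlook_focused": 68, "yahoo_inbox": 74}))
# {'gmail_primary': 55}
```

A single provider cratering while the others hold steady points at that provider's filter; everything dropping at once points at authentication or a blacklist.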
Minute 5–10: authentication audit
Pull your SPF, DKIM and DMARC records. Verify:
- SPF: under 10 DNS lookups, no ~all when you meant -all, all active sending sources included.
- DKIM: the selector used by your current ESP resolves and matches the signing key.
- DMARC: policy intact, RUA reports arriving, alignment on both SPF and DKIM.
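The 10-lookup SPF rule can be checked mechanically. A sketch, assuming you have already pulled the TXT record (e.g. with dig): it counts the mechanisms that RFC 7208 charges against the limit (include, a, mx, ptr, exists, and the redirect modifier).

```python
# Count SPF terms that each cost a DNS lookup under RFC 7208's
# 10-lookup limit. Fetching the record itself is left out.

def spf_lookup_count(record: str) -> int:
    count = 0
    for term in record.split():
        term = term.lstrip("+-~?")  # strip the optional qualifier
        # Mechanism name is everything before ":", "=", or "/"
        mech = term.split(":")[0].split("=")[0].split("/")[0]
        if mech in ("include", "a", "mx", "ptr", "exists", "redirect"):
            count += 1
    return count

spf = "v=spf1 include:_spf.google.com include:sendgrid.net mx a:mail.example.com ~all"
print(spf_lookup_count(spf))  # 4 — safely under the limit of 10
```

Note this counts only the top-level record; each include pulls in its own record, whose lookups also count, so a chain of nested includes can blow the limit even when the top level looks fine.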
The most common cause we see is a DKIM selector rotation that propagated to the ESP but not to DNS, or vice versa. Six hours of misalignment is enough to tank a week's reputation.
Minute 10–15: blacklist and reputation
Run the sending IP and domain through DNSBL lookups. Check Google Postmaster Tools for the sending domain. Check Microsoft SNDS for the sending IP. Any red on any surface is enough to halve reply rate.
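The DNSBL part of this check is easy to script. How the lookup works: reverse the IPv4 octets and resolve <reversed-ip>.<zone>; NXDOMAIN means not listed, any A record (typically 127.0.0.x) means listed. A minimal stdlib sketch (note some lists, including Spamhaus, refuse queries coming through large public resolvers, so run this from a resolver with normal query volume):

```python
# DNSBL lookup: reverse the IP octets and query against the list's zone.
import socket

ZONES = ["zen.spamhaus.org", "b.barracudacentral.org", "dnsbl-1.uceprotect.net"]

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the reversed-octet query name, e.g. 1.2.3.4 -> 4.3.2.1.<zone>."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True   # an A record came back: listed
    except socket.gaierror:
        return False  # NXDOMAIN: not listed

print(dnsbl_query_name("203.0.113.7", "zen.spamhaus.org"))
# 7.113.0.203.zen.spamhaus.org
```

Postmaster Tools and SNDS have no comparable query interface; those you check in the dashboard.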
Inbox Check runs a full placement matrix plus SPF/DKIM/DMARC alignment checks and spam-engine scores in about 60 seconds. Free, no signup. For continuous monitoring with webhook alerts on drops, use the API.
The most common causes we actually find
DNS change blast radius
An IT admin added a new SaaS tool to SPF and, in doing so, broke the include chain for the existing ESP. This is the single most common cause of overnight drops at small companies. Look at DNS change logs for the last 48 hours.
New-rep volume shock
A new SDR starts, their sequences go live, and the domain's daily volume jumps 40% in 24 hours. Gmail and Outlook both respond to step changes in volume far more aggressively than to absolute volume.
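Because filters react to the step change rather than the absolute level, the thing to monitor is day-over-day growth. A sketch, using the 40% figure from the scenario above as an assumed threshold:

```python
# Flag a day-over-day volume step change. The 40% threshold is the
# scenario's number, not a provider-published limit; tune to your history.

def volume_shock(daily_counts: list, threshold: float = 0.40) -> bool:
    """True if the latest day jumped more than `threshold` over the prior day."""
    if len(daily_counts) < 2 or daily_counts[-2] == 0:
        return False
    jump = (daily_counts[-1] - daily_counts[-2]) / daily_counts[-2]
    return jump > threshold

print(volume_shock([500, 510, 495, 720]))  # True — 720 is a ~45% jump over 495
```

Wiring this into the sequencer's send queue, so a new rep's sequences phase in over a week instead of landing all at once, prevents the shock rather than just detecting it.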
ESP IP pool rotation
On shared-IP ESPs, your outbound gets rehomed onto a different /24 during maintenance or load rebalancing. Reputation of the new pool is not yours to control.
Content fingerprint hit
A specific phrase, URL or HTML pattern in your new template matches a spam fingerprint the filter updated yesterday. Revert one template and see if placement normalises.
Tracking domain flag
Your click-wrapper or pixel domain appears on a URL blacklist (SURBL, URIBL, Spamhaus DBL). Every email carrying the wrapper is now penalised regardless of content.
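Domain blacklists are queried differently from IP lists: the domain goes into the query name as-is, with no octet reversal. A sketch against Spamhaus DBL (the zone name is real; the tracking domain is a placeholder, and the same public-resolver caveat applies):

```python
# Domain blacklist (DBL/SURBL-style) lookup: query <domain>.<zone> directly.
import socket

def dbl_query_name(domain: str, zone: str = "dbl.spamhaus.org") -> str:
    return f"{domain}.{zone}"

def tracking_domain_listed(domain: str, zone: str = "dbl.spamhaus.org") -> bool:
    try:
        socket.gethostbyname(dbl_query_name(domain, zone))
        return True   # A record: listed
    except socket.gaierror:
        return False  # NXDOMAIN: clean

print(dbl_query_name("click.example.com"))
# click.example.com.dbl.spamhaus.org
```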
What not to do in the first 24 hours
- Don't rewrite copy. If placement is the real cause, the rewrite changes nothing, and you'll draw the wrong conclusion about your copy.
- Don't switch ESPs. You'll lose the reputation baseline and add a new unknown to the debug.
- Don't increase volume to “push through.” This is the single worst response to a filter event.
- Don't stop sending entirely. Zero volume also looks suspicious to filters. Drop 30–50%, hold steady, and recover.
The recovery sequence
Once you've identified the cause:
- Fix the root (DNS, volume shape, content, blacklist removal).
- Drop daily volume to 50–70% of previous baseline.
- Send only to high-engagement recipients for 3–5 days.
- Re-run placement tests every 24 hours.
- Ramp volume back to baseline over 7–10 days.
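The steps above translate into a simple daily send schedule. A sketch using the article's numbers (60% start, a few days of high-engagement-only sends, then a linear ramp; none of these figures are hard rules):

```python
# Recovery ramp: cut to a fraction of baseline, hold during the
# high-engagement-only window, then ramp linearly back to baseline.

def recovery_schedule(baseline: int, start_frac: float = 0.6,
                      hold_days: int = 4, ramp_days: int = 8) -> list:
    """Return the planned daily send volume, one entry per day."""
    start = int(baseline * start_frac)
    hold = [start] * hold_days
    step = (baseline - start) / ramp_days
    ramp = [int(start + step * d) for d in range(1, ramp_days + 1)]
    return hold + ramp

print(recovery_schedule(1000))
# [600, 600, 600, 600, 650, 700, 750, 800, 850, 900, 950, 1000]
```

If a daily placement test regresses mid-ramp, hold at the current volume rather than continuing the climb.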