Walk into any SDR team and ask what their “delivered” rate was last quarter. You will hear 98%, 99%, sometimes 100%. Good. Now ask what their inbox placement rate was. You will usually get a blank stare, an “our ESP handles that,” or a guess that happens to match the delivered rate. That gap — between the number SDRs quote and the number that actually matters — is the single biggest blind spot in B2B outbound.
“Delivered” means a receiving mail server accepted the SMTP handshake. “Inbox” means the message landed in a folder the recipient reads. The first is table stakes. The second is what drives pipeline. Most teams confuse the two because their dashboard only shows the first.
What your dashboard actually means by “delivered”
Here is the exact chain of events behind that green check mark:
- Your ESP opens an SMTP connection to the recipient's MX.
- It sends `MAIL FROM`, `RCPT TO`, and the message body.
- The receiving server responds with a 2xx code (typically 250 OK).
- Your ESP logs that response as “delivered.”
That's it. Everything that happens after the receiving server accepts the message — folder routing, per-user filters, tenant-level rules, ML classifiers, Promotions-tab categorisation, Focused/Other sorting, quarantine, silent dropping — is completely invisible to your ESP. All of it counts as “delivered.”
```
220 mx.example.com ESMTP ready
EHLO sender.example
250-mx.example.com Hello
MAIL FROM:<rep@sender.example>
250 OK
RCPT TO:<buyer@prospect.com>
250 OK
DATA
354 End data with <CR><LF>.<CR><LF>
...message body...
.
250 2.0.0 Ok: queued as ABC123   ← your ESP calls this "delivered"
```

What inbox placement actually measures
Inbox placement is the post-acceptance reality: which folder did the message land in for an actual human, on each major provider? It is empirical, not inferred. You cannot compute it from SMTP logs. You cannot estimate it from bounce rates or open rates. You have to put real messages into real mailboxes on real providers and check.
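A minimal sketch of the checking step: once a uniquely tokenised message has been sent to a set of seed mailboxes, each observation reduces to “which folder did it land in, if any.” The folder names and observations below are illustrative assumptions; a real check would search each mailbox over IMAP for the campaign's token.

```python
from typing import Optional

# Illustrative folder-name mapping; real providers expose more variants.
FOLDER_VERDICTS = {
    "INBOX": "inbox",
    "[Gmail]/Spam": "spam",
    "Junk Email": "spam",        # Outlook / Microsoft 365
    "Promotions": "promotions",  # Gmail category tab
}

def placement_verdict(found_in: Optional[str]) -> str:
    """Map the folder a seeded message was found in to a verdict.
    None means the message never appeared: quarantined or dropped."""
    if found_in is None:
        return "missing"
    return FOLDER_VERDICTS.get(found_in, "other")

# One observation per seed mailbox: (provider, folder the message landed in).
observations = [
    ("gmail", "INBOX"),
    ("gmail", "Promotions"),
    ("outlook", "Junk Email"),
    ("outlook", None),  # silently quarantined, invisible to the ESP
]

for provider, folder in observations:
    print(provider, placement_verdict(folder))
```

The point of the `missing` verdict is that it is indistinguishable from success in SMTP logs: both were “delivered.”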
A realistic cold campaign breakdown looks like:
```
Sent:                 1,000
Delivered (ESP):        982  (98.2%)
Inbox primary:          421  (42.1%)
Promotions/Other:       287  (28.7%)
Junk/Spam:              193  (19.3%)
Quarantined/Dropped:     81  ( 8.1%)
```

Your ESP's “98.2%” is technically true. It is also irrelevant to your pipeline. The only percentage your VP of Sales should actually care about is the 42.1% primary inbox rate.
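The arithmetic behind that breakdown is worth making explicit, because the ESP's number and the number that matters come from the same counts:

```python
# Recompute the example breakdown above. Everything the ESP counts as
# "delivered" is the sum of all four post-acceptance outcomes, but only
# the primary-inbox slice feeds pipeline.
sent = 1000
counts = {
    "inbox_primary": 421,
    "promotions_other": 287,
    "junk_spam": 193,
    "quarantined_dropped": 81,
}

delivered = sum(counts.values())                  # what the ESP reports: 982
esp_delivered_rate = delivered / sent             # 98.2%
true_inbox_rate = counts["inbox_primary"] / sent  # 42.1%

print(f"ESP 'delivered': {esp_delivered_rate:.1%}")
print(f"Primary inbox:   {true_inbox_rate:.1%}")
```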
Provider asymmetry hides in the average
Even that 42% aggregate hides brutal provider-level asymmetry. The same campaign might hit 65% Gmail primary, 20% Outlook Focused, 90% Yahoo Inbox, and 10% past a corporate Defender gateway. If 70% of your ICP lives on Microsoft 365 — and for mid-market and enterprise B2B it usually does — the campaign is failing where it matters most.
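Weighting those per-provider rates by an ICP mix makes the asymmetry concrete. The 70% Microsoft share is from the text; the split of the remaining 30% is an assumption for illustration.

```python
# Per-provider primary-inbox rates from the example above.
rates = {"m365_defender": 0.10, "gmail": 0.65, "yahoo": 0.90}
# Assumed ICP mix: 70% Microsoft 365 (per the text), rest split arbitrarily.
mix   = {"m365_defender": 0.70, "gmail": 0.20, "yahoo": 0.10}

blended = sum(mix[p] * rates[p] for p in rates)
# 0.70*0.10 + 0.20*0.65 + 0.10*0.90 = 29.0%
print(f"Blended primary-inbox rate: {blended:.1%}")
```

A 65% Gmail number can coexist with a sub-30% blended rate when the audience skews Microsoft: the average flatters the campaign exactly where it is weakest.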
The provider mix that kills SDR campaigns
- Microsoft 365 / Exchange Online. Strictest filters. Tenant admins add their own rules. Quarantine is common and invisible.
- Google Workspace. More forgiving than consumer Gmail; per-org policies vary widely.
- Consumer Gmail. Heavy Promotions-tab routing for anything that looks like marketing.
- Apple iCloud / Hide My Email. Opaque ML filtering; Mail Privacy Protection (MPP) inflates open-based engagement signals.
- Corporate MX with secure gateways (Proofpoint, Mimecast, Barracuda). Link-rewriting, sandboxing, bot clicks.
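To know which of these providers your list actually resolves to, you can classify each recipient domain by its MX hostnames. The suffix patterns below are common real-world MX suffixes; the DNS lookup itself (e.g. via dnspython) is omitted so the sketch stays self-contained, and `mx_records` is a hypothetical lookup result.

```python
# Common MX hostname suffixes and the provider they imply.
PROVIDER_PATTERNS = [
    ("google.com", "google_workspace"),    # e.g. aspmx.l.google.com
    ("outlook.com", "microsoft_365"),      # *.mail.protection.outlook.com
    ("pphosted.com", "proofpoint_gateway"),
    ("mimecast.com", "mimecast_gateway"),
    ("barracudanetworks.com", "barracuda_gateway"),
]

def classify_mx(mx_hosts):
    """Return the first provider whose suffix matches an MX hostname."""
    for host in mx_hosts:
        for suffix, provider in PROVIDER_PATTERNS:
            if host.rstrip(".").endswith(suffix):
                return provider
    return "other"

# Hypothetical MX lookup result for a Microsoft 365 tenant.
mx_records = ["prospect-com.mail.protection.outlook.com."]
print(classify_mx(mx_records))  # microsoft_365
```

Run this over your list once and you know which seed mailboxes matter most for this campaign.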
Closing the blind spot
The fix is not exotic. It is instrumentation every other mature engineering discipline takes for granted: put a canary in front of production, measure outcomes continuously, alert on deviation.
- Seed matrix. Maintain real mailboxes across the providers your ICP uses.
- Per-template baseline. Run every new sequence through the matrix before it goes to the full list.
- Per-campaign spot-check. Add a seed address to every live sequence so you get a continuous signal.
- Weekly trend. Placement drifts even with zero config changes — filters update, neighbour reputation changes, IPs get flagged.
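The weekly-trend step above reduces to a simple comparison: per-provider placement this week versus last, alerting on any drop over a threshold. The numbers here are illustrative.

```python
# Flag any provider whose primary-inbox rate dropped more than 10 points
# week-over-week. Rates are illustrative seed-matrix results.
THRESHOLD = 0.10

last_week = {"gmail": 0.64, "m365": 0.31, "yahoo": 0.88}
this_week = {"gmail": 0.62, "m365": 0.17, "yahoo": 0.89}

alerts = [
    (provider, last_week[provider] - rate)
    for provider, rate in this_week.items()
    if last_week[provider] - rate > THRESHOLD
]

for provider, drop in alerts:
    print(f"ALERT: {provider} primary rate down {drop:.0%} week-over-week")
```

The 2-point Gmail dip stays quiet; the 14-point Microsoft drop pages someone before a full week of sends lands in quarantine.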
Inbox Check tests your campaigns across 20+ real provider seed mailboxes and returns per-provider inbox/spam/missing verdicts with SPF/DKIM/DMARC and spam-engine scores. Free for spot tests, no signup. Automate continuous checks via the API.
What your VP of Sales should actually ask
If you are running an SDR team, swap these questions into your weekly review:
- What was our Gmail primary rate this week? (Not “delivery” rate.)
- What was our Outlook Focused rate this week for the tenants we care about?
- Did any sequence show a >10-point placement drop vs last week?
- Are we DMARC-aligned on every sending domain in use?
- What was the reply rate on the slice that actually reached the inbox, not on the whole list?
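That last question is worth doing the arithmetic on, because the denominator changes the story. The counts continue the 1,000-send example; the reply count is an assumption for illustration.

```python
# Reply rate against the whole list vs against the slice that actually
# reached a primary inbox. 421 is from the example breakdown; 21 replies
# is an assumed figure.
sent = 1000
inbox_primary = 421
replies = 21

naive_reply_rate = replies / sent           # measured against everyone
inbox_reply_rate = replies / inbox_primary  # measured against people who saw it

print(f"Reply rate vs whole list:  {naive_reply_rate:.1%}")
print(f"Reply rate vs inbox slice: {inbox_reply_rate:.1%}")
```

A 2.1% list-level reply rate looks like a messaging problem; a ~5% inbox-level reply rate says the message works and placement is what's broken. Those lead to opposite fixes.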
The answers to these questions will tell you more about why your pipeline is up or down than any activity-metric dashboard ever will.