Every ESP dashboard on the market leads with the same number: delivery rate. Mailchimp, SendGrid, HubSpot, Klaviyo, Customer.io — all of them show a big green 99.2% and call it a day. The problem is that this number doesn't mean what you think it means, and making decisions based on it is how teams end up with flat revenue curves despite "perfect" deliverability stats.
Delivery rate measures whether a message was accepted by the recipient's mail server. Inbox placement measures whether a human will actually see it. The two numbers routinely differ by 30 to 40 percentage points, and only the second one moves opens, replies and revenue.
What delivery rate actually measures
Delivery rate is a binary outcome reported by your outbound mail transfer agent: did the receiving server respond with an SMTP 2xx status code, or did it bounce? That's it. A 250 OK from gmail-smtp-in.l.google.com means Gmail has accepted your bytes into its system. It does not mean the message will reach an inbox, a spam folder, a Promotions tab, or even that Gmail won't silently drop it seconds later for policy reasons.
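The distinction is easy to see in code. A minimal sketch of the verdict an ESP derives from the SMTP reply code — the only signal the sender ever gets:

```python
# Minimal sketch of how an MTA-side "delivery" verdict is derived.
# The reply code is all the sender ever sees; folder placement is invisible.
def classify_smtp_reply(code: int) -> str:
    """Map an SMTP reply code to the delivery outcome an ESP reports."""
    if 200 <= code < 300:
        return "delivered"   # accepted -- says nothing about which folder
    if 400 <= code < 500:
        return "deferred"    # temporary failure, will be retried
    if 500 <= code < 600:
        return "bounced"     # permanent failure
    return "unknown"

print(classify_smtp_reply(250))  # -> delivered
print(classify_smtp_reply(550))  # -> bounced
```

Note that "delivered" is the terminal state from the ESP's point of view: there is no reply code for "filed under Spam".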
ESPs report delivery rate because it is the only datapoint they own. Once the recipient MTA says 250, the ESP's role in the conversation is over. Everything downstream — filter decisions, folder routing, silent discards — happens inside the recipient's infrastructure, invisible to the sender.
What inbox placement actually measures
Inbox placement measures the folder a message lands in when it reaches a real mailbox. Not "was it accepted" but "did the recipient's inbox actually display it above the fold". To measure this you need real accounts at every provider you care about — Gmail, Outlook 365, Yahoo, Mail.ru, Yandex, GMX, ProtonMail — and you need to send a test campaign to those accounts and observe where it lands.
A proper inbox placement report gives you a percentage per provider and per folder. For example: Gmail Inbox 72%, Gmail Promotions 24%, Gmail Spam 4%. That is the number that predicts your open rate. A message in Promotions gets maybe 15% of the attention a Primary-tab message gets. A message in Spam gets zero.
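Building that per-provider, per-folder report from raw seed results is a small aggregation. A sketch with hypothetical seed data (the provider and folder names are placeholders):

```python
from collections import Counter, defaultdict

# Hypothetical seed-test observations: (provider, folder) per seed mailbox.
seed_results = [
    ("gmail", "inbox"), ("gmail", "inbox"), ("gmail", "promotions"),
    ("gmail", "spam"), ("outlook", "inbox"), ("outlook", "junk"),
]

def placement_report(results):
    """Percentage per provider and per folder, as in the Gmail example above."""
    by_provider = defaultdict(Counter)
    for provider, folder in results:
        by_provider[provider][folder] += 1
    report = {}
    for provider, folders in by_provider.items():
        total = sum(folders.values())
        report[provider] = {f: round(100 * n / total, 1)
                            for f, n in folders.items()}
    return report

print(placement_report(seed_results))
# {'gmail': {'inbox': 50.0, 'promotions': 25.0, 'spam': 25.0},
#  'outlook': {'inbox': 50.0, 'junk': 50.0}}
```

A real seed panel would have dozens of mailboxes per provider, but the aggregation is the same.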
Why ESPs only report delivery rate
Two reasons, one structural and one commercial.
- Structural: an ESP cannot see inside Gmail's filter. Once the 250 OK comes back, the ESP has no telemetry on what happens next. The only way to measure folder placement is seed testing, which requires accounts the ESP does not own.
- Commercial: a 99% delivery rate looks great in sales demos. 62% inbox placement looks like a problem. Guess which one ends up on the marketing site.
Some ESPs (Mailgun, SendGrid, Postmark) offer paid seed-list add-ons. They exist because customers started asking the obvious question. They're never turned on by default.
The typical gap in real numbers
Across the roughly 500 domains we've tested in the last quarter, the mean delivery rate was 99.3% and the mean inbox placement was 61.8%. A 37-point gap. For domains with broken SPF or misaligned DKIM, the gap widens to ~55 points — 99.1% delivered, 44% inboxed. For well-authenticated senders with clean lists, the gap narrows to single digits — 99.6% delivered, 91% inboxed.
The gap is where your revenue disappears. A 10,000-send campaign at 99% delivery and 60% inbox placement means 9,900 messages were accepted but only 6,000 reached an inbox — 3,900 people never saw your message. At an average order value of $50 and a 1% conversion rate on inbox impressions, that's roughly $1,950 of missed revenue per campaign.
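The arithmetic behind that campaign example, for plugging in your own numbers (here inbox placement is measured against total sends, matching the figures above):

```python
def missed_revenue(sends, delivery_rate, inbox_rate, aov, conv_rate):
    """Revenue lost in the gap between delivery and inbox placement.

    inbox_rate is taken as a fraction of total sends, consistent with
    the 3,900-recipient example in the text.
    """
    delivered = sends * delivery_rate
    inboxed = sends * inbox_rate
    unseen = delivered - inboxed          # accepted, but never seen
    lost = unseen * conv_rate * aov       # revenue those views would have driven
    return unseen, lost

unseen, lost = missed_revenue(10_000, 0.99, 0.60, aov=50, conv_rate=0.01)
print(round(unseen), round(lost))  # -> 3900 1950
```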
Which metric correlates with business results
Open rate correlates almost perfectly with inbox placement and almost not at all with delivery rate. This is mechanically obvious once you think about it: a message in Spam cannot be opened by a human, while a message in the inbox is almost always registered as an open — if not by the recipient, then by Apple Mail Privacy Protection's prefetch or whatever AI assistant the recipient runs. The same holds, even more strongly, for clicks, replies and revenue.
If you run a regression across a year of campaign data, inbox placement rate explains roughly 68% of the variance in revenue per send. Delivery rate explains about 3%. The signal is not even close.
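You can reproduce the shape of that result on your own campaign data with a one-predictor regression. The sketch below uses synthetic data — the coefficients and noise level are illustrative assumptions, not the real dataset — but the structure of the comparison is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic campaigns -- illustrative only

inbox_rate = rng.uniform(0.40, 0.95, n)      # varies widely
delivery_rate = rng.uniform(0.97, 1.00, n)   # barely varies at all
# Assume revenue per send is driven mostly by inbox placement, plus noise.
revenue_per_send = 0.5 * inbox_rate + rng.normal(0, 0.06, n)

def r_squared(x, y):
    """Variance in y explained by a single predictor x (simple OLS)."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

print(f"inbox placement R^2: {r_squared(inbox_rate, revenue_per_send):.2f}")
print(f"delivery rate  R^2: {r_squared(delivery_rate, revenue_per_send):.2f}")
```

Part of the reason delivery rate explains so little is restriction of range: when every campaign sits between 97% and 100%, there is almost no variance left for it to explain.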
How to measure inbox placement
Seed-based testing is the only reliable method. You send a specimen message to a panel of real mailboxes across the providers that matter to you, and the test platform reports which folder each copy landed in. Good testing platforms also capture authentication results (SPF/DKIM/DMARC), spam engine scores (SpamAssassin, Rspamd), DNS health (MX, PTR, blacklists) and content analysis.
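The observation step can be automated over IMAP: after sending a seed message with a unique Message-ID, log into each seed mailbox and look for it folder by folder. A sketch — host, credentials, and folder names below are placeholders that vary by provider:

```python
import imaplib

def find_folder(host, user, password, message_id,
                folders=("INBOX", "Junk", "Spam", "Promotions")):
    """Return the first folder containing the seed message, or None.

    Folder names are provider-specific (Gmail exposes tabs differently,
    for instance), so the candidate list here is an assumption.
    """
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        for folder in folders:
            status, _ = imap.select(folder, readonly=True)
            if status != "OK":
                continue  # this provider doesn't expose that folder name
            status, data = imap.search(
                None, "HEADER", "Message-ID", message_id)
            if status == "OK" and data[0]:
                return folder
    return None  # not found anywhere: possibly silently dropped
```

Run against every seed mailbox, the return values feed directly into the per-provider placement report described earlier.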
Run a test before every major campaign and after any change to your DNS records, ESP, sending domain or content template. For ongoing monitoring, a weekly seed test on your transactional and marketing streams is enough to catch reputation drift before it becomes a revenue problem.
When delivery rate is still useful
Delivery rate is not useless — it is a list hygiene and infrastructure signal. A sudden drop from 99% to 93% tells you that addresses are bouncing, which is usually a list quality or authentication issue. A persistent delivery rate below 95% means you should clean your list with a validation service before you worry about folder placement.
But once delivery rate is sitting above 98%, it stops being informative. All the interesting variance — and all the revenue — is in the inbox placement number your ESP is not showing you.
The rule: if a metric can be 99% while your campaigns flop, it isn't the right metric. Delivery rate is a prerequisite, not a goal. Inbox placement rate is the goal.