Metrics · 7 min read

Open rate vs inbox placement: when they agree, when they fight

We ran 150 campaigns through both open tracking and seed-based inbox placement testing. The two metrics disagreed 61% of the time. The pattern of disagreement is instructive.

People who write about deliverability metrics tend to pick a side. Either open rate is defensible and inbox placement is a noisy sample, or open rate is broken and inbox placement is ground truth. Reality is less tidy.

To figure out how the two actually relate, we took a sample of 150 real campaigns sent by clients of an email agency over a three-month window and compared the ESP-reported open rate against an independent seed-based inbox placement score. The findings are more interesting than the headline "open rate is broken".

The setup

  • 150 campaigns from 14 senders, volumes ranging from 2,000 to 1.2 million recipients.
  • Each campaign included a 22-seed inbox placement test across Gmail, Outlook, Apple iCloud, Yahoo, Mail.ru, Yandex, ProtonMail, GMX, and several business domains.
  • Open rate reported by the ESP within 48 hours of send.
  • Inbox placement computed as percentage of seeds landing in the primary inbox.
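The placement calculation itself is simple. A minimal sketch, assuming seed results arrive as a list of per-seed records with a `folder` field (the field names are illustrative, not any specific vendor's API):

```python
# Hypothetical sketch: compute inbox placement from seed-test results.
# The record shape ({"provider": ..., "folder": ...}) is an assumption,
# not a real seed-network API.

def inbox_placement(seed_results: list[dict]) -> float:
    """Percentage of seeds whose message landed in the primary inbox."""
    if not seed_results:
        raise ValueError("no seed results")
    in_inbox = sum(1 for s in seed_results if s["folder"] == "inbox")
    return 100 * in_inbox / len(seed_results)

# 22 seeds: 17 in the inbox, 3 in spam, 2 in promotions.
seeds = (
    [{"provider": "gmail", "folder": "inbox"}] * 17
    + [{"provider": "outlook", "folder": "spam"}] * 3
    + [{"provider": "yahoo", "folder": "promotions"}] * 2
)
print(round(inbox_placement(seeds), 1))  # 17 of 22 seeds -> 77.3
```

Note that only "inbox" counts; a promotions-tab landing lowers placement the same way a spam landing does, which matches how the study scored seeds.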

Headline results

On the same 150 campaigns:

  • Mean reported open rate: 41%.
  • Mean inbox placement rate: 76%.
  • Correlation between the two (Pearson r): 0.31.

A correlation of 0.31 is weak. If one metric were a reliable proxy for the other, we would expect something north of 0.7. Instead, knowing the open rate tells you only a little about inbox placement and vice versa.
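For readers who want to reproduce the comparison on their own data, Pearson r is a few lines of arithmetic. A self-contained sketch (the sample numbers are illustrative, not the study's raw data):

```python
# Pearson correlation between two paired metric series.
def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative campaign pairs: (open rate %, placement %).
open_rate = [62, 28, 31, 55, 24, 70, 33, 41]
placement = [90, 85, 48, 52, 88, 95, 80, 60]
print(round(pearson(open_rate, placement), 2))
```

An r near 1.0 would justify treating one metric as a proxy for the other; a value around 0.3 does not.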

The four quadrants

Plot every campaign on a grid: open rate on the x-axis, inbox placement on the y-axis. Each campaign lands in one of four quadrants, and the quadrants are diagnostic.

High open, high placement (29% of campaigns)

Both metrics agree that the campaign worked. Usually transactional, welcome sequences, or highly-engaged newsletter lists. Nothing to worry about.

Low open, low placement (10%)

Both metrics agree the campaign failed. Usually indicates a reputation problem — recent blocklisting, new sending IP, poor SPF/DKIM setup, or a spam-filter penalty. Both numbers are telling you the same bad news.

High open, low placement (14%)

This is where open rate lies most clearly. The ESP reports 60%+ opens, but inbox placement is under 50%, meaning roughly half of actual recipients had the message routed to spam or promotions. The inflated open figure is Apple Mail Privacy Protection pre-fetching tracking pixels for the fraction that did reach Apple inboxes. The low placement tells you the campaign actually underperformed.

Teams that trust open rate in this quadrant keep sending and degrade their sender reputation further. Teams that check inbox placement pull back and diagnose.

Low open, high placement (47%)

The most common disagreement, and the most interesting one. Open rate says 25–35%, but inbox placement says 80%+. The mail is reaching the inbox; the open-rate metric is dragged down by Outlook's default image blocking and by enterprise tenants whose mail stack never fetches the tracking pixel.

Teams that trust open rate here conclude the campaign underperformed when in fact it reached most recipients. They may then rewrite content, change cadence, or waste optimisation cycles on a problem that does not exist.
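The quadrant logic above can be sketched as a small classifier. The cut points (40% open, 65% placement) are illustrative thresholds chosen between the figures quoted in the text, not the study's exact ones:

```python
# Illustrative quadrant diagnostic; thresholds are assumptions.
def quadrant(open_rate: float, placement: float,
             open_cut: float = 40, place_cut: float = 65) -> str:
    o = "high" if open_rate >= open_cut else "low"
    p = "high" if placement >= place_cut else "low"
    return {
        ("high", "high"): "healthy: both metrics agree the campaign worked",
        ("low", "low"): "reputation problem: diagnose sending infrastructure",
        ("high", "low"): "inflated opens: placement says it underperformed",
        ("low", "high"): "image blocking: delivery is fine, opens undercount",
    }[(o, p)]

print(quadrant(31, 82))  # low open, high placement
```

Running a whole campaign history through this and counting the quadrants reproduces the distribution reported in this section.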

The expensive misdiagnosis

In our sample, the "low open, high placement" quadrant was the single largest category. Nearly half of campaigns looked like failures by open rate and were successes by placement. That is a lot of marketing effort being redirected to solve the wrong problem.

When open rate is actually right

In fairness, open rate does carry some information. Two situations where it adds value beyond inbox placement:

  1. Time-to-engagement. Even inflated open events have timestamps. The distribution of open-event times across the first 48 hours is a usable signal for send-time optimisation, as long as you strip the first-hour MPP spike.
  2. Provider-level segmentation. Opens by provider, compared to placement by provider, can surface provider-specific issues. If Gmail placement is 90% but Gmail opens are 8%, something is blocking image loads — possibly a broken image host, not a deliverability issue.
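Stripping the first-hour spike for point 1 is a one-line filter. A sketch, where `events` is a hypothetical list of open-time offsets since send:

```python
# Drop opens inside the first hour: Apple MPP pre-fetch fires almost
# immediately after delivery, so early events are dominated by machines.
from datetime import timedelta

MPP_WINDOW = timedelta(hours=1)

def usable_open_times(open_offsets: list[timedelta]) -> list[timedelta]:
    """Keep only opens that occur after the pre-fetch window."""
    return [t for t in open_offsets if t > MPP_WINDOW]

events = [timedelta(minutes=m) for m in (2, 5, 9, 75, 200, 1400)]
print(len(usable_open_times(events)))  # 3 events survive the filter
```

The surviving distribution over the remaining 47 hours is the signal worth feeding into send-time optimisation.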

The combined dashboard

The strong recommendation, based on the data, is not to pick one metric over the other but to report both and flag disagreements. A dashboard that shows:

Campaign X performance
——————————————————————————
Inbox placement: 82% (Gmail 88 / Outlook 71 / Apple 94)
Open rate:       31% (reported)
Click rate:      4.1%
Reply rate:      0.4%

⚠ Open rate looks low relative to placement.
   Likely cause: Outlook image-blocking in enterprise tenants.
   Not a delivery problem.

gives a reader the information needed to act correctly. The anomaly flag matters more than the raw numbers — it tells the team whether to treat a divergence as a real signal or as a known artefact.

Run the comparison on your own campaigns

Inbox Check gives you per-provider placement for any campaign in under 10 minutes. Pair that with your ESP's open rate and you get the same diagnostic view we used to build the quadrant analysis above. Free test at the homepage, API docs at /docs.

When you have to pick one

If resource constraints force you to report exactly one number to leadership, pick inbox placement. The rationale:

  • Placement is closer to the outcome the business cares about (mail reaching humans).
  • Placement is not gameable by pre-fetch or scanners.
  • Placement correlates with revenue in a way open rate increasingly does not.
  • Placement declines fast when reputation drops — giving early warning that open rate buries.

Common objections

"Placement is only a sample."

True. The seeds are a statistical proxy for the rest of the list. But the sample is cleanly measured (folder inspection is ground truth per seed), whereas open rate is a full-list metric that is also a heavily contaminated proxy. A clean sample beats a noisy census.

"My ESP does not report inbox placement."

Most native ESP implementations do not. You add it with a third-party test that sends to seed mailboxes and reports back via API or webhook. Adding it to a campaign takes 30 seconds of setup.

"I need a metric that updates in real time."

Modern seed-test services produce results within 3–10 minutes of send for most providers. Not real-time, but fast enough for any practical decision loop.

FAQ

Can inbox placement be gamed?

Partially. A sender can warm up seeds differently from regular recipients, which inflates placement for the seeds. Reputable seed networks rotate and randomise addresses to make this harder.

How many seeds is statistically enough?

For a single provider, 5–8 seeds is enough to detect gross placement issues. For a reliable per-provider number across 6–8 providers, you want 15–25 seeds total.
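The seed-count guidance can be put in numbers with a standard binomial margin of error. A sketch assuming the usual normal approximation and an observed placement around 80%:

```python
# 95% margin of error (percentage points) for a seed sample of size n,
# using the normal approximation to the binomial. p = observed placement.
import math

def margin_of_error(n_seeds: int, p: float = 0.8, z: float = 1.96) -> float:
    return 100 * z * math.sqrt(p * (1 - p) / n_seeds)

for n in (5, 8, 22):
    print(n, round(margin_of_error(n), 1))  # 5 -> 35.1, 8 -> 27.7, 22 -> 16.7
```

Even 22 seeds carries a margin of roughly ±17 points, which is why a seed test reliably detects gross placement failures but should not be read as a precise percentage.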

What is the refresh cadence that makes sense?

Once per campaign for broadcasts. Daily or hourly sampling for high-volume automated flows. Continuous monitoring for mission-critical transactional mail like password reset and receipt emails.

Is there a metric that combines open and placement?

Some vendors publish a "real engagement rate" that multiplies placement by a click-derived engagement factor. It is a useful single number for executive summaries, but it hides the diagnostic value of looking at each component.

Check your deliverability across 20+ providers

Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail and more. Real inbox screenshots, SPF/DKIM/DMARC, spam engine verdicts. Free, no signup.

Run Free Test →

Unlimited tests · 20+ seed mailboxes · Live results · No account required