Metrics · 7 min read

Open rate is a lie: why inbox placement is the only metric that matters

Apple Mail Privacy Protection inflated opens to 80%+ overnight in 2021. Outlook blocks pixels by default. AI email summarisers open messages before humans see them. If you're still making decisions on open rate in 2026, you're making them on fiction.

On 20 September 2021, open rates stopped measuring what they had measured for two decades. Apple shipped Mail Privacy Protection in iOS 15, and every image in every email sent to an Apple Mail user started getting pre-fetched by a proxy — regardless of whether a human ever opened the message. Five years later, teams are still reporting open rates as if the metric means what it used to.

TL;DR

Open rate now measures a mix of MPP pre-fetches, Outlook's blocked-pixel behaviour, AI summariser bots and actual humans — in unknowable proportions. Any A/B test decision based on opens is effectively random. Measure clicks, replies, revenue and inbox placement instead.

What Apple MPP actually does

Apple Mail Privacy Protection, enabled by default on every iPhone, iPad and Mac running Mail.app, pre-fetches every image in every incoming email through Apple's proxy servers. It does this regardless of whether the user opens the message. The proxy loads your tracking pixel along with everything else, fires your open-tracking endpoint, and records an "open" that never happened.

Roughly 55% of email opens globally now happen in Apple Mail. That means more than half of all open events your ESP records are MPP pre-fetches, not human eyes. The actual human engagement rate could be 20% or 70% — the metric cannot distinguish. Apple also randomises the timestamp, so time-to-open is noise. And it routes through proxy IPs, so geo-location is useless.

Outlook's pixel blocking

Outlook.com and the new Outlook client (Monarch) block tracking pixels by default as of late 2023. Unlike Apple's pre-fetch, Outlook simply doesn't load the image — so your open never fires at all. The user can click "Download pictures" to load images, but the default state is silent blocking.

This cuts the other direction from MPP: Outlook users look like they never open your mail, when in reality the message may have been read cover-to-cover. Combined, MPP and Outlook's policy create a metric with false positives in one direction and false negatives in the other — which is not a metric at all.

AI summarisers open before you do

The newest source of noise: AI email clients. Superhuman AI, Shortwave, Notion Mail, Gmail's AI summaries, and enterprise Copilot integrations all fetch email bodies before the human sees them, so they can pre-compute summaries and suggested replies. Most of these fetchers load images, which fires your open pixel.

For B2B audiences, AI clients now account for an estimated 8–12% of "opens" and climbing. These are machine reads, not human ones, and their engagement patterns are useless for decision-making — a bot that opens every single message doesn't tell you anything about your subject line.

What opens now actually measure

In 2026, an "open" event could be:

  • An Apple MPP pre-fetch from the Apple proxy.
  • An AI summariser bot at Superhuman, Shortwave or Copilot.
  • An antispam sandbox at Google, Microsoft or Proofpoint fetching the body for scanning.
  • A security product detonating links in an appliance before delivery.
  • An actual human opening the email on a device without pixel blocking.

None of these are tagged. Your ESP dashboard shows one number. It could be 85% with ten real human reads or 85% with eight hundred real human reads. You cannot tell from the data.
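
You can partially untangle the mix yourself if your pixel endpoint logs user-agent and remote IP. A minimal sketch of that bucketing; the heuristics here are illustrative, not exhaustive (Apple's MPP proxies sit in Apple-owned IP space, Gmail's image proxy identifies itself in the user-agent, and the bot markers are a hypothetical starter list):

```python
import ipaddress

# Illustrative heuristics -- adapt to what your pixel endpoint actually logs.
APPLE_NET = ipaddress.ip_network("17.0.0.0/8")  # Apple-owned space used by MPP proxies

def classify_open(user_agent: str, remote_ip: str) -> str:
    """Best-effort bucketing of a single open-pixel fire."""
    ua = user_agent.lower()
    if ipaddress.ip_address(remote_ip) in APPLE_NET:
        return "mpp_prefetch"
    if "googleimageproxy" in ua:
        return "gmail_proxy"  # Gmail's image cache; could be bot or human behind it
    if any(marker in ua for marker in ("bot", "crawler", "preview", "proofpoint")):
        return "scanner"
    return "possible_human"   # residual bucket: unexplained, not proven human

print(classify_open("Mozilla/5.0", "17.58.100.1"))
```

Note that "possible_human" is a residual bucket, not a confirmation — it only means none of the known machine signatures matched.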

Why A/B testing on opens is broken

Every serious marketer has run subject-line A/B tests. Variant A gets 42% opens, Variant B gets 39%, ship Variant A to the rest of the list. That methodology was shaky in 2020; in 2026 it is dead.

MPP is content-agnostic. It pre-fetches every image regardless of the subject line. If half your audience is on Apple Mail, half of your open signal has zero correlation with the subject line you're testing. You can run the test with two identical variants and get a "significant" 3% difference just from random MPP variance in which accounts checked mail while the test was running.
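
The failure mode is easy to demonstrate with a toy A/A simulation. This sketch uses assumed numbers (55% Apple share, 22% human open rate among everyone else); both "variants" are identical, yet the recorded open rates drift apart on MPP noise alone:

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

def recorded_open_rate(n=2000, human_open=0.22, apple_share=0.55):
    """Opens logged for one arm of an A/A test (identical content both arms)."""
    opens = 0
    for _ in range(n):
        if random.random() < apple_share:
            opens += 1                 # MPP pre-fetch: fires regardless of content
        elif random.random() < human_open:
            opens += 1                 # a non-Apple human actually opened it
    return opens / n

a, b = recorded_open_rate(), recorded_open_rate()
print(f"variant A: {a:.1%}   variant B: {b:.1%}   gap: {abs(a - b):.1%}")
```

The gap between the two arms is pure sampling noise, yet it is exactly the kind of difference a dashboard would report as a "winner".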

The fix is to A/B test on signals that still reflect human behaviour: click-through rate, reply rate, and revenue per send. These require larger sample sizes, but the signal is real.

What to measure instead

  1. Reply rate for any conversational or sales-oriented mail. A human must type to reply. Impossible to fake with MPP. Weight this highest.
  2. Click-through rate on body links. Harder to fake, though AI clients and security sandboxes also click. Still the cleanest engagement signal for marketing email.
  3. Revenue per thousand sends (RPM). The only bottom-line metric that can't be spoofed by a proxy. If RPM is flat, nothing else matters.
  4. Inbox placement rate. Measures whether the message got the chance to be opened at all, by any reader. Correlates directly with every downstream metric.
  5. Unsubscribe and complaint rates. Both require human action by definition. Low complaint rate is a reputation signal ISPs actually care about.
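
If your ESP export gives you raw counts, the list above reduces to a few lines of arithmetic. A sketch with made-up numbers; the field names are hypothetical, so map them to whatever your platform actually exports:

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    sends: int
    unique_clicks: int
    replies: int
    revenue: float       # attributed revenue, in your currency
    unsubscribes: int
    complaints: int

def report(s: CampaignStats) -> dict:
    """The numbers worth putting at the top of a weekly report."""
    return {
        "reply_rate":     s.replies / s.sends,
        "click_rate":     s.unique_clicks / s.sends,
        "rpm":            s.revenue / s.sends * 1000,  # revenue per 1000 sends
        "unsub_rate":     s.unsubscribes / s.sends,
        "complaint_rate": s.complaints / s.sends,
    }

# Illustrative example numbers:
stats = CampaignStats(sends=50_000, unique_clicks=1_800, replies=240,
                      revenue=6_500.0, unsubscribes=95, complaints=12)
print(report(stats))
```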

How to rebuild your reporting

If you're still running a weekly campaign report with "Open rate" at the top, rewrite it. Move opens below the fold as a diagnostic, not a KPI. Put placement rate, reply rate, click rate and RPM at the top. If you want a single headline number for engagement, use unique_clicks / sends — cleaner than opens, correlated with revenue, and much harder to game.

For subject-line testing, switch to measuring clicks within the opened email (which still correlate with whether the human read the message) or reply rate on cold outreach. Run your tests on larger populations to compensate for the weaker signal.
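
"Larger populations" can be made concrete with a back-of-the-envelope power calculation. This sketch uses the standard two-proportion normal approximation; the baseline and lift numbers are illustrative:

```python
from math import ceil, sqrt

def sample_size_per_arm(p_base, lift, z_alpha=1.96, z_power=0.8416):
    """Subscribers per variant to detect p_base -> p_base + lift.
    Two-sided two-proportion z-test, normal approximation
    (defaults correspond to alpha = 0.05, power = 0.80)."""
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return ceil(n)

# Detecting a reply-rate lift from 3% to 4% takes thousands of sends per arm:
print(sample_size_per_arm(0.03, 0.01))
```

Low-base-rate signals like replies need arm sizes in the thousands, which is the real cost of moving off opens — and the reason smaller lists should test less often on bigger segments.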

The open rate refugees' playbook

Three moves: stop reporting opens as a headline, lead with reply rate or RPM instead, and use inbox placement tests as the upstream proxy for "did the message have a chance". Anything else is decoration.

Frequently asked questions

Do clicks still work as a signal?

Mostly. AI clients and security sandboxes do click links, which inflates clicks for B2B audiences by maybe 5–10%. Compared to opens, clicks remain the cleanest engagement signal available.

Should I strip tracking pixels entirely?

For B2B cold outreach, yes — Gmail is increasingly treating tracking pixels as a negative signal, and the data is too noisy to be useful. For bulk marketing where you need some engagement signal, keep the pixel but weight it low.

How do I know if I'm reaching the inbox without open rate?

Inbox placement tests. Send a specimen to a panel of seed mailboxes and check the folder each landed in. This measures placement directly, without relying on pixel fires.
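
Scoring a seed panel is straightforward once you have the folder verdicts. A sketch with a hypothetical ten-mailbox panel; a real panel would also track per-provider breakdowns:

```python
from collections import Counter

# Hypothetical folder verdicts from a ten-mailbox seed panel.
verdicts = ["inbox", "inbox", "spam", "inbox", "promotions",
            "inbox", "missing", "inbox", "inbox", "spam"]

counts = Counter(verdicts)
placement_rate = counts["inbox"] / len(verdicts)
print(f"inbox placement: {placement_rate:.0%}   breakdown: {dict(counts)}")
```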

Isn't open rate still useful as a trend indicator?

Marginally. If your open rate suddenly drops 30% week-over-week, something is probably wrong with placement. But the absolute number and short-term variance are noise.

Check your deliverability across 20+ providers

Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail and more. Real inbox screenshots, SPF/DKIM/DMARC, spam engine verdicts. Free, no signup.

Run Free Test →

Unlimited tests · 20+ seed mailboxes · Live results · No account required