Open your ESP's campaign report. You see "unique clickers" and "total clicks". Unique seems trustworthy — one per person. Total seems like useful depth — more clicks is more engagement. Neither is reliable under modern email-infrastructure reality.
What "unique" actually means
Unique clicks count distinct recipients who clicked at least one link. The deduplication key is usually the tracking token bound to the recipient. A single human clicking three links counts once.
But a single enterprise recipient can trigger clicks from:
- Their company's secure email gateway.
- Their own mail client pre-fetch.
- Their endpoint security agent.
- An archival indexer two hours later.
- Their actual human click.
All five fire against the same tracking token. Your ESP counts one unique clicker. The number is accurate in the "we saw the token" sense and meaningless for engagement.
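As a sketch of why this happens, consider five click events carrying the same recipient-bound token. The event fields and source labels below are illustrative, not any particular ESP's schema:

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    token: str    # recipient-bound tracking token
    source: str   # illustrative label; real events only carry UA/IP hints
    ts: float     # seconds after delivery

# One enterprise recipient, five events: gateway, pre-fetch,
# endpoint agent, archival indexer, and the one human click.
events = [
    ClickEvent("tok-42", "secure-gateway", 0.8),
    ClickEvent("tok-42", "client-prefetch", 2.1),
    ClickEvent("tok-42", "endpoint-agent", 3.5),
    ClickEvent("tok-42", "archival-indexer", 7200.0),
    ClickEvent("tok-42", "human", 94.0),
]

total_clicks = len(events)                        # 5
unique_clickers = len({e.token for e in events})  # 1
```

Both numbers are "correct" in their own terms: five tracker fires, one distinct token. Neither tells you whether the human clicked.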
What "total" actually measures
Total is the raw event count: how many times a tracker fired. Scanner events dominate it. Headless-browser renders inflate it. A curious engineer forwarding the email to their team multiplies it.
In one B2B audit we did, 73% of total clicks were scanner events. The company's "click depth" dashboard — a favourite exec metric for measuring "engagement quality" — was essentially a bot-activity chart.
Unique clicks compress noise and signal into a single token per recipient. Total clicks over-weight recipients with the most aggressive security stack. Averaging them does not help — you are averaging two different errors.
Three failure modes worth knowing
The "every recipient is unique" illusion
If your list quality is declining and more messages go to spam, fewer scanners touch them. Paradoxically, your human-click share grows. Unique CTR can stay flat while real engagement drops, or vice versa, masking the underlying deliverability problem.
Total clicks driven by image maps
Emails with clickable image maps (feature grids, "click each product" layouts) rack up total clicks from scanners rendering the HTML and firing one click per region. Total clicks-per-delivered becomes a function of HTML complexity, not interest.
Unique CTR skewed by domain mix
If you add a large enterprise segment to your list, unique CTR jumps because scanners create artificial unique-token clicks. The content did not change. The list changed. Your dashboard reports a subject-line A/B "lift" that is actually demographic.
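A toy sketch of the list-mix effect, with made-up segment numbers: reported unique CTR jumps when the enterprise segment is added, even though the human share of clicks actually falls.

```python
# Hypothetical segments: enterprise mailboxes carry heavy scanner
# coverage, so they contribute far more scanner-token "uniques".
smb = {"delivered": 8_000, "human_unique": 160, "scanner_unique": 40}
enterprise = {"delivered": 2_000, "human_unique": 30, "scanner_unique": 180}

def unique_ctr(*segments):
    """Raw unique CTR as an ESP would report it: all tokens count."""
    clicks = sum(s["human_unique"] + s["scanner_unique"] for s in segments)
    delivered = sum(s["delivered"] for s in segments)
    return clicks / delivered

print(f"{unique_ctr(smb):.2%}")              # SMB-only list: 2.50%
print(f"{unique_ctr(smb, enterprise):.2%}")  # after adding enterprise: 4.10%
```

Human-only CTR over the same numbers moves the other way: 160/8,000 = 2.0% before, 190/10,000 = 1.9% after. The "lift" is the scanners.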
Metrics that hold up
- Human-only unique CTR — unique clickers minus known-scanner tokens, divided by delivered. Not perfect (it misses scanners without known signatures) but much cleaner than raw unique CTR.
- Landing-page verified clicks — clicks that triggered a landing-page analytics event (GA4, Amplitude, server-side). The extra hop strips most bots because scanners rarely render JS.
- Replies per delivered — smaller volume but honest. Bots do not reply. Use this especially for newsletters and sales outreach.
- Click-to-conversion ratio — if this drops while CTR rises, your CTR is picking up noise, not engagement. Good leading indicator for re-audit.
- Revenue per delivered email — the honest top-line. Slow to compute, but immune to bot noise.
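Pulled together, the cleaner metrics are a few lines of arithmetic over numbers you already have. Every figure below is illustrative, and the scanner-token set stands in for whatever signature list you maintain:

```python
# Illustrative campaign aggregates (ESP + landing-page analytics).
delivered = 10_000
unique_clicker_tokens = {"tok-1", "tok-2", "tok-3", "tok-4"}
known_scanner_tokens = {"tok-2", "tok-4"}   # matched scanner signatures
verified_clicks = 18                        # landing-page analytics events
replies = 12
conversions = 9
revenue = 2_700.0

human_unique_ctr = len(unique_clicker_tokens - known_scanner_tokens) / delivered
verified_ctr = verified_clicks / delivered
replies_per_delivered = replies / delivered
click_to_conversion = conversions / verified_clicks
revenue_per_delivered = revenue / delivered
```

Note how the denominators are all "delivered", not "sent" or "opened": open-based denominators inherit their own tracking noise.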
A minimal dashboard rebuild
If you can only keep three numbers per campaign:
1. Inbox placement rate (% of delivered that reached Primary/Inbox)
2. Human-verified CTR (landing-page-event clicks / delivered)
3. Conversion per delivered (revenue or goal completions / delivered)

None of these three is corrupted by scanners. Two of them (placement, conversion) are insulated from ESP-side event distortions entirely.
Most teams already measure the other two. Placement is the gap. Inbox Check gives you placement data per mailbox provider in minutes — Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail — free, no signup, and also available via API if you want to integrate it into your pre-send pipeline.
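A minimal sketch of the three-number rebuild, assuming you already have a placement count from a seed test and verified clicks from landing-page analytics. The function name and input figures are hypothetical:

```python
def campaign_dashboard(delivered: int, inboxed: int,
                       verified_clicks: int, conversions: int) -> dict:
    """Three numbers per campaign, none corrupted by scanner clicks."""
    return {
        "inbox_placement_rate": inboxed / delivered,
        "human_verified_ctr": verified_clicks / delivered,
        "conversion_per_delivered": conversions / delivered,
    }

dash = campaign_dashboard(delivered=10_000, inboxed=8_200,
                          verified_clicks=310, conversions=47)
```

Everything else on the old dashboard can hang off these three as drill-down, not headline.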
How to migrate your team off CTR-first reporting
- Do not remove CTR from dashboards. People panic. Add the new metrics alongside, label them clearly.
- Pair each A/B test with a placement check. When both variants land in Primary equally, comparing CTR is at least fair. When one variant lands in Promotions, CTR differences are noise.
- Quarterly audit — decompose CTR into human / scanner / prefetch and share the breakdown. After two quarters the exec team understands and stops asking about the raw number.
- Send-time optimisation — use conversion or human CTR. Never total or unique CTR.
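For the quarterly audit, the human / scanner / prefetch decomposition can start as crude as user-agent and timing rules. The signature substrings and the five-second prefetch window below are illustrative assumptions, not a vetted classifier:

```python
from collections import Counter

def classify(event: dict) -> str:
    """Crude illustrative rules; a real audit uses maintained signature
    lists, timing distributions, and fuller fingerprints."""
    ua = event["user_agent"].lower()
    if any(sig in ua for sig in ("safelinks", "proofpoint", "barracuda")):
        return "scanner"
    if event["seconds_after_delivery"] < 5:   # fired almost instantly
        return "prefetch"
    return "human"

# Three sample events: one signatured scanner, one instant fire, one human.
events = [
    {"user_agent": "Mozilla/5.0 Safelinks", "seconds_after_delivery": 1},
    {"user_agent": "Mozilla/5.0", "seconds_after_delivery": 2},
    {"user_agent": "Mozilla/5.0", "seconds_after_delivery": 340},
]
breakdown = Counter(classify(e) for e in events)
```

Even a rough split like this is enough for the quarterly share-out: the exec question changes from "why did CTR move?" to "which slice moved?".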