
Unique vs total clicks: both are lies

Every ESP reports two numbers. Both are wrong in different ways. Here is the honest accounting and what to replace them with.

Open your ESP's campaign report. You see "unique clickers" and "total clicks". Unique seems trustworthy — one per person. Total seems like useful depth — more clicks means more engagement. Neither holds up against modern email infrastructure.

What "unique" actually means

Unique clicks count distinct recipients who clicked at least one link. The deduplication key is usually the tracking token bound to the recipient. A single human clicking three links counts once.

But a single enterprise recipient can trigger clicks from:

  • Their company's secure email gateway.
  • Their own mail client pre-fetch.
  • Their endpoint security agent.
  • An archival indexer two hours later.
  • Their actual human click.

All five fire against the same tracking token. Your ESP counts one unique clicker. The number is accurate in the "we saw the token" sense and meaningless for engagement.
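The collapse above can be sketched with a toy event log. The tokens and source labels below are invented for illustration; a real ESP log only exposes token, timestamp, user agent, and IP:

```python
# Toy click log for one campaign: (tracking_token, source).
# Source labels are invented for illustration; a real ESP log only
# exposes token, timestamp, user agent, and IP.
events = [
    ("tok-123", "secure-email-gateway"),
    ("tok-123", "client-prefetch"),
    ("tok-123", "endpoint-security"),
    ("tok-123", "archival-indexer"),
    ("tok-123", "human"),
    ("tok-456", "secure-email-gateway"),  # recipient who never clicked
]

total_clicks = len(events)                               # what "total" reports
unique_clickers = len({tok for tok, _ in events})        # what "unique" reports
human_clicks = sum(src == "human" for _, src in events)  # what you wanted

print(total_clicks, unique_clickers, human_clicks)  # 6 2 1
```

Six events, two "unique clickers", one human. Both headline numbers over-report, each by a different factor.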

What "total" actually measures

Total is the raw event count. It tells you how many times a tracker fired. Scanners dominate it. Headless browsers dominate it. And one curious engineer forwarding the email to their team multiplies it again.

In one B2B audit we did, 73% of total clicks were scanner events. The company's "click depth" dashboard — a favourite exec metric for measuring "engagement quality" — was essentially a bot-activity chart.

The two metrics have opposite biases

Unique clicks compress noise and signal into a single token per recipient. Total clicks over-weight recipients with the most aggressive security stack. Averaging them does not help — you are averaging two different errors.

Three failure modes worth knowing

The "every recipient is unique" illusion

If your list quality is declining and more messages land in spam, fewer scanners touch them, so your human-click share paradoxically grows. That composition shift means unique CTR can hold flat while real engagement drops, or vice versa, masking the underlying deliverability problem.

Total clicks driven by image maps

Emails with clickable image maps (feature grids, "click each product" layouts) rack up total clicks from scanners rendering the HTML and firing one click per region. Total clicks-per-delivered becomes a function of HTML complexity, not interest.

Unique CTR skewed by domain mix

If you add a large enterprise segment to your list, unique CTR jumps because scanners create artificial unique-token clicks. The content did not change. The list changed. Your dashboard attributes a subject-line A/B "lift" that is actually demographic.
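A toy calculation makes the mix effect concrete. The segment numbers below are invented, not from any audit: the human click rate is identical in both segments, and the enterprise segment merely adds scanner-generated "unique" clicks on top:

```python
# Illustrative numbers: both segments have the same 2% human click rate;
# the enterprise segment adds scanner-generated "unique" clicks on top.
smb_delivered, smb_human, smb_scanner = 10_000, 200, 50
ent_delivered, ent_human, ent_scanner = 10_000, 200, 800

ctr_before = (smb_human + smb_scanner) / smb_delivered
ctr_after = (smb_human + ent_human + smb_scanner + ent_scanner) / (
    smb_delivered + ent_delivered
)
human_ctr = (smb_human + ent_human) / (smb_delivered + ent_delivered)

print(f"{ctr_before:.2%} -> {ctr_after:.2%}, human CTR {human_ctr:.2%}")
# 2.50% -> 6.25%, human CTR 2.00%
```

Unique CTR more than doubles while the human click rate sits unchanged at 2% — exactly the "lift" a dashboard would misattribute to content.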

Metrics that hold up

  1. Human-only unique CTR — unique clickers minus known-scanner tokens, divided by delivered. Not perfect (it misses scanners without known signatures) but much cleaner than raw unique CTR.
  2. Landing-page verified clicks — clicks that triggered a landing-page analytics event (GA4, Amplitude, server-side). The extra hop strips most bots because scanners rarely render JS.
  3. Replies per delivered — smaller volume but honest. Bots do not reply. Use this especially for newsletters and sales outreach.
  4. Click-to-conversion ratio — if this drops while CTR rises, your CTR is picking up noise, not engagement. Good leading indicator for re-audit.
  5. Revenue per delivered email — the honest top-line. Slow to compute, but immune to bot noise.
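Metric 1 can be sketched as a user-agent signature filter. The events and the signature list below are illustrative placeholders; in practice you would build the list from your ESP's bot-filtering data or your own log analysis:

```python
# Illustrative scanner signatures; maintain a real list from your own logs.
SCANNER_SIGNATURES = ("barracuda", "proofpoint", "mimecast")

# Hypothetical raw click events: (tracking_token, user_agent).
events = [
    ("tok-1", "Mozilla/5.0 (Windows NT 10.0) Chrome/124.0"),
    ("tok-1", "Barracuda Link Protection"),
    ("tok-2", "Proofpoint URL Defense"),
    ("tok-3", "Mozilla/5.0 (Macintosh) Safari/605.1.15"),
]
delivered = 1_000

def is_scanner(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(sig in ua for sig in SCANNER_SIGNATURES)

# A token counts as human if at least one of its events
# lacks a known scanner signature.
human_tokens = {tok for tok, ua in events if not is_scanner(ua)}
human_only_unique_ctr = len(human_tokens) / delivered

print(f"{human_only_unique_ctr:.2%}")  # 0.20%
```

Here tok-2 is dropped entirely (only a scanner ever touched it), while tok-1 survives because a real browser event sits alongside the gateway click.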

A minimal dashboard rebuild

If you can only keep three numbers per campaign:

1. Inbox placement rate       (% of delivered that reached Primary/Inbox)
2. Human-verified CTR          (landing-page-event clicks / delivered)
3. Conversion per delivered    (revenue or goal completions / delivered)

None of these three are corrupted by scanners. Two of them (placement, conversion) are insulated from ESP-side event distortions entirely.

Placement is the missing leg

Most teams already measure the other two. Placement is the gap. Inbox Check gives you placement data per mailbox provider in minutes — Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail — free, no signup, and also available via API if you want to integrate it into your pre-send pipeline.

How to migrate your team off CTR-first reporting

  1. Do not remove CTR from dashboards. People panic. Add the new metrics alongside, label them clearly.
  2. Pair each A/B test with a placement check. When both variants land in Primary equally, comparing CTR is at least fair. When one variant lands in Promotions, CTR differences are noise.
  3. Quarterly audit — decompose CTR into human / scanner / prefetch and share the breakdown. After two quarters the exec team understands and stops asking about the raw number.
  4. Send-time optimisation — use conversion or human CTR. Never total or unique CTR.

FAQ

If neither unique nor total clicks is reliable, what did the industry use before CTR?

Open rate (even worse today because of Apple's Mail Privacy Protection and image proxies) and unsubscribe rate. Unsubscribe rate is actually useful because it is a human action with a cost to the user — no bot unsubscribes.

Is there a 'click uniqueness' score I can compute?

Some senders compute clicks-per-click-token-per-hour and flag outliers. A human rarely clicks the same link 20 times in 10 minutes. That filter catches some sandbox behaviours cheaply.
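That filter is cheap to sketch. The events below and the threshold of 20 are illustrative assumptions, not a recommended cutoff:

```python
from collections import Counter
from datetime import datetime

# Hypothetical events: (tracking_token, click timestamp).
events = [
    # 25 clicks on one token inside a single hour — sandbox-like behaviour
    ("tok-9", datetime(2024, 5, 1, 9, m)) for m in range(0, 50, 2)
] + [
    ("tok-7", datetime(2024, 5, 1, 10, 5)),  # a plausible human click
]

THRESHOLD = 20  # illustrative cutoff: >20 clicks on one token in one hour

# Bucket clicks by (token, hour) and flag buckets over the threshold.
per_token_hour = Counter(
    (tok, ts.replace(minute=0, second=0, microsecond=0)) for tok, ts in events
)
flagged = {tok for (tok, _), n in per_token_hour.items() if n > THRESHOLD}

print(flagged)  # {'tok-9'}
```

The bucketing is deliberately coarse; it catches machine-speed repetition without needing user-agent signatures at all.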

Do ESPs plan to fix this?

Incrementally. HubSpot and Marketo have added 'bot filtering' features. They catch signatured scanners but not the long tail. Assume any ESP filter is 50 to 70% accurate.

What single metric should I trust most?

Conversion per delivered email. It is immune to click-tracking noise and captures what matters. The downside is lag; pair it with placement for a real-time sanity check.

Check your deliverability across 20+ providers

Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail and more. Real inbox screenshots, SPF/DKIM/DMARC, spam engine verdicts. Free, no signup.

Run Free Test →

Unlimited tests · 20+ seed mailboxes · Live results · No account required