After a string of articles arguing that clicks are inflated, delayed, or artificial, you might reasonably conclude that CTR belongs in the bin. That would be an overcorrection. CTR remains a useful directional signal — as long as you know when it earns the right to be trusted.
This is the nuanced take: what CTR still does well, what it does poorly, and a practical policy for the B2B marketer who cannot wait for revenue data on every send.
Where CTR still earns its keep
Within-campaign comparisons on the same recipient set
If variant A and variant B go to randomly split halves of the same list on the same day, scanner noise is approximately the same in both halves. CTR differences remain interpretable. Not perfect — the underlying engagement rate is still inflated — but the direction of lift is sound.
Trend detection on a stable audience
Month-over-month CTR for the same newsletter to the same list (minus churn) is a useful trend indicator. A sharp drop often signals deliverability trouble rather than content trouble, because scanner noise is sticky.
Subject-line and preheader testing
The path from subject line to click goes through "open". Both opens and clicks are corrupted, but they are corrupted in correlated ways. CTR as a response metric to subject-line experiments is still usable if you treat it as a noisy estimator and demand strong effect sizes.
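The "noisy estimator, strong effect sizes" rule can be made concrete. A minimal sketch, assuming a standard two-proportion z-test plus a minimum-lift buffer; the function name and default thresholds are illustrative, not a real library API:

```python
from math import sqrt, erf

def subject_line_winner(clicks_a, n_a, clicks_b, n_b,
                        min_lift=0.15, alpha=0.05):
    """Declare a subject-line winner only when the observed lift clears
    a noise buffer (min_lift) AND a two-sided two-proportion z-test
    rejects at alpha. Returns "A", "B", or None (no decision)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Demand a strong observed effect before even testing significance.
    if p_a == 0 or abs(p_b - p_a) / p_a < min_lift:
        return None
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled click rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return ("B" if p_b > p_a else "A") if p_value < alpha else None
```

A 25% lift on 10,000 recipients per arm passes both gates; a 5% lift fails the buffer and returns no decision, which is the point.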
Unsubscribe-link click tracking
Nobody pre-fetches the unsubscribe URL. Scanners specifically avoid it to prevent list damage. If you instrument the unsubscribe click, you get a near-pure human signal — low volume but honest.
Underappreciated insight: instrument your unsubscribe mechanism with a tracker fire before the redirect. The result is the one click event per campaign you can fully trust.
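A minimal sketch of that instrumentation, as a framework-agnostic handler your redirect endpoint would call. The event store and redirect URL are placeholders, not a real ESP API:

```python
from datetime import datetime, timezone

# Hypothetical in-memory event store; in production this would be
# your analytics pipeline or warehouse.
unsubscribe_events = []

def handle_unsubscribe_click(recipient_id: str, campaign_id: str) -> str:
    """Record the unsubscribe click, then return the URL to redirect to.

    Because scanners avoid unsubscribe links, every event recorded here
    is a near-pure human signal."""
    unsubscribe_events.append({
        "recipient": recipient_id,
        "campaign": campaign_id,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    # Placeholder target -- substitute your real unsubscribe page.
    return f"https://example.com/unsubscribe?rid={recipient_id}"
```

The tracker fires before the redirect, so the click is captured even if the recipient never completes the unsubscribe form.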
Where CTR lies loudest
Absolute CTR benchmarks
"Industry average B2B CTR is 4.2%" is meaningless. The benchmark is polluted in the same ways your own data is polluted, and the mix of enterprise vs SMB lists in the benchmark does not match yours. Use internal trend data, not external benchmarks.
Lift claims below 15%
With scanner noise contributing 20% to 40% of observed clicks in B2B, a reported 8% lift from a new design might be 2% or it might be negative after noise correction. Require at least 15% to 20% observed lift before taking action, unless you have cleaned the data.
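To see how a small observed lift evaporates under noise correction, here is a back-of-envelope calculation. The bot-share estimates are assumptions you would supply from your own filtering:

```python
def human_lift(ctr_a, ctr_b, bot_share_a, bot_share_b):
    """Observed lift after removing an estimated bot share of clicks
    from each variant."""
    h_a = ctr_a * (1 - bot_share_a)
    h_b = ctr_b * (1 - bot_share_b)
    return (h_b - h_a) / h_a

# An 8% observed lift (5.0% -> 5.4%) with equal 30% bot share survives:
round(human_lift(0.050, 0.054, 0.30, 0.30), 3)   # 0.08

# The same observed lift with a modest bot-share imbalance goes negative:
round(human_lift(0.050, 0.054, 0.30, 0.38), 3)   # -0.043
```

An imbalance like this is plausible whenever one variant's links attract heavier scanning, which is exactly why sub-15% lifts are untrustworthy.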
Click-based send-time optimisation
Scanners click within seconds. Humans click over hours. If your send-time optimiser uses "first 60 minutes CTR" to decide the best send hour, it is optimising for the recipients most heavily scanned. Those are not your buyers.
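One defensive tactic is to drop suspiciously fast clicks before feeding the optimiser. A sketch, assuming each click event carries its delay since delivery; the cutoff value is an assumption to tune against your own click-delay histogram:

```python
from collections import Counter

def best_send_hour(clicks, scanner_cutoff_sec=120):
    """Pick a send hour from click events, ignoring clicks that arrive
    within scanner_cutoff_sec of delivery (those are mostly scanners).

    `clicks` is a list of (send_hour, delay_seconds) pairs."""
    human = Counter(h for h, delay in clicks if delay >= scanner_cutoff_sec)
    return max(human, key=human.get) if human else None

# Three instant clicks at 9am (scanner-like) lose to two slow,
# human-paced clicks at 2pm:
best_send_hour([(9, 5), (9, 8), (9, 3), (14, 3600), (14, 7200)])  # 14
```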
Cross-segment comparisons
Comparing CTR between an enterprise segment and an SMB segment is apples to oranges. The enterprise segment has more scanners. Its observed CTR is higher for reasons unrelated to message quality.
A practical B2B CTR policy
- Tier your decisions by CTR trust level. Decisions requiring high trust (launches, rebrands, pricing changes) need revenue data. Decisions requiring moderate trust (subject-line selection) can use CTR with noise filtering. Decisions requiring only directional data (copy polish) can use raw CTR.
- Always report human-filtered CTR alongside raw CTR. Label them clearly: `ctr` and `ctr_human` or `ctr_bot_adjusted`.
- Pair every CTR with placement data. A CTR improvement that coincides with a placement regression is usually illusory.
- Use revenue per delivered for quarterly reviews. It is slow and lagging, but honest. Run it with a 30-day attribution window and revisit quarterly.
- Delete benchmark screenshots from your deck. Comparing your numbers to anonymous internet averages generates more confusion than insight.
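The dual-metric reporting in the second bullet can be as simple as the sketch below. The field names follow the labels above; the bot-flagging itself happens upstream and is assumed here:

```python
def ctr_report(delivered: int, clicks: int, bot_clicks: int) -> dict:
    """Report raw and human-filtered CTR side by side, clearly labelled.

    `bot_clicks` is however many clicks your upstream filtering pipeline
    flagged as scanner activity (assumed to exist already)."""
    return {
        "ctr": clicks / delivered,                       # raw, inflated
        "ctr_human": (clicks - bot_clicks) / delivered,  # filtered
    }

# 10,000 delivered, 500 clicks, 150 flagged as bots:
ctr_report(10000, 500, 150)   # {"ctr": 0.05, "ctr_human": 0.035}
```

Shipping both numbers in every report makes the noise gap visible instead of leaving it buried in a methodology footnote.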
A worked example
Your team ships a new welcome email. Before/after CTR looks like this:
Welcome v1 (old): 18.4% unique CTR
Welcome v2 (new): 21.2% unique CTR
Observed lift: +15%
Is v2 better? Apply the filters:
After bot filtering:
Welcome v1: 11.0% human CTR
Welcome v2: 12.1% human CTR
Real lift: +10%
Conversion (signup completion per delivered):
Welcome v1: 6.2%
Welcome v2: 6.9%
Real lift: +11%
Placement:
Welcome v1: Gmail Primary 78%, Outlook Focused 64%
Welcome v2: Gmail Primary 81%, Outlook Focused 67%
Three signals agree: v2 is better. The raw CTR lift (+15%) was the loudest signal and slightly overstated the effect. The conversion lift (+11%) is the honest answer. The placement improvement tells you the gain is real, not a list-mix artifact.
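The three lifts can be recomputed from the raw figures to check the rounding. Plain arithmetic, no assumptions beyond the numbers in the example:

```python
def lift(old, new):
    """Relative lift of `new` over `old`."""
    return (new - old) / old

# The three signals from the worked example.
signals = {
    "raw CTR":    lift(0.184, 0.212),   # ~ +15.2%, reported as +15%
    "human CTR":  lift(0.110, 0.121),   # = +10.0%
    "conversion": lift(0.062, 0.069),   # ~ +11.3%, reported as +11%
}

# All three agree in direction, and the raw lift overstates the effect.
assert all(v > 0 for v in signals.values())
assert signals["raw CTR"] > signals["human CTR"]
```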
Three independent signals pointing the same direction is the standard of evidence for B2B decisions. Use Inbox Check for the placement leg — free tests, no signup, and a simple API for automating the check before every send.
When to stop caring about CTR
For these specific use cases, CTR adds zero value; do not waste reporting real estate on it:
- Transactional emails — you care about deliverability and time-to-open, not CTR.
- Password reset and security notices — same.
- Internal product notifications — use in-product telemetry instead.
- Weekly digests where clicks are not the goal (informational content) — track reading time on the landing page instead, if anything.
The one-sentence summary
CTR is a noisy, directionally useful signal for within-campaign A/B tests and month-over-month trend detection on stable audiences — and nothing more. Replace it wherever a cleaner signal exists, and triangulate it with placement and conversion when it is the best available.