Strategy · 9 min read

From vanity to value: an executive rewrite of the email scorecard

The typical email scorecard is five vanity metrics in a trench coat. Here is the line-by-line rewrite that an executive can actually act on.

Every marketing team has an email scorecard. It appears in the weekly review, the monthly readout, sometimes the quarterly pack. In almost every organisation, that scorecard has five metrics on it, and four of them are vanity. Opens, clicks, list size, campaign volume, and — if you are lucky — conversion. Of those, only conversion survives executive scrutiny.

This article rewrites the scorecard from the ground up with an executive audience in mind. Each vanity metric is swapped for a value-anchored replacement, and the logic is shown line by line so you can defend each swap to a skeptical team.

The core principle

An executive scorecard metric must be (1) denominated in revenue or a clean proxy, (2) resistant to gaming, and (3) actionable through specific levers. Four of the five typical metrics fail at least two of these tests. Fix them, or take them off the scorecard.

The typical scorecard

Email Scorecard — Typical (2027)
───────────────────────────────

1. Open rate                22%        (target 25%)
2. Click-through rate        3.5%      (target 4%)
3. Conversion rate           0.8%      (target 1%)
4. Unsubscribe rate          0.2%      (target <0.3%)
5. List size                 145,000   (growing)

Status: yellow.

Four of these numbers are either actively misleading or uninformative at the executive level. Only conversion rate survives unchanged. Let us rewrite each.

Line 1: Open rate → Inbox placement rate

Problem with open rate: inflated by pixel prefetch (Apple MPP and similar), uncorrelated with revenue, not comparable across campaign types, and it anchors the team on a false-positive signal.

Replacement: inbox placement rate. The percentage of messages reaching the primary inbox, measured via seed-list across major providers.

Why it works:
  - Directly measurable with existing tooling.
  - Leading indicator of revenue (placement drops precede revenue drops).
  - Not gameable by pixel prefetch.
  - Provides per-provider detail when drops occur.

Target: 85–90% blended. Below 75% is an incident.
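From seed data, the blended rate is just a weighted average across providers, with per-provider detail kept for drill-down. A minimal sketch, assuming the seed tool returns per-provider counts of (messages in primary inbox, seeds tested); the data shape and names here are illustrative:

```python
# Hypothetical seed-list results: provider -> (seeds_in_primary_inbox, seeds_total).
SEED_RESULTS = {
    "gmail":   (18, 20),
    "outlook": (14, 20),
    "yahoo":   (17, 20),
}

def blended_placement(results):
    """Percentage of all seed messages that landed in the primary inbox."""
    inboxed = sum(hit for hit, _ in results.values())
    total = sum(n for _, n in results.values())
    return 100.0 * inboxed / total

def worst_provider(results):
    """Provider with the lowest placement rate, for per-provider drill-down."""
    return min(results, key=lambda p: results[p][0] / results[p][1])
```

With the sample numbers above, the blended rate lands below the 85% target while the per-provider view points straight at the provider to investigate.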

Line 2: Click-through rate → Verified click rate

Problem with click-through rate: inflated by security scanners (Microsoft Safe Links, Mimecast, Proofpoint), which follow every link in the message on delivery. In security-heavy B2B environments, scanner clicks can account for 40–60% of reported clicks.

Replacement: verified click rate — clicks filtered to human timing signatures (excluding clicks within the first few seconds of delivery and perfectly sequential link-traversal patterns), ideally cross-referenced with session-level analytics on the landing page.

Why it works:
  - Filters the scanner noise that dominates raw clicks.
  - Aligns with actual human interest.
  - Is stable across providers, unlike raw CTR, which is skewed upward in Outlook-heavy audiences.

Target depends on campaign type; the signal is the trend, not the absolute level.
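The timing and sequence filters can be sketched as below; the five-second cutoff and the sequential-traversal heuristic are illustrative assumptions, not industry-standard values, and the click record shape is hypothetical:

```python
from datetime import datetime, timedelta

def verified_clicks(clicks, delivered_at, min_delay_s=5):
    """Drop likely scanner clicks: anything within the first few seconds
    of delivery, plus recipients whose clicks traverse links in perfect
    sequential order (a link-scanner signature). Each click is assumed to
    be a dict with 'recipient', 'clicked_at', and 'link_index' keys."""
    human = [c for c in clicks
             if (c["clicked_at"] - delivered_at).total_seconds() >= min_delay_s]
    by_recipient = {}
    for c in human:
        by_recipient.setdefault(c["recipient"], []).append(c)
    verified = []
    for recs in by_recipient.values():
        recs.sort(key=lambda c: c["clicked_at"])
        order = [c["link_index"] for c in recs]
        if len(order) > 1 and order == list(range(order[0], order[0] + len(order))):
            continue  # perfect sequential traversal: treat as a scanner
        verified.extend(recs)
    return verified
```

In production the same filter is usually strengthened by cross-referencing the surviving clicks against landing-page sessions, since scanners rarely execute page JavaScript.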

Line 3: Conversion rate → Revenue per thousand sent (RPM)

Problem with conversion rate: conversion rate alone is close to right, but it hides the denominator. A 1% conversion on 10k sends is a different business than 1% on 1M sends, and the scorecard should show that.

Replacement: revenue per thousand sent (RPM), or its equivalent for non-commerce programmes (verified actions per thousand, pipeline per thousand). Same denominator, revenue numerator.

RPM = (email-attributed revenue / messages sent) * 1000
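The formula translates directly to code; the revenue and volume figures in the comment are illustrative, not benchmarks:

```python
def rpm(attributed_revenue, messages_sent):
    """Revenue per thousand messages sent.

    e.g. roughly $11k of attributed revenue on 145k sends is
    about $76 per thousand sent.
    """
    return attributed_revenue / messages_sent * 1000
```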

Why it works:
  - Denominated in dollars, so it lands in the P&L.
  - Normalises for volume, so "send more" is not trivially winning.
  - Sensitive to placement, so it tracks deliverability in the headline metric.

Target: set based on historical baseline, with a monthly trending chart.

Line 4: Unsubscribe rate → Net list health

Problem with unsubscribe rate: measures exit only, ignores entry, misses complaints and bounces that are more damaging, and treats the list as static.

Replacement: net list health — new subscribers minus all forms of loss (unsubscribes, hard bounces, complaints), expressed as a monthly percentage.

Net list health = (new subs - unsubs - bounces - complaints) / list size * 100
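As with RPM, the formula is a one-liner; the sample figures below are illustrative:

```python
def net_list_health(new_subs, unsubs, bounces, complaints, list_size):
    """Monthly net list health as a percentage of current list size.
    Positive means the reachable list is genuinely growing; negative
    means losses (including complaints) outpace acquisition."""
    return (new_subs - unsubs - bounces - complaints) / list_size * 100
```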

Why it works:
  - Captures the full flow, not just one side.
  - Differentiates a growing programme from a shrinking one even when
    unsub rates look similar.
  - Includes complaints, which matter more than unsubs for reputation.

Target: +2% to +5% per month for healthy growth; negative triggers investment in list-growth work.

Line 5: List size → Engaged list size

Problem with list size: inflates over time as subscribers drift to inactivity. A 145k list where 40k are active is not a 145k list for decision-making purposes.

Replacement: engaged list size — subscribers with any verifiable engagement (click, reply, conversion) in the last 90 days.

Why it works:
  - Matches the audience that can plausibly be reached with commercial intent.
  - Drops placement-degraded and inactive segments out of the headline size.
  - Tracks real list value, not gross list size.

Target: engaged fraction above 30% of total list size for mainstream B2C programmes; higher for tight B2B programmes.
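The 90-day engagement filter is a straightforward cutoff query; the subscriber record shape here is a hypothetical example, with `last_engaged` being the timestamp of the most recent verifiable engagement (or None for never):

```python
from datetime import datetime, timedelta

def engaged_list(subscribers, now, window_days=90):
    """Subscribers with any verifiable engagement (click, reply,
    conversion) inside the window."""
    cutoff = now - timedelta(days=window_days)
    return [s for s in subscribers
            if s["last_engaged"] is not None and s["last_engaged"] >= cutoff]

def engaged_fraction(subscribers, now):
    """Engaged list size as a fraction of total list size."""
    return len(engaged_list(subscribers, now)) / len(subscribers)
```

Note that opens are deliberately excluded from "verifiable engagement" here, for the same pixel-prefetch reason open rate was removed from the scorecard.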

The rewritten scorecard

Email Scorecard — Value-Anchored (2027)
───────────────────────────────────────

1. Inbox placement rate       85%        (target 88%)     ▼ 2pp
2. Verified click rate         2.1%      (trend)          ●
3. Revenue per thousand (RPM)  $76       (target $80)     ▲ 4%
4. Net list health            +2.8%/mo   (target >+2%)    ●
5. Engaged list size           52k       (target 55k)     ▼

Status: yellow on placement and engaged size; investigating.

Same five-line format. Same colour scheme. Same meeting cadence. Different metrics. Executives can act on this scorecard in a way they cannot act on the original.

Start with the placement swap

The highest-leverage line to rewrite first is open rate → inbox placement rate. Inbox Check gives you free per-provider placement in under two minutes, and a paid API for continuous measurement. Run one test and see the real number — you now have the numerator for Line 1 of the value-anchored scorecard.

Executing the rewrite

Seven-day plan for teams that want to make the swap:

  1. Day 1: Measure current inbox placement with a free test; capture the number.
  2. Day 2: Calculate RPM from the last four quarters' revenue and send volume.
  3. Day 3: Segment verified clicks out of total clicks (filter <5s clicks and sequential scanner signatures).
  4. Day 4: Compute engaged list size from 90-day engagement data.
  5. Day 5: Produce the rewritten scorecard as a parallel report to the existing one.
  6. Day 6: Present to the head of marketing with the rationale for each swap.
  7. Day 7: Agree the switch date and retire the old scorecard at the next monthly review.

Handling the transition politically

Two common forms of resistance:

"Our open rate is how our agency reports success."

Your agency reports on what the scorecard rewards. Change the scorecard and the reporting changes. If the agency cannot produce placement and RPM numbers, it is time to ask why, not to preserve the old metric.

"The new numbers look worse."

They look more honest. The old numbers were not better performance; they were less accurate reporting. Executives generally prefer accurate reporting once past the first quarter of discomfort.

FAQ

Should we keep open rate as a footnote for continuity?

Yes, in an appendix. Historical continuity has some value; headline status does not. Move it to a secondary section so comparison is possible without anchoring behaviour.

What if our attribution to email revenue is weak?

Use a conservative attribution model consistently. The absolute RPM number matters less than its movement. A stable attribution method applied consistently produces a useful trend even if the absolute value is imperfect.

How do we handle lifecycle and transactional emails in the same scorecard?

Separate scorecards. Lifecycle and transactional have different target bands and incident thresholds; mixing them destroys the meaning of the metrics. Two separate five-line scorecards are the right structure.

Do we need to rewrite the practitioner scorecard too?

The practitioner scorecard can keep a larger diagnostic set (including per-campaign opens and clicks for subject-line testing). The rewrite is specifically for the executive-facing scorecard.