Every email marketer eventually hits the same wall. You send a campaign to 50,000 subscribers. Your ESP dashboard shows 98.4% delivered, a 22% open rate, 2.1% click-through. It looks healthy. Then a customer replies asking why they haven't heard from you in three months — and you find your last four campaigns buried in their Spam folder. The ESP reported every one of them as "delivered", because from the MTA's perspective, they were. Accepted by the receiving server is not the same as accepted by the inbox.
The gap between what your ESP shows you and what your subscribers actually see is the single biggest blind spot in email marketing. The only tool that closes it is a handful of seed addresses, added to every send, that you can log into and inspect directly. This guide walks through why the standard metrics fail, how seed testing works, how many seeds you actually need, and how to make it part of your normal workflow without slowing anyone down.
Open rate tells you how many people opened an email they already received; seed tests tell you whether the email reached the inbox at all — and those are completely different questions.
What seed emails actually are
A seed email is simply a real mailbox, on a real provider, that you own and can log into. You add its address to the recipient list of every campaign you send. When the campaign goes out, that mailbox receives the same message your subscribers receive — sent from the same IP, with the same envelope sender, the same DKIM signature, the same content. Because it is a real mailbox on a real provider, the receiving provider applies the same placement logic it applies to everyone else. If Gmail decides the message belongs in Promotions, your Gmail seed ends up in Promotions. If Outlook decides it is spam, your Outlook seed ends up in Junk.
You then either check each seed mailbox manually, or you use a tool that logs into the mailboxes through IMAP or API, looks for the message, and reports which folder it landed in. The result is a direct, provider-by-provider placement map: Gmail Inbox, Gmail Promotions, Outlook Inbox, Outlook Junk, Yahoo Inbox, and so on.
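The polling step can be sketched in a few lines of Python using the standard library's imaplib. Everything here is illustrative: the folder names are common defaults (they vary by provider; Gmail, for instance, exposes Spam as `[Gmail]/Spam`), and `find_placement` is a hypothetical helper, not any real service's API. It assumes IMAP access is enabled on the seed account.

```python
import imaplib

# Folder names differ per provider; map what IMAP reports to a
# placement category. Common defaults only -- not exhaustive.
FOLDER_MAP = {
    "INBOX": "inbox",
    "Junk": "spam",
    "Spam": "spam",
    "[Gmail]/Spam": "spam",
}

def classify(folder_name):
    """Normalize a provider folder name to a placement category."""
    return FOLDER_MAP.get(folder_name, "other")

def find_placement(host, user, password, subject):
    """Log into one seed mailbox over IMAP, search the known folders
    for the campaign subject, and return the placement category,
    or "missing" if the message is in none of them."""
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    try:
        for folder in FOLDER_MAP:
            status, _ = imap.select(f'"{folder}"', readonly=True)
            if status != "OK":
                continue  # this provider does not expose this folder name
            status, data = imap.search(None, "SUBJECT", f'"{subject}"')
            if status == "OK" and data[0].split():
                return classify(folder)
        return "missing"
    finally:
        imap.logout()
```

A real tool does the same loop across every seed in the panel and aggregates the results into the placement map described below.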
A seed panel — the set of seed mailboxes you use — is usually 5 to 25 addresses spread across the providers you care about. For B2C in North America and Western Europe, that means Gmail and Outlook.com dominate, with Yahoo, AOL, iCloud, and GMX in the tail. For B2B, it means custom Google Workspace and Microsoft 365 domains. For Russia and CIS, Mail.ru and Yandex. For privacy-focused audiences, ProtonMail and Tuta. The composition matters, and we'll come back to it.
Why the metrics you already have don't tell you enough
Your ESP hands you four metrics after every campaign: delivery rate, open rate, click rate, and unsubscribe rate. Each of them is useful, none of them tells you where your email landed, and here is why.
Delivery rate measures the wrong boundary
Delivery rate is almost always computed as (sent - hard bounces) / sent. It counts messages that the receiving MTA accepted for delivery. An MTA can accept a message and then route it straight to the Spam folder; from the sender's side, that counts as delivered. Real inbox placement on most shared ESP IP pools runs 20 to 40 points below delivery rate. A 98.7% delivery rate can sit on top of 61% inbox placement and nobody would know unless they looked.
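To make that gap concrete, here is the arithmetic for the 98.7%/61% example, in integer form:

```python
# "Delivered" counts acceptance by the receiving MTA; only seeds
# reveal placement. Numbers from the example above: 50,000 sends,
# 98.7% delivery rate, 61% inbox placement.
sent = 50_000
delivered = sent * 987 // 1000    # what the ESP dashboard reports
inboxed = delivered * 61 // 100   # what a seed test would reveal
hidden = delivered - inboxed      # messages sitting in Spam/Promotions
```

That is 49,350 "delivered" messages of which 19,247 never reached an inbox, and nothing on the ESP dashboard distinguishes them.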
Open rate has been broken by Apple Mail MPP
Since iOS 15, Apple Mail Privacy Protection (MPP) pre-fetches remote images for every email a user receives, whether or not the user actually opens the message. If your subscriber uses Apple Mail — and roughly 50-60% of consumer mail clients in the US do — their "open" fires before they ever look at the message. Open rate became a hybrid signal: part human behaviour, part automated prefetch, and the ratio is different for every audience.
That means a 22% open rate might represent 22% of people who read the email, or it might represent 10% readers plus 12% Apple prefetches from messages that were silently routed to Spam but still got prefetched. You cannot tell the difference from open rate alone.
Click rate is honest but delayed and sparse
Click rate is not affected by MPP, so clicks are a more reliable engagement signal. But typical click-through rates are 1-3%, the signal is sparse, and clicks lag hours behind sends. By the time you see that your click rate crashed, the campaign is over.
Spam complaints lag by days
Gmail Postmaster Tools refreshes domain reputation once per day and smooths it over a rolling window. Microsoft SNDS updates less often. Spam complaints trickle in over 72 hours. By the time a reputation issue shows up in these dashboards, you have already sent two or three more campaigns into the hole.
What you need is a signal that is direct (it reads the folder itself, not a proxy metric), fast (available within minutes of the send), and provider-specific (it tells you that Gmail-Promotions is fine but Outlook-Junk just broke).
That signal is seed placement.
How seed testing works end to end
The mechanics are straightforward. A seed test has four stages: add addresses, send the campaign, collect results, read the placement map.
- Add the seeds. Your seed addresses are added to the campaign recipient list — either as a dedicated seed list, a seed segment, or by pasting them into the To: field for transactional sends. To the ESP, they look like ordinary subscribers.
- Send the campaign. Nothing changes about your send. Same IP, same DKIM, same content, same send time. The seeds travel the same path as real subscribers.
- Collect the results. Each seed provider decides where to deliver the message — Inbox, Promotions, Spam, or silently drop it. The seed service polls the mailboxes and records where it found each message.
- Read the placement map. You get a table: for each provider, which folder the message landed in, how long the headers show it spent in the queue, whether SPF/DKIM/DMARC passed at the recipient side.
A good seed service returns this as JSON so you can wire it into Slack, a dashboard, or a CI check. Here is what a typical response looks like:
{
"test_id": "t_8z1q2m",
"sent_at": "2026-11-23T09:14:02Z",
"finished_at": "2026-11-23T09:17:41Z",
"summary": {
"inbox": 17,
"promotions": 3,
"spam": 2,
"missing": 0,
"pending": 0,
"inbox_pct": 77.3
},
"results": [
{ "provider": "gmail.com", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "googlemail.com", "folder": "promotions", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "outlook.com", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "hotmail.com", "folder": "junk", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "yahoo.com", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "mail.ru", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "yandex.com", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" },
{ "provider": "proton.me", "folder": "inbox", "spf": "pass", "dkim": "pass", "dmarc": "pass" }
]
}
That response is all you need to answer the two questions that matter after every send: did we land in the inbox and, if not, where did the drop happen.
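Answering those two questions can be automated in a few lines. The `summarize` helper below is a sketch against the response shape shown above, not part of any real API; the 85% threshold is an assumption taken from the healthy-sender baseline discussed later in this guide, as is the rule that any Gmail spam hit is treated as signal.

```python
def summarize(results, inbox_threshold=85.0):
    """Turn a list of per-provider seed results into a pass/investigate
    signal. A Gmail spam hit is flagged regardless of the overall
    percentage, since Gmail usually carries the largest list share."""
    total = len(results)
    inboxed = sum(1 for r in results if r["folder"] == "inbox")
    inbox_pct = 100.0 * inboxed / total if total else 0.0
    gmail_spam = [r["provider"] for r in results
                  if "gmail" in r["provider"] and r["folder"] in ("spam", "junk")]
    return {
        "inbox_pct": round(inbox_pct, 1),
        "gmail_spam": gmail_spam,
        "needs_attention": inbox_pct < inbox_threshold or bool(gmail_spam),
    }
```

Wire the `needs_attention` flag into a Slack webhook or a CI gate and the seed test becomes a pre-flight check rather than a post-mortem.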
How many seeds, and which providers
More seeds give you more confidence; fewer seeds make the workflow cheap enough that you'll actually do it on every send. The sweet spot for most senders is 15 to 25 addresses. Below 5 you start seeing noise — one Gmail account parking a message in Promotions doesn't tell you whether the rest of Gmail will do the same. Above 30 the marginal information gets thin and the cost goes up.
Panel composition
The panel should mirror where your actual audience sits. For a mostly US B2C list, a reasonable split is:
- 6-8 Gmail accounts — mix of aged consumer accounts and newly-created ones. Gmail treats these differently.
- 4-5 Microsoft accounts — outlook.com, hotmail.com, live.com, plus at least one Microsoft 365 tenant on a custom domain. O365 has a different filtering stack from consumer Outlook.
- 2-3 Yahoo/AOL accounts — still a meaningful slice of US inboxes.
- 1-2 Apple iCloud accounts — growing share, own filtering quirks.
- 1-2 ProtonMail or Tuta accounts — privacy-sensitive audiences.
- 1 GMX or Web.de account — if you send into Germany.
If you send into Russia or CIS, swap in 2-3 Mail.ru and 2 Yandex accounts. If you send into Asia, add Naver and QQ. The panel is local; a US-only panel will mislead you on a global list.
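The allocation rule above ("mirror where your audience sits") is easy to encode. This is a sketch: `build_panel` is a hypothetical helper, and the example shares are illustrative, not real market data.

```python
def build_panel(list_share, panel_size=20):
    """Allocate seed accounts roughly proportionally to the provider
    mix of the subscriber list, keeping at least one seed per provider
    so no provider you care about goes unmonitored."""
    return {provider: max(1, round(share * panel_size))
            for provider, share in list_share.items()}

# Illustrative US B2C distribution (fractions of the subscriber list).
us_b2c = {"gmail.com": 0.40, "outlook.com": 0.25, "yahoo.com": 0.12,
          "icloud.com": 0.08, "proton.me": 0.05, "gmx.de": 0.10}
panel = build_panel(us_b2c)
```

For the shares above this yields 8 Gmail, 5 Microsoft, 2 Yahoo, 2 iCloud, 1 Proton, and 2 GMX seeds — close to the hand-built split in the bullets, and trivially re-weighted when your list distribution shifts.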
Aged versus fresh accounts
Gmail and Outlook both weight account age and engagement history when scoring incoming mail. A message that lands in the Inbox of a brand-new Gmail account with zero history is a stronger positive signal than one that lands in the Inbox of a long-active account that has opened your mail before. A healthy seed panel mixes aged accounts (baseline behaviour for an engaged subscriber) and fresh accounts (baseline behaviour for a cold subscriber). If you only seed aged engaged accounts, you'll be fooled by positive bias.
How to add seeds in your ESP
The mechanics differ slightly between ESPs, but the pattern is always one of three:
- Dedicated seed list. Create a list or audience called seeds, add the seed addresses to it, and include it in the send along with your regular list. Works in Mailchimp, Brevo, Klaviyo, MailerLite.
- Seed segment or tag. Tag seed contacts with a #seed label inside your main list. Use a segment that includes the main list AND the seed tag. Works well in HubSpot, ActiveCampaign, Klaviyo, ConvertKit.
- Direct addition to campaign. Paste the seed addresses into the campaign's "Send to" field alongside the list selection. Works in GetResponse, Campaign Monitor, Amazon SES console.
Whichever pattern you pick, stick to it. Consistency matters more than picking the "best" option — seed panels are only useful when every campaign goes through them the same way.
If your panel is 20 addresses and your list is 50,000, seeds are 0.04% of recipients and the effect on reported open/click rates is negligible. But if your list is 2,000 and your panel is 20, that is 1% and it will distort your numbers. Most ESPs let you filter a tag out of reporting — do that.
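The distortion check is one division. The 0.5% cutoff below is a rule-of-thumb assumption sitting between the article's "0.04% is negligible" and "1% distorts" bounds, not an established standard:

```python
def seed_distortion(list_size, panel_size=20, threshold=0.005):
    """Fraction of recipients that are seeds, and whether that fraction
    is large enough to skew reported open/click rates.
    Threshold of 0.5% is a rule of thumb, not a standard."""
    share = panel_size / list_size
    return round(share, 4), share > threshold
```

`seed_distortion(50_000)` reports a negligible share; `seed_distortion(2_000)` trips the flag, telling you to filter the seed tag out of reporting.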
Reading the placement map
After each send, you get back something like: 17 Inbox, 3 Promotions, 2 Spam. What do you do with that?
The baseline
Healthy senders on warm domains tend to see 85-95% Inbox, 5-10% Promotions, and 0-5% Spam. A single Spam hit on one provider is noise. Three or more Spam hits, or any Spam hit on Gmail, is signal.
The trajectory matters more than the snapshot
A one-off campaign at 72% Inbox is less concerning than a gradual decline from 92% to 78% to 65% over three sends. The trajectory is what you act on. Keep seed results in a log — a Google Sheet, a database table, a Slack channel — and look at the last 10 sends every week.
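If the log lives anywhere a script can read it, the trajectory check is a three-line function. `declining` is an illustrative helper; the three-send window and the 10-point drop are assumptions you should tune to your own send cadence.

```python
def declining(history, window=3, drop=10.0):
    """True if inbox placement fell by more than `drop` points across
    the last `window` logged sends -- the trajectory signal, as in the
    92 -> 78 -> 65 example. `history` is oldest-first inbox percentages."""
    recent = history[-window:]
    return len(recent) == window and recent[0] - recent[-1] > drop
```

Run it over the log after every send: a one-off dip stays quiet, a slide across consecutive sends raises the alarm while you can still fix the cause.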
Provider-specific drops are diagnostic
If you drop to Spam on Outlook but stay Inbox on Gmail, it is almost always a Microsoft-specific reputation issue or SmartScreen content flag. If you drop on Gmail but stay Inbox on Outlook, it is almost always a Google-specific engagement or authentication problem. If you drop everywhere at once, it is usually authentication (DMARC broke, DKIM signature invalid) or content (added a new link, a new image host, a new footer).
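That decision tree is mechanical enough to encode. The `diagnose` function below is a heuristic sketch of the three patterns just described, not a complete diagnostic; the provider groupings and the three-provider cutoff for "global" are assumptions.

```python
def diagnose(results):
    """Map a placement table to the likely failure class:
    'microsoft' = Outlook-family spam while Gmail is fine,
    'google'    = the reverse,
    'global'    = spam across several providers at once (usually
                  authentication or a content change)."""
    microsoft = {"outlook.com", "hotmail.com", "live.com"}
    google = {"gmail.com", "googlemail.com"}
    spam = {r["provider"] for r in results if r["folder"] in ("spam", "junk")}
    if spam & microsoft and not (spam & google):
        return "microsoft"   # check SNDS reputation / SmartScreen content flags
    if spam & google and not (spam & microsoft):
        return "google"      # check Postmaster Tools, engagement, authentication
    if len(spam) >= 3:
        return "global"      # check DMARC/DKIM and recent content changes
    return "unclear"
```

Paired with the placement JSON shown earlier, this turns the raw folder table into a first-guess root cause before you open a single dashboard.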
Common mistakes and how to avoid them
Mistake: only seeding aged accounts
Aged accounts that have opened your mail before will land you in the Inbox almost every time. That doesn't mean a new cold subscriber would. Mix in fresh accounts to get a realistic picture.
Mistake: seeding the same panel for years
Panels age. Engagement history accumulates. After 12-18 months a panel that was "cold" starts behaving like an engaged panel because your mail keeps landing there. Rotate in fresh seeds every quarter.
Mistake: seeding only the providers you like
If 35% of your list is on Outlook and you have one Outlook seed, you will consistently undercount your Outlook problems. Weight the panel to roughly match your list distribution.
Mistake: seeding only big campaigns
Small sends and transactional sends are where reputation is actually built. Seed everything that touches subscribers — welcome series, password resets, order confirmations, re-engagement — because a single broken automation can quietly drag down reputation for weeks.
Mistake: treating seed results as pass/fail
Seeds are a diagnostic, not a verdict. A 72% Inbox result is not a "fail" — it is a prompt to investigate. Read the placement map, compare to last week, and decide whether to act.
How Inbox Check automates all of this
Running a seed panel by hand — provisioning 20 mailboxes, rotating them, logging into each one after every campaign — is the reason most teams never do it. Inbox Check automates the whole loop: each test gives you a fresh panel of 20+ seed addresses across Gmail, Outlook, Yahoo, Mail.ru, Yandex, ProtonMail, GMX and more, free, with no signup and no credit card. Paste the addresses into your campaign or hit the API from your send script, and the placement map comes back in under three minutes.