A free inbox placement tool has a boring, expensive secret at its core: a fleet of real mailboxes on every major consumer and business provider. Not simulated. Not SMTP sink-holes that record headers. Real accounts, logged in via real clients, with real IMAP sessions and real message histories. This is what actually lets us tell you whether Gmail put your mail in the Inbox, Promotions or Spam folder — because we opened the mailbox and looked.
This post is the long version of that sentence. Why real mailboxes matter, what we run, how we keep them alive, and why most competitors do not bother.
Why real mailboxes beat simulated ones
A lot of deliverability tools cheat. They do not actually send a message into Gmail and check the folder. They infer placement from SMTP logs, from DNSBL reputation, from content scoring. The math looks plausible and the reports look confident, but the reports are guesses.
The problem is that Gmail's filter is not a function of SPF, DKIM, DMARC, content and IP reputation alone. It is a machine-learning model whose weights the outside world cannot see, trained on engagement signals we also cannot see. The only honest way to know whether Gmail put mail in Inbox or Spam is to send that mail into a Gmail mailbox and ask Gmail where it went.
That is what a seed mailbox is: a real address we own, on a real provider, whose folder state we can read programmatically after a test message arrives.
The provider list
As of this post we maintain at least one seed mailbox on each of the following providers:
- Gmail (consumer @gmail.com)
- Google Workspace (custom domain on Google)
- Outlook / Hotmail / Live (consumer)
- Microsoft 365 (custom domain on Microsoft)
- Yahoo Mail
- AOL
- Mail.ru
- Yandex Mail
- Rambler
- GMX (Germany)
- Web.de (Germany)
- Orange (France, @orange.fr and @wanadoo.fr)
- Free.fr (France)
- Laposte (France)
- Libero (Italy)
- Seznam (Czechia)
- ProtonMail
- Tutanota
- FastMail
- iCloud Mail
- Zoho Mail
Twenty-one providers as of this writing. The list grows when a customer asks for a provider we do not cover, and we are willing to create and maintain an account there for long enough to keep results honest.
Creation strategy: one per provider
We create one seed account per provider, using the provider's standard signup flow from a residential-looking IP in a plausible geography. No account-generator services, no burner phone providers — both are detected immediately and land the account in a junk segment that will never behave like a real user's mailbox.
Each account goes through a thirty-day soak period before it is admitted to the seed pool. During soak we:
- Subscribe to five or six real newsletters (a newspaper, a developer weekly, a retailer, a utility bill) to generate organic inbound traffic.
- Send a handful of messages out to other seeds, to establish that the account sends as well as receives.
- Read some of the inbound mail on a desktop client, some on a mobile client. Mark a few as important, archive others.
- Leave the account alone for stretches of a few days so the engagement pattern is not a robotic daily heartbeat.
After thirty days we have a mailbox that looks like a mildly-used real human account. That is the one we use as a seed.
Keeping them warm
An unused mailbox gets flagged as abandoned and starts being treated like a spam trap. To prevent that, each seed has a lightweight keep-warm job: every few days it exchanges a few messages with other seeds, reads a random subset of its inbox, and clicks one or two links on real newsletters. The pattern is randomised so it does not look automated, and it runs at human-plausible hours in the seed's stated timezone.
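The scheduling half of that job can be sketched as follows. The 2–4 day cadence, the 09:00–21:00 window, and the action names are illustrative assumptions, not our exact parameters:

```python
import random
from datetime import datetime, timedelta

# Illustrative action set; the real job has more variety.
ACTIONS = ["exchange_with_peer", "read_inbox_subset", "click_newsletter_link"]

def next_keepwarm(last_run: datetime, tz_offset_hours: int):
    """Pick the next run time (UTC) and a random subset of actions.

    Cadence is every 2-4 days, jittered, at a human-plausible local hour
    (09:00-20:59 in the seed's stated timezone) -- never a fixed heartbeat.
    """
    gap_days = random.uniform(2, 4)
    candidate = last_run + timedelta(days=gap_days)
    local_hour = random.randint(9, 20)
    run_at = candidate.replace(hour=(local_hour - tz_offset_hours) % 24,
                               minute=random.randint(0, 59))
    chosen = random.sample(ACTIONS, k=random.randint(1, len(ACTIONS)))
    return run_at, chosen
```

The jitter on both the gap and the time-of-day is what keeps the pattern from collapsing into a detectable schedule.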
Read / ack patterns to avoid filter bias
Here is the subtle part that most competitors get wrong. A seed mailbox that automatically marks every test message as read, or as not-spam, will teach the provider's filter "messages from these senders are always wanted". Over time, every test sender lands in the Inbox regardless of what they sent. That produces accurate-looking-but-useless reports.
We do not touch test messages. A test send arrives, the collector records the folder placement and headers, captures an optional screenshot, and leaves the message exactly where the provider put it: no read flag, no move, no mark-as-not-spam, no star, no click, no reply. The seed learns nothing from the test, which is the entire point — anything else poisons future results.
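At the IMAP level, "read-only" means two things: select the folder with `readonly=True` so the session cannot change flags, and fetch with `BODY.PEEK[...]`, which returns data without setting `\Seen` even on a writable session. A minimal sketch — the connection object and UID come from the surrounding collector, which is not shown here:

```python
# Read-only collection: readonly=True prevents flag changes for the whole
# session, and BODY.PEEK[HEADER] fetches headers without setting \Seen even
# on a writable session. Belt and suspenders.
def collect_headers(conn, folder, uid):
    conn.select(f'"{folder}"', readonly=True)
    typ, data = conn.uid("FETCH", uid, "(BODY.PEEK[HEADER])")
    return data[0][1] if typ == "OK" else b""
```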
Rotation policy
Even with discipline, a seed drifts. Consistent arrivals from the same universe of test senders bias the filter over months. We rotate every seed roughly every six months — the old mailbox is retired, a new one is created and soaked, and the switchover happens once the new seed matches the old one on a battery of calibration sends from known-reputable senders. The companion article on seed rotation goes deeper on this mechanic.
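The switchover gate reduces to a simple comparison per calibration sender. The 3-point tolerance here is an illustrative assumption, not our actual threshold:

```python
def ready_to_switch(old_inbox_rates: dict, new_inbox_rates: dict,
                    tolerance: float = 0.03) -> bool:
    """Both dicts map calibration-sender -> inbox rate over the battery.

    The new seed is admitted only when it matches the retiring seed within
    `tolerance` on every calibration sender (tolerance is illustrative).
    """
    return all(abs(new_inbox_rates[sender] - rate) <= tolerance
               for sender, rate in old_inbox_rates.items())
```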
Monitoring drift with canary campaigns
Every hour we run a small canary campaign through each seed. The canary is two test sends from known-reputable senders (our own transactional domain, a known-good newsletter we subscribe to) and two sends from known-bad senders (a deliberately-misconfigured domain, a domain with a listed IP). We check that the reputable senders land in Inbox and the bad ones land in Spam. When the ratio drifts outside expected bands, the seed is marked suspect and excluded from production tests until we investigate.
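The verdict logic is a small function over per-send folder placements. The 90% bands below are illustrative, not our production thresholds:

```python
# Illustrative drift bands; production thresholds differ per provider.
GOOD_INBOX_MIN = 0.9   # known-good senders should land in Inbox ~always
BAD_SPAM_MIN = 0.9     # known-bad senders should land anywhere but Inbox

def canary_verdict(good_results, bad_results):
    """Each argument is a list of folder names per canary send, e.g. ["INBOX", "Spam"]."""
    good_rate = sum(f == "INBOX" for f in good_results) / len(good_results)
    bad_rate = sum(f != "INBOX" for f in bad_results) / len(bad_results)
    ok = good_rate >= GOOD_INBOX_MIN and bad_rate >= BAD_SPAM_MIN
    return "OK" if ok else "SUSPECT"
```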
Canary results (last 24h, per-seed inbox rate on known-good senders):
gmail.com (seed A) ..... 100% (expected ~100%) OK
gmail.com (seed B) ..... 96% (expected ~100%) OK
outlook.com (seed A) ..... 100% (expected ~100%) OK
mail.ru (seed A) ..... 40% (expected ~100%) SUSPECT -> investigate
protonmail.com (seed A) ..... 100% (expected ~100%) OK
...

Handling provider account suspensions
Providers suspend seed accounts fairly often. Usually the trigger is a Too Much Unusual Activity signal: login from a new IP range, too many inbound messages from previously-unseen senders, a reputation complaint from one of the test senders. When a seed is suspended, we do not fight the provider. We retire the account, create a new one, and start its soak period. The placement reports for that provider are marked degraded on the public status page until the new seed exits soak.
Cost per provider per month
A rough breakdown, per provider, averaged over a year:
- Consumer free (Gmail, Outlook, Yahoo, Mail.ru, Yandex, AOL, GMX, Web.de, Orange): roughly $0 in account fees, but call it $3–5 of amortised engineering and monitoring time.
- Paid consumer (iCloud, FastMail): $3–10 per month plus the same $3–5 in engineering.
- Workspace providers (Google Workspace, Microsoft 365, Zoho): $6–15 per month per mailbox, plus domain and DNS, plus the overhead.
Twenty-one providers average out to somewhere around $200–300 a month in direct costs and roughly a day a week of engineering attention in normal times. In a bad month (two suspensions and a rotation) it is half an engineer. This is genuinely why most tools do not do it.
Why most competitors do not bother
Running a seed pool is unglamorous operational work. It does not demo well on a marketing site, it cannot be announced as a feature, it takes continuous attention, and the instinct of most founders is to replace it with a simulation as soon as possible. The simulation ships in a weekend, demos fine, and is wrong.
The only argument for keeping a real pool is the one we started with: you cannot honestly tell a customer where Gmail put their mail unless you open a Gmail mailbox and look.