Salesforce email runs across three distinct surfaces, and each one has its own deliverability failure mode. Marketing Cloud Email Studio pushes large volume from shared or dedicated IPs through the Marketing Cloud MTA. Sales Cloud Cadences (formerly known as High Velocity Sales) send rep-level email through a connected inbox or through Einstein Activity Capture. Service Cloud case emails go out from the org's sending profile. The three surfaces do not share reputation, they do not share authentication setup, and they do not fail the same way. An enterprise without a per-surface placement test program is flying blind.
The cost of a bad send at enterprise scale is not recoverable. A Marketing Cloud journey that lands forty percent in Junk across a ten-million-record audience burns subscriber trust you cannot rebuild in a quarter. A Cadence where the signing key is mismatched across the sales team cuts pipeline directly. Both are detectable with a seed placement test the morning before activation.
Marketing Cloud Email Studio — the enterprise volume lane
Email Studio is the high-volume arm. Authentication here depends on whether your tenant uses Sender Authentication Package (SAP) or not:
- With SAP. Salesforce provisions a dedicated sending domain for your tenant, a dedicated IP (or dedicated segment of a shared pool), and a custom reply management address. SPF, DKIM, and the private domain's DNS live on your registrar with CNAMEs pointing into Salesforce.
- Without SAP. You send from a shared IP under a Salesforce-owned domain, which caps your deliverability ceiling. Any tenant doing meaningful B2C volume needs SAP — without it, the fate of your sends is tied to every other Salesforce customer's reputation.
Even with SAP, three configuration points quietly determine placement:
- DKIM selector alignment. Marketing Cloud signs under a selector published on your sending domain. If the d= value does not align with the header From domain, DMARC strict alignment fails, and enterprise DMARC policies at p=reject will then reject the entire send.
- CloudPages and tracking domain. The tracking link rewrite domain (typically cl.exct.net or a branded variant) affects content classification. Branded tracking domains with clean history measurably outperform shared ones.
- IP warm-up schedule. New dedicated IPs need a multi-week warm-up. Sending one million messages on day one from a cold IP will land a substantial fraction in Junk regardless of authentication.
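The alignment point above is mechanical enough to sketch. This is an illustrative check, not Salesforce code: dmarc_dkim_aligned is a hypothetical helper, and the relaxed-mode logic uses a naive two-label heuristic where real validators consult the Public Suffix List.

```python
def dmarc_dkim_aligned(dkim_d: str, from_domain: str, mode: str = "s") -> bool:
    """Return True if the DKIM d= domain aligns with the header From
    domain under DMARC rules. mode "s" = strict, "r" = relaxed."""
    dkim_d = dkim_d.lower().rstrip(".")
    from_domain = from_domain.lower().rstrip(".")
    if mode == "s":
        # Strict alignment: the two domains must match exactly.
        return dkim_d == from_domain
    # Relaxed alignment: organizational domains must match. Naive
    # two-label heuristic; production validators use the Public Suffix List.
    org = lambda d: ".".join(d.split(".")[-2:])
    return org(dkim_d) == org(from_domain)
```

A message signed with d=mail.example.com but sent from example.com passes relaxed alignment and fails strict alignment, which is exactly the case that trips a DMARC policy with adkim=s.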
A common failure mode: a Marketing Cloud tenant with SAP, a DMARC policy at p=reject, and a third-party vendor (a transactional mailer or a survey tool, say) sending under the same domain without its own aligned DKIM. The third-party mail gets rejected outright while the marketing mail is fine, and nobody can reproduce the problem because nobody is looking at the third-party sender.
Sales Cloud Cadences — the rep-level lane
Cadences (and underlying Einstein Activity Capture) send through the rep's connected email account. The sender is a real person, the infrastructure is Google or Microsoft, and the placement question is per-user.
This is where enterprise deliverability most often fails quietly. Sales operations focuses on Cadence step configuration and conversion metrics; nobody owns per-rep placement. The symptom is uneven reply rates across the sales team that get blamed on "bad territories" or "weak reps" when the root cause is a connected inbox reputation problem affecting one segment of users.
Per-rep seed testing in an enterprise org
With 50 or 500 sales reps, manually running a placement test per rep every week does not scale. The workable pattern is:
- Weekly sampled placement test across a rotating subset of reps (say twenty percent), so every rep gets tested at least once every five weeks.
- Triggered placement test on any rep whose reply rate drops more than one standard deviation below team median.
- Mandatory placement test on any rep before they are promoted to a new region or product line that will significantly change their sending pattern.
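The rotation and trigger rules above can be sketched in a few lines. Both helpers (weekly_test_cohort, needs_triggered_test) are hypothetical names for illustration, assuming reply rates arrive as fractions per rep from whatever reporting you already have:

```python
import statistics

def weekly_test_cohort(reps: list[str], week: int, fraction: float = 0.2) -> list[str]:
    """Deterministic rotating subset: with fraction=0.2, every rep is
    selected once every five weeks."""
    n = max(1, round(len(reps) * fraction))
    ordered = sorted(reps)                 # stable order across weeks
    start = (week * n) % len(ordered)
    # Wrap around the roster so the cohort size stays constant.
    return [ordered[(start + i) % len(ordered)] for i in range(n)]

def needs_triggered_test(rep_reply_rate: float, team_rates: list[float]) -> bool:
    """Flag a rep whose reply rate is more than one standard deviation
    below the team median."""
    median = statistics.median(team_rates)
    sd = statistics.stdev(team_rates)
    return rep_reply_rate < median - sd
```

The deterministic rotation (sorted roster, fixed stride) matters more than it looks: a random sample each week can leave a given rep untested for months, while the stride guarantees full coverage every cycle.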
DKIM, SPF, and DMARC at enterprise scope
The single biggest source of enterprise Salesforce placement problems is domain governance — who is allowed to sign as @your-corp.com, and who is actually doing so.
- DKIM key inventory. Every vendor sending as your domain needs its own DKIM selector. Enterprises routinely have fifteen or more: Marketing Cloud, Pardot, a survey tool, a ticketing vendor, three regional ESPs, a calendar scheduler, a customer feedback tool. Keep a written inventory of which selector belongs to which vendor.
- SPF discipline. The ten-lookup SPF limit is a constant problem at enterprise scope. Delegate third-party senders onto subdomains with their own SPF records rather than piling every include onto the primary SPF.
- DMARC staged rollout. Never jump straight to p=reject. Spend at least ninety days at p=none consuming aggregate reports, fix every aligned-failure source, then move to p=quarantine with pct=10, gradually raising pct to 100 before promoting to p=reject.
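The SPF lookup budget and the DMARC rollout stages both lend themselves to small checks. A sketch under stated assumptions: spf_lookup_count is a simplified static counter (it does not recurse into included records, which RFC 7208's ten-lookup limit does cover), and dmarc_record just assembles the TXT value for each rollout stage with a placeholder rua address.

```python
def spf_lookup_count(record: str) -> int:
    """Count the SPF terms in a single record that cost a DNS lookup
    (include, a, mx, ptr, exists, redirect). Simplified: does not
    recurse into included records."""
    count = 0
    for term in record.split():
        t = term.lstrip("+-~?")            # strip qualifiers
        if t.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif t == "a" or t.startswith(("a:", "a/")):
            count += 1
        elif t == "mx" or t.startswith(("mx:", "mx/")):
            count += 1
        elif t == "ptr" or t.startswith("ptr:"):
            count += 1
    return count

def dmarc_record(stage: str, pct: int = 100,
                 rua: str = "mailto:dmarc@example.com") -> str:
    """Build the DMARC TXT value for a rollout stage."""
    assert stage in ("none", "quarantine", "reject")
    rec = f"v=DMARC1; p={stage}; rua={rua}"
    if stage == "quarantine" and pct < 100:
        rec += f"; pct={pct}"
    return rec
```

Running spf_lookup_count over your primary domain's record before every vendor onboarding is a cheap way to catch the permerror before a mailbox provider does.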
A native integration for leading CRMs is in private beta: placement tests run inside the CRM, with alerts on drops.
A pre-activation checklist for any new automation
Before flipping the switch on a new Marketing Cloud journey or a new Sales Cloud cadence, walk the list:
[ ] Sending domain SPF under 10 lookups, no permerror.
[ ] DKIM selector confirmed on the sending domain, d= aligns with header From.
[ ] DMARC policy matches rollout stage (none / quarantine / reject).
[ ] For Marketing Cloud: branded tracking domain live, no 404s on test links.
[ ] For Marketing Cloud: IP warm-up stage matches today's planned volume.
[ ] For Cadences: the specific reps being enrolled have passed a seed test within the last 14 days.
[ ] The message template has been sent to the seed set, not just previewed.
[ ] Seed placement across the audience's primary providers is >= 85% inbox.
[ ] Rollback plan documented: what triggers a pause, who calls it.

Reading enterprise placement data
Enterprise teams often drown in data — IP reputation scores, feedback loop counts, postmaster dashboards from five providers, DMARC aggregate reports. The placement test cuts through it by answering one question directly: where did this specific message, sent from this specific configuration, actually land? Everything else is a proxy.
When the placement test says ninety-two percent inbox across the seed set and the launch metrics say thirty percent open rate, you have a content or targeting problem, not a deliverability problem. When the placement test says sixty percent Junk and open rate is bad, you stop the send and fix the root cause. The business question becomes answerable in minutes instead of weeks of post-mortem.
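That triage logic reduces to a small decision function. A minimal sketch with illustrative thresholds (triage is a hypothetical name; the 85% inbox floor echoes the checklist above, and expected_open is whatever baseline your own campaigns set):

```python
def triage(inbox_pct: float, open_rate: float,
           inbox_floor: float = 0.85, expected_open: float = 0.15) -> str:
    """Map a seed-test inbox rate plus launch open rate to a next action.
    Thresholds are illustrative, not prescriptive."""
    if inbox_pct < inbox_floor:
        # Placement failed: stop and fix before burning more reputation.
        return "pause send: deliverability problem, fix root cause"
    if open_rate < expected_open:
        # Mail is landing; the audience just is not engaging.
        return "delivery is fine: content or targeting problem"
    return "healthy"
```

The point of encoding it at all is that the decision stops depending on who is in the room: the same inputs always produce the same call.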