Make.com (formerly Integromat) is the scenario-building tool that pairs best with teams that want more power than Zapier offers without the operational overhead of self-hosting n8n. The flip side: a Make scenario iterates fast. A Google Sheets trigger with 2,000 rows and a Send Email module will fire 2,000 emails in about two minutes. If the sender domain is misconfigured, all 2,000 land in Spam, and you only get one chance to avoid that.
This article walks through a scenario pattern we have rolled out at several clients: insert a placement-test HTTP module at the top, pause the scenario, and branch on the result before iterating over the real list.
Never run a Make scenario that sends to more than 100 real recipients without a placement test on the exact subject and body that is about to go out.
The scenario pattern
The shape of the guarded scenario is six modules. Trigger → HTTP (create test) → Sleep → HTTP (fetch result) → Router → real Send on the TRUE branch, Slack alert on the FALSE branch.
- Trigger — whatever fires your campaign. Google Sheet row, Airtable record, schedule, webhook.
- HTTP: Make a request → POST to /api/tests. Send the same subject and HTML you are about to send to real recipients. Parse the response as JSON. Extract the token.
- Sleep for 5 minutes. This gives seed mailboxes time to receive and categorise.
- HTTP: Make a request → GET to /api/tests/:token. Parse the response.
- Router with two routes: one that filters on score >= 70 (real send), one that filters on score < 70 (Slack alert and halt).
- Iterator + Send Email on the TRUE route. Or your existing SendGrid / Mailgun / Gmail module.
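The six modules above map onto a short control flow. A minimal sketch in Python, with the two HTTP modules, the send, and the alert injected as callables (the helper names are illustrative stand-ins, not part of any real API):

```python
import time

PASS_THRESHOLD = 70  # mirrors the Router filter: score >= 70 sends for real


def run_guarded_campaign(campaign, create_test, fetch_result,
                         send_real, alert_slack, sleep=time.sleep):
    """One guarded run: create test, wait, fetch result, branch."""
    token = create_test(campaign)["token"]   # HTTP module 1: POST /api/tests
    sleep(300)                               # Sleep module: 5 minutes
    result = fetch_result(token)             # HTTP module 2: GET /api/tests/:token
    if result["score"] >= PASS_THRESHOLD:    # Router, TRUE route
        send_real(campaign)
        return "sent"
    alert_slack(result)                      # Router, FALSE route
    return "halted"
```

The `sleep` parameter exists only so the flow can be exercised without actually waiting five minutes; in a real script you would leave the default.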
HTTP module configuration
The HTTP module in Make accepts raw headers and bodies, which is all you need. Below are the exact headers to send when creating a placement test. Replace the token source with your own if you already have an authenticated account.
POST https://check.live-direct-marketing.online/api/tests
Content-Type: application/json
Accept: application/json
User-Agent: Make-Scenario/1.0
{
"subject": "{{ 1.subject }}",
"from": "campaigns@yourbrand.com",
"html": "{{ 1.html_body }}",
"text": "{{ 1.text_body }}"
}

The response includes a token, a seedAddresses array, and an ETA. You can also pass the seed addresses directly into a parallel Iterator + Send module so the same campaign body is delivered by your production ESP to the seed mailboxes — this is what actually measures your production placement (not just Inbox Check's own sender).
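If you ever need to reproduce the module wiring outside Make, the request body and the two fields the downstream modules extract look roughly like this (a sketch of the payload and parsing logic only; the row keys mirror the {{ 1.* }} mappings above):

```python
import json


def build_test_payload(row):
    """Body for POST /api/tests, mirroring the mapped trigger fields."""
    return {
        "subject": row["subject"],
        "from": "campaigns@yourbrand.com",
        "html": row["html_body"],
        "text": row["text_body"],
    }


def parse_create_response(body: str):
    """Extract what the later modules need: the token and the seed addresses."""
    data = json.loads(body)
    return data["token"], data.get("seedAddresses", [])
```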
Parsing the placement result
The GET response looks like this (trimmed):
{
"token": "tk_3fh2...",
"status": "complete",
"score": 87,
"providers": {
"gmail": { "placement": "inbox" },
"outlook": { "placement": "inbox" },
"yahoo": { "placement": "inbox" },
"proton": { "placement": "spam" },
"gmx": { "placement": "promotions" }
},
"authentication": {
"spf": "pass",
"dkim": "pass",
"dmarc": "pass"
}
}

Add a Router after the GET. Filter 1 passes if score >= 70. Filter 2 passes otherwise. On the alerting branch, push a Slack message with the top-level score, the three auth statuses, and a link to the full report.
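The Slack message body on the alerting branch can be assembled from the same JSON. A sketch of that mapping — note the report URL pattern at the end is an assumption, so adjust it to whatever your dashboard actually uses:

```python
def slack_alert_text(result):
    """Compact alert: score, auth statuses, off-inbox providers, report link."""
    auth = result["authentication"]
    # Providers whose placement was anything other than the primary inbox.
    bad = [p for p, v in result["providers"].items() if v["placement"] != "inbox"]
    return (
        f"Placement check failed: score {result['score']}/100\n"
        f"SPF {auth['spf']} / DKIM {auth['dkim']} / DMARC {auth['dmarc']}\n"
        f"Off-inbox: {', '.join(bad) or 'none'}\n"
        # Hypothetical report URL pattern -- verify against your account.
        f"https://check.live-direct-marketing.online/tests/{result['token']}"
    )
```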
A native integration is in private beta. It will ship as a Make app with a single Check Placement module — no HTTP wiring, no Sleep timing guesswork, automatic scenario halt on low scores.
Operational tips
Scenario runs and concurrency
Make bills per operation, and the Sleep module counts as an operation. Five minutes of Sleep is still a single operation. More important: if you have multiple concurrent scenario runs (because a trigger fires several times in rapid succession), the placement tokens and Sleep timers interleave. Give each run its own token — do not cache across runs.
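A toy illustration of why the token must be run-local: when each run closes over its own token, interleaved runs cannot read each other's results, whereas a shared cache would hand run B the token created by run A (pure Python, no Make specifics):

```python
def make_run(create_test):
    """One scenario run: capture a run-local token, never a module-level cache."""
    token = create_test()["token"]       # created at the top of THIS run
    return lambda fetch: fetch(token)    # later steps always use this run's token
```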
Staging scenarios
Build your scenario in a staging team, with a staging Google Sheet that contains only seed addresses. Once you see a stable score of 90+, clone to production and change the trigger. This is the safest way to validate the placement-test module wiring without ever touching real recipients.
When the score regresses
If the score drops overnight with no code change, check three things in order: DNS (did SPF / DMARC change?), ESP account status (is there a complaint spike?), content (did subject line drift toward spam-trigger territory?).
Cost of the guard
One placement test per scenario run, plus the Sleep operation. For a scenario that runs twice a day, that is roughly 120 extra operations a month — well under the hobby plan's allowance. The cost of one avoided blacklist incident is several orders of magnitude higher.
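The arithmetic generalises. A one-liner you can adapt, counting both the test POST and the Sleep as billed operations:

```python
def extra_ops_per_month(runs_per_day, ops_per_run=2, days=30):
    """Guard overhead: placement-test POST plus Sleep, per scenario run."""
    return runs_per_day * ops_per_run * days

# e.g. extra_ops_per_month(2) -> 120 for a twice-daily scenario
```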
FAQ
Can I skip the Sleep and poll instead?
Yes. Replace the Sleep with a Repeater plus an HTTP GET that loops until status is complete. Each poll is a billed operation, though, so a single 5-minute Sleep is usually cheaper.
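The Sleep can be replaced by a Repeater-style poll of the GET endpoint that exits as soon as status is complete. A minimal sketch of that loop, with the fetch call and clock injected so it can be exercised without a network (helper and parameter names are illustrative):

```python
def poll_result(fetch, token, sleep, interval=30, max_polls=12):
    """Repeater-style loop: check every `interval` seconds, up to ~6 minutes."""
    for _ in range(max_polls):
        result = fetch(token)            # HTTP GET /api/tests/:token
        if result.get("status") == "complete":
            return result
        sleep(interval)                  # each extra iteration is a billed operation
    raise TimeoutError("placement test did not complete in time")
```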
What if I run 100 scenarios per day?
Budget roughly 6,000 extra operations a month for the guard (two operations per run). If the subject and body do not change between runs, consider testing once per campaign instead of once per run.
Does the HTTP module support bearer tokens?
Yes. Set an Authorization header with Bearer YOUR_TOKEN. Free tests do not require authentication.
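In script form, this is the header set from the HTTP module configuration above, with the bearer header added only when a token is present (the token value is a placeholder):

```python
def auth_headers(api_token=None):
    """Headers for the placement-test request; Authorization only when set."""
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "User-Agent": "Make-Scenario/1.0",
    }
    if api_token:  # free tests: omit the Authorization header entirely
        headers["Authorization"] = f"Bearer {api_token}"
    return headers
```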