Your SPF record is in Terraform. Your DKIM selector is in Terraform. Your MX is in Terraform. Your DMARC aggregate-report endpoint is in Terraform. But the monitor that tells you whether any of the above is actually working? That one still lives in a SaaS UI somewhere, clicked together by whoever joined the team last Tuesday. This article sketches what a proper Terraform provider for an inbox placement service should look like — the early spec for what will eventually be terraform-provider-inbox-check.
This provider does not exist yet. We are publishing the spec early so teams can tell us which resources matter most, which state-vs-reality behaviours are non-negotiable, and which existing HCL patterns to follow. Target ship: H2 2026.
The case for IaC around deliverability
Deliverability is a cross-cutting concern. DNS is infrastructure. Sending IPs are infrastructure. Warm-up schedules and placement monitors are infrastructure too — they have a lifecycle, they have dependencies (you cannot monitor a domain before the DNS is in place), and they drift when people fiddle with dashboards. Terraform is the idiomatic way to express that lifecycle in every other corner of modern infra. There is no reason deliverability should be an exception.
Concretely, managing monitors via HCL buys you three things: a code review on every change, a diff you can revert, and an audit log that lines up with every other infrastructure change. Click-ops in a SaaS dashboard gives you none of those.
Resources we would expose
The core surface is small. Three resources cover 90% of the use-cases we see:
- inbox_check_test — a one-shot placement test. Create triggers the test; the resource attributes expose the result (inbox count, spam count, per-provider detail). Good for CI runs that want the result as data.
- inbox_check_monitor — a recurring placement test. Takes a schedule (cron expression), a sender domain, a template reference, and an alert threshold. This is the core resource for most teams.
- inbox_check_webhook — an endpoint registration. Events fire when a monitor run completes or crosses a threshold. Attributes include a rotating signing secret.
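For the one-shot inbox_check_test resource, a CI pipeline could gate a release on the result using a Terraform check block (available since 1.5). This is a sketch only: the attribute names (inbox_count, spam_count) follow the prose above and are not a final schema.

```hcl
# Sketch: attribute names are speculative, taken from the spec's description
# of inbox_check_test, not a published schema.
resource "inbox_check_test" "pre_release" {
  sender_domain = "news.yourbrand.com"
  subject       = "Release candidate check"
  html_template = file("${path.module}/templates/release.html")
}

# Fail the plan/apply with a warning if fewer than 90% of seeds hit the inbox.
check "placement" {
  assert {
    condition = inbox_check_test.pre_release.inbox_count >= 0.9 * (
      inbox_check_test.pre_release.inbox_count +
      inbox_check_test.pre_release.spam_count
    )
    error_message = "Inbox placement below 90% for the release candidate."
  }
}
```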
Two data sources round it out: inbox_check_seed_list (the current list of seed providers, for use in count-style resources) and inbox_check_latest_result (fetch the most recent result for a monitor, handy for dashboards).
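Usage of both data sources might look like the following. The attribute names (providers, inbox_rate) are hypothetical; the spec names the data sources but not their schemas.

```hcl
# ID of an existing monitor, supplied by the caller.
variable "welcome_monitor_id" {
  type = string
}

# Hypothetical schemas: "providers" and "inbox_rate" are illustrative names.
data "inbox_check_seed_list" "current" {}

data "inbox_check_latest_result" "welcome" {
  monitor_id = var.welcome_monitor_id
}

output "seed_provider_count" {
  value = length(data.inbox_check_seed_list.current.providers)
}

output "welcome_inbox_rate" {
  value = data.inbox_check_latest_result.welcome.inbox_rate
}
```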
Example .tf
Configuring the provider and one monitor:
```hcl
terraform {
  required_providers {
    inbox_check = {
      source  = "inbox-check/inbox-check"
      version = "~> 0.1"
    }
  }
}

provider "inbox_check" {
  # Reads INBOX_CHECK_API_KEY from env by default.
  # Explicit form:
  # api_key = var.inbox_check_api_key
}

resource "inbox_check_monitor" "welcome_email" {
  name            = "welcome-email-nightly"
  sender_domain   = "news.yourbrand.com"
  subject         = "Welcome to YourBrand"
  html_template   = file("${path.module}/templates/welcome.html")
  schedule        = "0 2 * * *" # daily 02:00 UTC
  alert_threshold = 0.80        # fire webhook if inbox rate < 80%

  tags = {
    team        = "growth"
    environment = "production"
  }
}

resource "inbox_check_webhook" "slack_bridge" {
  url      = "https://ops.yourbrand.com/hooks/inbox-check"
  events   = ["monitor.completed", "threshold.crossed"]
  monitors = [inbox_check_monitor.welcome_email.id]
}

output "webhook_secret" {
  value     = inbox_check_webhook.slack_bridge.signing_secret
  sensitive = true
}
```

Authentication handling
The provider reads INBOX_CHECK_API_KEY from the environment by default — the canonical way Terraform providers handle secrets, and the same pattern the AWS and Cloudflare providers follow. An explicit api_key argument is supported for multi-account setups. A future iteration will add short-lived OIDC-exchanged tokens for CI runners (GitHub Actions, GitLab CI) so that no long-lived key ever sits in a secrets manager.
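The multi-account case would use the standard provider-alias pattern. A sketch, assuming the api_key argument and resource names from this spec:

```hcl
# Two accounts side by side via provider aliases.
provider "inbox_check" {
  alias   = "marketing"
  api_key = var.marketing_api_key
}

provider "inbox_check" {
  alias   = "transactional"
  api_key = var.transactional_api_key
}

# Pin a resource to one account with the provider meta-argument.
resource "inbox_check_monitor" "tx_receipts" {
  provider      = inbox_check.transactional
  name          = "receipts-nightly"
  sender_domain = "receipts.yourbrand.com"
  schedule      = "0 3 * * *"
}
```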
State vs reality (drift detection)
The provider treats the API as the source of truth on Read. If someone edits a monitor in the UI, terraform plan shows the drift and terraform apply reverts it. That behaviour is non-negotiable. The open question is historical data: if a user deletes a monitor via terraform destroy, should the result history be preserved or wiped? Current intent: preserve by default, with a destroy_history = true escape hatch.
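As a sketch, that escape hatch would sit on the monitor resource itself; the argument name destroy_history is proposed, not final:

```hcl
resource "inbox_check_monitor" "experiment" {
  name          = "subject-line-experiment"
  sender_domain = "news.yourbrand.com"
  schedule      = "0 2 * * *"

  # Proposed, not final: wipe result history on terraform destroy.
  # Without this, history is preserved after the monitor is destroyed.
  destroy_history = true
}
```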
The point of an IaC provider is not just creating things in bulk — it is catching changes that happened outside the code. If your nightly monitor's threshold was quietly dropped from 85% to 40% last month by someone debugging a regression, the next terraform plan should make that visible.
Importing existing tests
Teams never start on IaC — they migrate onto it. The provider supports Terraform 1.5+ import blocks, so you can bring existing monitors under management without deleting and recreating:
```hcl
import {
  to = inbox_check_monitor.welcome_email
  id = "mon_01H9XABCDE..."
}

resource "inbox_check_monitor" "welcome_email" {
  # filled in by `terraform plan -generate-config-out=welcome.tf`
}
```

Run terraform plan -generate-config-out=welcome.tf and the provider emits the HCL skeleton for you. Review, commit, run apply — no re-creation, no downtime, no lost history.
A sample module
A common pattern: a Terraform module per sending domain, that stamps out three monitors (welcome, transactional, newsletter) and a webhook, with sensible defaults:
```hcl
module "deliverability_yourbrand" {
  source        = "./modules/deliverability"
  sender_domain = "news.yourbrand.com"
  slack_channel = "#growth-alerts"
  threshold     = 0.85
}
```

Inside the module, a handful of resources wire everything up. Plug a different sender domain in for each brand and you have monitoring as a first-class part of the platform — no clicking, no forgetting.
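One possible internal layout for such a module, using for_each to stamp out the three monitors. The variable names, schedules, and Slack-bridge URL are illustrative assumptions, not part of the spec:

```hcl
variable "sender_domain" {
  type = string
}

variable "slack_channel" {
  type = string
}

variable "threshold" {
  type    = number
  default = 0.85
}

# One schedule per monitor kind; values are illustrative.
locals {
  monitor_kinds = {
    welcome       = "0 2 * * *"
    transactional = "0 3 * * *"
    newsletter    = "0 4 * * *"
  }
}

resource "inbox_check_monitor" "this" {
  for_each        = local.monitor_kinds
  name            = "${each.key}-${var.sender_domain}"
  sender_domain   = var.sender_domain
  schedule        = each.value
  alert_threshold = var.threshold
}

resource "inbox_check_webhook" "alerts" {
  # Illustrative: route alerts through an internal Slack bridge,
  # keyed by channel.
  url      = "https://ops.yourbrand.com/hooks/inbox-check?channel=${urlencode(var.slack_channel)}"
  events   = ["threshold.crossed"]
  monitors = [for m in inbox_check_monitor.this : m.id]
}
```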
Current status (concept, targeted for H2 2026)
We are collecting requirements. The rough plan is:
- Q3 2026: private alpha with two design-partner teams. Read-only data sources first, then inbox_check_monitor.
- Q4 2026: public beta on the Terraform Registry under inbox-check/inbox-check. All three resources.
- Q1 2027: 1.0 with import-config-out support and OIDC auth.
In the meantime, every resource above is already available via the HTTP API. If you need IaC-style management today, a thin wrapper around curl plus the http provider is enough to get most of the way there.
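The read side of that workaround can lean on the hashicorp/http data source. A sketch only, under stated assumptions: the API host, endpoint path, and response fields below are illustrative, not a documented contract.

```hcl
variable "inbox_check_api_key" {
  type      = string
  sensitive = true
}

variable "monitor_id" {
  type = string
}

# Assumed endpoint shape; check the real HTTP API docs before relying on it.
data "http" "latest_result" {
  url = "https://api.inbox-check.example/v1/monitors/${var.monitor_id}/latest"

  request_headers = {
    Authorization = "Bearer ${var.inbox_check_api_key}"
  }
}

locals {
  latest = jsondecode(data.http.latest_result.response_body)
}

output "inbox_rate" {
  value = local.latest.inbox_rate
}
```

Writes (creating monitors, registering webhooks) would still need curl in a script or a local-exec provisioner, which is exactly the gap the provider is meant to close.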