A2A · 8 min read

What A2A is and where email deliverability fits

A2A is Google's open Agent-to-Agent Protocol — a standard for agents to discover each other, negotiate capabilities, and exchange structured tasks. Here is what it means for a deliverability tool.

A2A — the Agent-to-Agent Protocol published by Google in 2025 — is a specification for autonomous agents to discover each other, negotiate what they can do, and exchange structured tasks. If MCP is the protocol that gives a single model access to tools, A2A is the protocol that lets two agents behave as peers. For a deliverability tool like ours, that changes the integration story: we stop being a set of tool calls invoked by one LLM and start being a capability that other agents can consult when they need a second opinion on an outbound email.

One-line positioning

MCP = a model uses your tools. A2A = an agent delegates a task to another agent. Different layers, not competitors.

A2A in one paragraph

A2A defines four things and almost nothing else: (1) an Agent Card served at /.well-known/agent.json that lists who the agent is and what it can do; (2) a task-exchange HTTP contract that peers use to send and receive work; (3) a lightweight authentication model where peers present credentials the Agent Card advertises; and (4) sync and async delivery modes, so long-running work does not block the caller. Anything else — reasoning, planning, state management — is out of scope. A2A is deliberately small, because the value is interoperability.

The Agent Card and discovery

The Agent Card is the A2A entry point. It lives at a well-known URL on the agent's HTTPS host and is the first thing any peer reads. The card lists the agent's name, version, supported protocols, auth schemes, and — most important — an array of capabilities, each with a schema and a description. A well-formed card looks similar to an OpenAPI index, but narrower: it describes what the agent does, not the shape of every endpoint on the host.

GET https://check.live-direct-marketing.online/.well-known/agent.json

{
  "name": "Inbox Check Deliverability Agent",
  "version": "1.0.0",
  "protocols": ["a2a/1.0"],
  "capabilities": [
    { "id": "inbox-placement-test", "description": "Run a seed-list placement test across 20+ mailboxes" },
    { "id": "dns-audit",             "description": "Validate SPF/DKIM/DMARC/BIMI for a sending domain" },
    { "id": "blacklist-check",       "description": "Check an IP or domain against major DNSBLs" }
  ],
  "auth": [{ "type": "bearer", "docs": "https://check.live-direct-marketing.online/docs/api" }],
  "endpoint": "https://check.live-direct-marketing.online/a2a/tasks"
}

Capability negotiation

Negotiation in A2A is not a handshake — it is a read. Peer agent A fetches peer B's Agent Card, checks whether B advertises a capability whose schema matches the task A wants to delegate, and if so, sends the task. If B advertises version 1.1 of a capability and A only speaks 1.0, A either downgrades or selects a different peer. There is no multi-round handshake. The card is the contract.

Task exchange model

A task in A2A is a JSON envelope with a capability ID, an input payload that matches the capability's schema, and a task ID. A peer POSTs the task to the endpoint published in the Agent Card, optionally including a webhook URL for async results. The receiving agent validates the envelope, then either executes immediately and returns the result inline (sync) or acknowledges and pushes a callback when finished (async). Errors come back as structured problem objects, not free-form strings.

Async vs sync modes

Deliverability work is inherently slow: a placement test has to wait for real mail to arrive in seed inboxes, which takes minutes. A2A solves this with explicit async support. The peer POSTs the task with a callback_url, receives a 202 Accepted with a task ID, and our agent POSTs the verdict to the callback when the placement test finishes. Sync mode is reserved for quick operations like DNS lookups that return in under a second.
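The receiving side's sync-or-async decision can be sketched as below. The capability split and response bodies are illustrative assumptions, not our production handler.

```python
# Quick capabilities answer inline (sync); slow ones get a 202-style
# acknowledgement and finish by POSTing the verdict to the callback URL.

SLOW_CAPABILITIES = {"inbox-placement-test"}  # waits on real mail delivery

def handle_task(task: dict, run) -> tuple[int, dict]:
    """Return (http_status, body). `run` executes the capability synchronously."""
    if task["capability"] in SLOW_CAPABILITIES and "callback_url" in task:
        # Async: acknowledge now; a worker delivers the result to callback_url later.
        return 202, {"task_id": task["task_id"], "status": "accepted"}
    # Sync: execute inline and return the result in the response body.
    return 200, {"task_id": task["task_id"], "status": "done", "result": run(task["input"])}
```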

Where deliverability fits

In an A2A-native cold-email stack, the deliverability agent sits alongside other specialised agents — a template reviewer, a DNS auditor, a send scheduler — and each one is reachable through its own Agent Card. A coordinator agent orchestrates the handoffs. Our deliverability agent is the one consulted whenever any other agent is about to send mail and wants a reality check before committing.

  • Placement verdict — run a seed-list test against a draft and return inbox/spam/missing counts per provider.
  • Authentication check — confirm SPF/DKIM/DMARC align with the proposed From domain.
  • Blacklist status — flag any sending IP or tracking domain that has drifted onto Spamhaus, SURBL or Barracuda.
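A coordinator consuming the placement verdict typically reduces the per-provider counts to a single rate before deciding what to do. The verdict shape here is an assumed example, not our documented response schema.

```python
def inbox_rate(per_provider: dict) -> float:
    """Overall inbox rate from per-provider {inbox, spam, missing} counts."""
    inbox = sum(v.get("inbox", 0) for v in per_provider.values())
    total = sum(v.get("inbox", 0) + v.get("spam", 0) + v.get("missing", 0)
                for v in per_provider.values())
    return inbox / total if total else 0.0
```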

Example multi-agent flow

A concrete pipeline from a 2026-era stack: a cold-outreach agent drafts an email for a target account, hands it to a reviewer agent for tone and compliance, then delegates to the deliverability agent to confirm the message will land in the inbox. If deliverability fails, the reviewer agent gets the verdict and rewrites. Only when the deliverability agent returns inbox_rate >= 0.9 does the scheduler agent queue the send.

  1. Cold-outreach agent drafts a message for target X. Task envelope: compose.
  2. Delegates to reviewer agent (capability review-template).
  3. Reviewer delegates to deliverability agent (capability inbox-placement-test) — async, with callback URL.
  4. Deliverability agent posts back a verdict. If it is below threshold, reviewer agent regenerates and the loop repeats.
  5. Green verdict → scheduler agent takes the handoff (capability schedule-send) and dispatches.
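The five steps above can be sketched as a coordinator loop with stubbed agents. Function names and the retry budget are illustrative; in a real stack each call is an A2A delegation and the placement test is async.

```python
def run_pipeline(draft, review, placement_test, schedule, threshold=0.9, max_rounds=3):
    """Draft -> review -> placement test; schedule only on a green verdict."""
    message = draft()                             # step 1: compose
    for _ in range(max_rounds):
        message = review(message)                 # step 2: review-template
        verdict = placement_test(message)         # step 3: inbox-placement-test
        if verdict["inbox_rate"] >= threshold:    # step 5: green verdict
            return schedule(message)              # schedule-send
        # step 4: below threshold; reviewer regenerates and the loop repeats
    return None  # give up after max_rounds
```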

Adoption status as of 2026

A2A was released by Google in mid-2025 and reached a stable 1.0 in early 2026. Adoption in that first year has been strongest in two camps: agent-framework vendors (LangGraph, AutoGen, CrewAI all ship A2A adapters) and infrastructure vendors with autonomous-workflow products. The protocol sits well in niches where several narrow agents cooperate — exactly the shape of a cold-email stack, where drafting, reviewing, testing and sending are already separate concerns in most serious tools.

Pure single-model setups that were built on MCP have not rushed to A2A, and for good reason: if one model is doing all the reasoning, peer negotiation is overhead. The split we see in the wild is that teams adopt both — A2A to coordinate long-running agent work, MCP for the tool calls inside each agent.

Not either/or

MCP and A2A solve adjacent problems. Our service publishes a valid A2A Agent Card and ships an MCP server. A peer agent calls us through A2A; a developer asking Claude for a one-off check calls us through MCP. Same verdicts, different shape.

Frequently asked questions

Is A2A going to replace MCP?

No. They sit at different layers. MCP is a tool-calling protocol for a single agent; A2A is an inter-agent delegation protocol. Most production setups use both.

Do I need an A2A client library to consume other agents?

Not strictly — it is HTTP + JSON. But the major agent frameworks (LangGraph, CrewAI, AutoGen) now ship A2A clients that handle discovery, auth and callbacks for you.

How do I know which agents are A2A-reachable?

Read the /.well-known/agent.json on any service you think might be an agent. There are also early discovery registries, but most peer lookups in practice are configured directly.

What stops a malicious agent from flooding my endpoint?

The same things that protect any HTTP endpoint: rate limits, authentication tokens advertised in the Agent Card, and per-peer quotas. A2A is a contract, not a firewall.

Check your deliverability across 20+ providers

Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail and more. Real inbox screenshots, SPF/DKIM/DMARC, spam engine verdicts. Free, no signup.

Run Free Test →

Unlimited tests · 20+ seed mailboxes · Live results · No account required