
A2A vs MCP — not a rivalry, a stack

MCP gives a single model tools. A2A lets autonomous agents delegate to each other. They sit at different layers. Here is when you reach for which.

Two protocols, two different problems, a lot of confused integrators. MCP (Model Context Protocol, Anthropic, 2024) gives a single LLM access to a set of tools. A2A (Agent-to-Agent Protocol, Google, 2025) lets autonomous agents delegate work to each other. They are not competitors any more than HTTP and SMTP are competitors. This article explains the differences, shows where each wins, and demonstrates a hybrid pattern where the two protocols compose cleanly.

TL;DR

Building a tool that a single agent should be able to call? Ship MCP. Building a service that other autonomous agents should be able to consult as peers? Ship A2A. Serving both audiences? Ship both — they share the same business logic underneath.

Origins

MCP was introduced by Anthropic in November 2024 as a way to stop everyone hand-rolling tool adapters for every LLM client. It formalised the "model calls external function" pattern into a JSON-RPC-based protocol that Claude Desktop, Cursor, Zed, Continue and a growing list of clients now speak natively.

A2A came from Google's Cloud and DeepMind teams in mid-2025. The problem statement was different: autonomous agents built on different platforms (Vertex AI, LangGraph, CrewAI, custom stacks) had no standard way to hand work off to each other. A2A formalised discovery, capability advertisement, and task exchange between peers.

Scope differences

The core design split:

  • MCP scope: one model, many tools. The model reasons, the tool does exactly what it is asked, control returns to the model.
  • A2A scope: one agent delegating a bounded task to another agent that does its own reasoning. The peer agent decides how to fulfil the task; it might use MCP tools internally to do so.

That layering matters. A peer agent reached through A2A is itself often an MCP host for its own internal tools. A2A is about delegation across reasoning boundaries; MCP is about execution within one.

Discovery models

MCP discovery is static. The client has a configuration file listing the servers it should spawn, along with their commands and environment variables. At runtime, the client calls tools/list to enumerate what each server exposes, but which servers exist is a config-time decision.
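Concretely, that config-time decision looks like this — a minimal sketch in the shape Claude Desktop uses (the server name, path, and environment variable here are illustrative, not real values):

```json
{
  "mcpServers": {
    "inbox-check": {
      "command": "node",
      "args": ["/path/to/ldm-inbox-check-mcp/dist/index.js"],
      "env": { "INBOX_CHECK_API_KEY": "<your key>" }
    }
  }
}
```

The client spawns each listed server at startup; nothing outside this file is discoverable.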

A2A discovery is dynamic. A peer reads another peer's /.well-known/agent.json at runtime, parses the capabilities, and decides whether to delegate. Dynamic discovery enables agents to reach each other without either side's operator knowing in advance.
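A runtime discovery check can be sketched in a few lines. The field names here ("name", "capabilities") follow the spirit of the Agent Card but are illustrative — consult the A2A spec for the canonical schema:

```python
# Sketch: fetch a peer's Agent Card at runtime and decide whether to delegate.
import json
from urllib.request import urlopen

def fetch_agent_card(base_url: str) -> dict:
    """Read the peer's card from the well-known path — no prior config needed."""
    with urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

def supports(card: dict, capability: str) -> bool:
    """True if the peer advertises the capability we want to delegate."""
    return any(c.get("name") == capability for c in card.get("capabilities", []))
```

A caller that gets False simply moves on to another peer — neither operator had to know about the other in advance.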

Authentication

MCP servers typically load credentials from environment variables set by the client that spawned them. The LLM never sees the key. This is a single-tenant model: one developer, one client, one tool server. Authentication is a local concern.

A2A is peer-to-peer over HTTPS, and the Agent Card advertises which auth schemes the peer accepts (bearer tokens, OAuth2, mTLS). Peers are often different organisations. Authentication is an inter-org concern and demands a stronger model.

Transport

MCP originally spoke over stdio — the client spawns the server as a subprocess and they communicate over standard streams. HTTP transport was added later for hosted servers but is still the minority case.

A2A is HTTPS-native. Peers live on different hosts in different orgs by default. Long-running tasks use async delivery with callback URLs or optional Server-Sent Events streams.

Task model

Under MCP, a tool call is synchronous and short. The model asks the server to run a function; the server runs it and returns the result. Anything long-running (a placement test taking two minutes) fits awkwardly — you typically start a job and have the model poll a status tool until it flips to "complete".
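The workaround looks roughly like this. The tool names mirror the start_test/get_test pair mentioned later in this article; call_tool stands in for a real MCP client invocation:

```python
# Sketch of the MCP polling workaround for long-running work:
# start a job, then poll a status tool until it completes.
import time

def run_placement_test(call_tool, poll_interval=5.0, timeout=180.0):
    job = call_tool("start_test", {})  # assumed to return {"job_id": ...}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_test", {"job_id": job["job_id"]})
        if status["state"] == "complete":
            return status["result"]
        time.sleep(poll_interval)
    raise TimeoutError("placement test did not finish in time")
```

In practice the model itself drives this loop turn by turn, which burns tokens and latency on every poll.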

A2A treats long-running work as a first-class case. Tasks declare mode: async and the caller registers a callback URL; the responder posts the result when ready. For a deliverability agent whose work takes minutes, this is the right shape.
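Building such a task envelope can be sketched as follows — the field names are illustrative rather than the normative A2A schema:

```python
# Sketch: an A2A-style async delegation envelope with a callback URL.
import uuid

def build_task(capability: str, payload: dict, callback_url: str) -> dict:
    return {
        "task_id": str(uuid.uuid4()),
        "capability": capability,
        "mode": "async",               # responder posts the result when ready
        "callback_url": callback_url,  # where the responder delivers it
        "input": payload,
    }
```

The caller POSTs this once and goes back to its own work; no polling loop, no blocked turn.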

Side by side

Dimension              MCP (Model Context Protocol)     A2A (Agent-to-Agent)
---------              -----------------------------     --------------------
Author                 Anthropic (2024)                  Google (2025)
Layer                  Tool use within one agent         Delegation across agents
Discovery              Config file + tools/list          /.well-known/agent.json
Transport              stdio (typical) or HTTP           HTTPS only
Auth model             Local env var                     Bearer / OAuth2 / mTLS
Task model             Synchronous function call         Sync or async with callback
Natural consumer       Claude Desktop, Cursor, IDEs      Agent frameworks, pipelines
Natural producer       Developer tools                   Specialised service agents

When MCP wins

  • You want a developer at a keyboard to invoke your service through Claude Desktop or Cursor.
  • Your operations are short and have obvious function signatures.
  • The caller is always the same LLM, one client at a time, with local credentials.

Our MCP server at ldm-inbox-check-mcp is the canonical example. A developer opens Claude Desktop, types "test this template", Claude calls start_test and get_test, the developer reads the verdict.

When A2A wins

  • Another service — not a developer's chat client — wants to consume your capability unattended.
  • The work is long-running and should not block the caller.
  • The caller is itself an autonomous agent composing multiple peers.
  • You want discoverability: other teams should be able to find you without a custom integration.

A hybrid example

Here is how the two protocols compose in practice, using our deliverability agent as the integration point.

  1. A reviewer agent (LangGraph) is asked to vet an outbound cold email draft.
  2. The reviewer reads our A2A Agent Card, sees inbox-placement-test, and POSTs a task.
  3. Inside our service, the agent that services that capability runs its own tools through an internal MCP server — one tool for seed-mailbox dispatch, one for header parsing, one for result aggregation.
  4. The placement test completes; our agent POSTs the verdict back to the reviewer's callback URL.
  5. The reviewer decides whether to approve the draft.
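Steps 3 and 4 above can be sketched as a single handler. The tool names follow the three tools described in step 3 but are hypothetical, and mcp_call / post_json stand in for real MCP and HTTP clients:

```python
# Sketch: the A2A-facing agent services a task by composing its own
# internal MCP tools, then delivers the verdict to the caller's callback URL.
def handle_placement_task(task: dict, mcp_call, post_json) -> None:
    dispatch = mcp_call("dispatch_seed_mailboxes", task["input"])
    headers = mcp_call("parse_headers", {"run_id": dispatch["run_id"]})
    verdict = mcp_call("aggregate_results", {"headers": headers})
    post_json(task["callback_url"], {
        "task_id": task["task_id"],
        "verdict": verdict,
    })
```

Note the layering: A2A appears only at the edges (the task in, the callback out), while MCP does all the work in the middle.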

A2A sits at the boundary between organisations. MCP sits inside each agent. Neither protocol is trying to do the other's job.

An anti-pattern to avoid

Do not wrap MCP tools in A2A just because A2A is newer. If the consumer is a single LLM calling functions, MCP is lighter, faster, and better-supported by IDE clients. Adding A2A for no reason means an extra HTTP hop and worse UX in Claude Desktop.

Choosing for a new integration

A decision checklist:

  1. Will the primary consumer be a human using an AI IDE or chat client? → MCP.
  2. Will the primary consumer be another autonomous service? → A2A.
  3. Is the operation long-running (> 10 seconds) and inherently async? → A2A handles this natively; MCP needs workarounds.
  4. Do you want discoverability across organisations? → A2A.
  5. All of the above? → Ship both. The business logic underneath is the same.

Frequently asked questions

Is one going to replace the other?

No. They solve different problems. Over the next few years you will see more services shipping both — Anthropic's and Google's teams have both publicly framed them as complementary.

Can I call an MCP server from an A2A agent?

Yes. An A2A agent is just a service; inside, it can host any number of MCP servers for its own reasoning. That is the hybrid pattern.

Which one is easier to ship first?

MCP, usually. The SDK and patterns are more mature, the client landscape is larger (Claude Desktop, Cursor, Zed), and you get value in a developer's IDE immediately. A2A pays off once you have peer agents to consume the capability.

Do I need to support both to be "agent-ready"?

Only if you have both audiences. If your product is niche and developer-focused, MCP alone is fine. If you are building infra for autonomous pipelines, A2A is mandatory and MCP is a nice-to-have.