Your AI agent can write code, draft emails, and analyze spreadsheets. But it cannot check your company’s Jira board, query your internal database, or hand off a task to a specialized agent on another team’s server. That is the problem MCP and A2A solve, and solving it required two separate protocols because the problems are fundamentally different.
The Model Context Protocol (MCP), created by Anthropic in November 2024, standardizes how AI agents connect to tools and data sources. The Agent2Agent (A2A) protocol, launched by Google in April 2025, standardizes how agents discover each other and collaborate on tasks. Both are now open source under the Linux Foundation, governed by organizations that include OpenAI, Microsoft, AWS, Salesforce, and SAP.
Together, they form the foundation for how multi-agent systems will work in production.
MCP: How AI Agents Access Tools and Data
Think of MCP as USB-C for AI. Before USB-C, every device had its own connector. Before MCP, every AI integration required custom code. An agent that needed to read from GitHub, query a Postgres database, and send a Slack message required three separate integrations, each with its own authentication flow, data format, and error handling.
MCP replaces this with a single protocol. One standard interface for any tool, any data source, any service.
How the Architecture Works
MCP uses a client-server model with three components:
MCP Host: The AI application layer, the thing your user interacts with. Claude Desktop, Cursor, or your custom application. The host receives user requests and orchestrates access to external resources.
MCP Client: Lives inside the host. Translates requests into MCP’s structured format and maintains a 1:1 connection with a specific MCP server. Each client handles session management, error handling, and response validation. IBM BeeAI, Microsoft Copilot Studio, and Claude.ai all function as MCP clients.
MCP Server: The bridge to external systems. An MCP server wraps a tool or data source (GitHub, Slack, a database, a file system) and exposes it through three mechanisms:
- Resources: Read-only data retrieval (fetching a file, reading a database record)
- Tools: Actions with side effects (creating a Jira ticket, sending a message, running a query)
- Prompts: Reusable templates for common LLM-server interactions
Messages travel as JSON-RPC 2.0 over two transports: stdio for local connections (fast, low-overhead) and Streamable HTTP for remote services, which superseded the original HTTP-plus-SSE transport and can still use Server-Sent Events to stream responses.
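Under the hood, every call is an ordinary JSON-RPC 2.0 message. A rough sketch of the framing (the `tools/call` method name follows the MCP spec; the ticket tool and its arguments are purely illustrative):

```python
import json

# A client invoking a server-side tool. The method and params layout
# follow MCP's tools/call request; "create_ticket" is a made-up tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",
        "arguments": {"project": "OPS", "title": "Disk alert"},
    },
}

# A matching success response reuses the same id so the client can
# correlate it with the pending request.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Created OPS-123"}]},
}

wire = json.dumps(request)  # this string is what travels over stdio or HTTP
assert json.loads(wire)["method"] == "tools/call"
assert response["id"] == request["id"]
```

The transport is interchangeable precisely because the payload is the same either way; only the channel carrying these strings differs.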
MCP Adoption in Numbers
The ecosystem has grown faster than anyone predicted. As of early 2026, unofficial registries index over 17,000 MCP servers covering everything from developer tools to Fortune 500 enterprise deployments. Monthly downloads went from roughly 100,000 in November 2024 to over 8 million by April 2025. CData projects that by end of 2026, 75% of API gateway vendors and 50% of iPaaS vendors will ship MCP features.
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation co-founded by Anthropic, Block, and OpenAI. Platinum members now include AWS, Bloomberg, Cloudflare, Google, and Microsoft. This means no single company controls MCP’s direction.
# Conceptual MCP server exposing a tool (illustrative API, not the exact SDK)
from mcp import Server

server = Server("jira-integration")

@server.tool("create_ticket")
async def create_ticket(project: str, title: str, description: str):
    """Create a Jira ticket in the specified project."""
    # jira_client is assumed to be an authenticated Jira API client
    ticket = await jira_client.create_issue(
        project=project,
        summary=title,
        description=description,
    )
    return {"ticket_id": ticket.key, "url": ticket.permalink()}
What MCP Does Not Do
MCP connects agents to tools. It does not connect agents to each other. If you have a research agent that needs to hand off findings to a writing agent running on a different server, MCP cannot help. That is where A2A comes in.
A2A: How AI Agents Collaborate With Each Other
A2A solves a different problem entirely. In any serious enterprise deployment, agents are built by different teams, using different frameworks, running on different infrastructure. A customer service agent built with CrewAI needs to escalate a billing dispute to a finance agent built with LangGraph on another team’s server. A2A makes this possible without either team knowing anything about the other’s implementation.
The core principle: agents are opaque. An A2A client does not see the remote agent’s internal memory, proprietary logic, or tool implementations. It only sees what the remote agent chooses to advertise and return.
The A2A Workflow
A2A communication follows three steps:
1. Discovery: Every A2A-compatible agent publishes an Agent Card, a JSON metadata file describing its capabilities, authentication requirements, and service endpoints. Client agents fetch these cards to find the right collaborator. Think of it as a business card that says “I can process refunds, I accept OAuth 2.0, and here is my endpoint.”
2. Authentication: A2A supports OpenAPI-aligned security schemes including API keys, OAuth 2.0, and OpenID Connect. This matters for enterprise deployments where agents cross organizational boundaries.
3. Task Execution: The client agent sends a task to the remote agent. Tasks have unique IDs and progress through defined states: submitted, working, input-required, completed, or failed. Communication uses JSON-RPC 2.0 over HTTPS, with support for asynchronous webhooks and Server-Sent Events for streaming.
A2A handles long-running operations naturally. A remote agent processing a complex analysis might take hours. The protocol supports status polling, webhook callbacks, and streaming partial results so the client agent is never left waiting in the dark.
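The lifecycle above can be sketched as a simple client-side polling loop. The state names (submitted, working, input-required, completed, failed) come from the protocol; `fetch_status` and the task id are hypothetical stand-ins for real JSON-RPC calls to the remote agent:

```python
import time

# Terminal states: once reached, the task will not change again.
TERMINAL = {"completed", "failed"}

def poll_task(fetch_status, task_id, interval=0.0, max_polls=100):
    """Poll a remote A2A agent until the task reaches a terminal state.

    fetch_status is a placeholder for a real status call (tasks/get in
    the spec); in production you would prefer webhooks or SSE streaming
    over busy polling.
    """
    for _ in range(max_polls):
        state = fetch_status(task_id)
        if state in TERMINAL:
            return state
        if state == "input-required":
            # A real client would send the requested input back to the
            # remote agent instead of bailing out.
            raise RuntimeError("remote agent is waiting for more input")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish")

# Simulated remote agent: accepted, works twice, then finishes.
states = iter(["submitted", "working", "working", "completed"])
result = poll_task(lambda _tid: next(states), "task-42")
assert result == "completed"
```

For hour-long jobs, the same state machine applies; only the delivery mechanism changes from polling to webhook callbacks or streamed partial results.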
Who Backs A2A
Google launched A2A with over 50 technology partners including Atlassian, PayPal, Salesforce, SAP, ServiceNow, and consulting firms like Deloitte, McKinsey, and PwC. Support has since grown to over 100 companies, with AWS and Cisco joining as validators. Google donated A2A to the Linux Foundation in mid-2025, and the current version (0.3) adds gRPC support, security card signing, and an expanded Python SDK.
{
  "name": "RefundProcessor",
  "description": "Processes customer refund requests",
  "url": "https://agents.example.com/refund",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  },
  "authentication": {
    "schemes": ["OAuth2"]
  },
  "skills": [
    {
      "id": "process-refund",
      "name": "Process Refund",
      "description": "Evaluates and processes customer refund requests"
    }
  ]
}
Example Agent Card: a remote refund-processing agent advertising its capabilities.
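The discovery step then reduces to matching skills across fetched cards. A minimal sketch, assuming the cards have already been retrieved from each agent's well-known URL (`find_agent_for` is our own helper, not part of any SDK; the card shapes mirror the example above):

```python
def find_agent_for(skill_id, cards):
    """Return the first Agent Card advertising the given skill id, else None."""
    for card in cards:
        if any(s["id"] == skill_id for s in card.get("skills", [])):
            return card
    return None

# Two hypothetical cards a client agent might have fetched.
cards = [
    {
        "name": "RefundProcessor",
        "url": "https://agents.example.com/refund",
        "skills": [{"id": "process-refund", "name": "Process Refund"}],
    },
    {
        "name": "InvoiceBot",
        "url": "https://agents.example.com/invoice",
        "skills": [{"id": "issue-invoice", "name": "Issue Invoice"}],
    },
]

match = find_agent_for("process-refund", cards)
assert match["name"] == "RefundProcessor"
```

A production client would additionally check the card's authentication schemes and capabilities before sending a task to the advertised endpoint.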
MCP vs. A2A: Complementary, Not Competing
This is the most misunderstood point in the ecosystem. MCP and A2A are not alternatives. They solve different layers of the same stack:
| | MCP | A2A |
|---|---|---|
| What it connects | Agents to tools/data | Agents to agents |
| Analogy | USB-C port | Phone call between colleagues |
| Created by | Anthropic (Nov 2024) | Google (Apr 2025) |
| Governed by | Linux Foundation (AAIF) | Linux Foundation |
| Transport | JSON-RPC 2.0 (stdio, Streamable HTTP) | JSON-RPC 2.0 (HTTPS, gRPC, SSE) |
| Agent visibility | Full (tools are transparent) | Opaque (agent internals hidden) |
| Primary use case | Tool integration | Multi-agent collaboration |
A production multi-agent system typically uses both. Each individual agent uses MCP to access its tools (databases, APIs, file systems). When agents need to collaborate, they communicate through A2A. Auth0’s analysis puts it clearly: MCP is the agent’s toolbelt, A2A is the agent’s communication channel.
Commerce is one domain where both protocols converge. Google’s Universal Commerce Protocol (UCP) uses MCP-style tool integration and A2A-style agent coordination to let shopping agents research, compare, and buy products autonomously.
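The two-layer split can be sketched in a few lines. Everything here is hypothetical pseudostructure, not a real SDK: the MCP layer is the agent's private toolbelt, the A2A layer is the surface other agents see:

```python
import asyncio

class FakeMCP:
    """Stand-in for an MCP client session wired to real tool servers."""
    async def call_tool(self, name, args):
        # Pretend we queried an order database through an MCP server.
        return {"total": 42.0}

class BillingAgent:
    def __init__(self, mcp_session):
        # MCP layer: this agent's private tools (database, Jira, ...).
        self.mcp = mcp_session

    async def handle_a2a_task(self, task):
        # A2A layer: tasks arrive from other agents; internals stay opaque.
        if task["skill"] == "process-refund":
            # Tool access goes through MCP, invisible to the calling agent.
            record = await self.mcp.call_tool(
                "query_orders", {"order_id": task["input"]["order_id"]}
            )
            return {"state": "completed", "output": {"refunded": record["total"]}}
        return {"state": "failed", "output": {"reason": "unknown skill"}}

result = asyncio.run(BillingAgent(FakeMCP()).handle_a2a_task(
    {"skill": "process-refund", "input": {"order_id": "A1"}}
))
assert result["state"] == "completed"
```

The calling agent only ever sees the task result; which tools were consulted, and how, stays behind the A2A boundary.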
Practical Implementation: When to Use Which
Use MCP When:
- Your agent needs to read from or write to external systems (databases, APIs, SaaS tools)
- You want a standardized way to expose internal tools to AI agents
- You are building a single agent that interacts with multiple data sources
- You need your agent to work across different AI hosts (Claude, Cursor, custom apps)
Use A2A When:
- Multiple agents built by different teams need to collaborate
- You need agents to discover each other’s capabilities dynamically
- Tasks cross organizational or infrastructure boundaries
- You want to keep agent internals private while enabling collaboration
Use Both When:
- You are building a multi-agent system where each agent has its own tools AND agents need to coordinate
- Enterprise deployments where compliance requires both tool auditability (MCP) and secure cross-agent communication (A2A)
- Any production system covered by Gartner's prediction that 40% of enterprise applications will embed task-specific agents by the end of 2026
The Governance Question
Both protocols now live under the Linux Foundation. This is significant. Before the donations, MCP was Anthropic’s protocol and A2A was Google’s. Enterprise adoption of either carried vendor lock-in risk.
The Agentic AI Foundation (AAIF), formed in December 2025, provides neutral governance. Its platinum members (AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, OpenAI) represent every major AI player. Individual projects like MCP maintain full technical autonomy; the foundation handles strategic direction and funding.
For teams evaluating these protocols today, the governance question is settled. Neither protocol is at risk of being abandoned or captured by a single vendor. Both have the backing and the contributor base to be long-term standards.
The real question is not “which protocol should I use?” but “which layer of my agent architecture am I building right now?” If the answer is tool access, start with MCP. If the answer is agent collaboration, start with A2A. If your system is complex enough to need both, you are probably building something that matters.
Frequently Asked Questions
What is the difference between MCP and A2A?
MCP (Model Context Protocol) standardizes how AI agents connect to tools and data sources like databases, APIs, and file systems. A2A (Agent2Agent) standardizes how AI agents discover each other and collaborate on tasks. MCP is an agent’s toolbelt; A2A is an agent’s communication channel. They are complementary, not competing.
Who governs MCP and A2A?
Both protocols are governed by the Linux Foundation. MCP was donated by Anthropic to the Agentic AI Foundation (AAIF) in December 2025. A2A was donated by Google to the Linux Foundation in mid-2025. Platinum members include AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI.
Do I need both MCP and A2A for a multi-agent system?
For most production multi-agent systems, yes. Each individual agent uses MCP to access its tools and data sources. When agents need to collaborate across teams, frameworks, or infrastructure boundaries, they use A2A. Simple single-agent applications may only need MCP.
How many MCP servers exist in 2026?
As of early 2026, unofficial registries index over 17,000 MCP servers. The ecosystem grew from roughly 100,000 monthly downloads in November 2024 to over 8 million by April 2025. Enterprise adoption is accelerating, with projections that 75% of API gateway vendors will ship MCP features by end of 2026.
What is an A2A Agent Card?
An Agent Card is a JSON metadata file that every A2A-compatible agent publishes. It describes the agent’s capabilities, authentication requirements, service endpoints, and available skills. Client agents fetch Agent Cards to discover and evaluate potential collaborators before sending tasks.
We cover AI agent development from protocol selection to production deployment. Subscribe for practical guides every week.
