MCP, or Model Context Protocol, is coming up in GitHub repos, YouTube videos, Discord discussions, and even conference talks. But if you’re like many of the devs we’ve met at events lately, you’re probably wondering: what is MCP, exactly, and why is it suddenly such a big deal?
What is MCP?
MCP stands for Model Context Protocol, an open standard created by Anthropic that “enables developers to build secure, two-way connections between their data sources and AI-powered tools.”
In other words, MCP is like USB-C for the AI world.
Before USB-C became the standard, you had to juggle HDMI, Thunderbolt, and Lightning cables along with various power adapters to stay connected. Every device locked you into a certain cable type. Now, a single USB-C cable lets you connect to everything from a laptop to a game controller.
Similarly, without MCP, developers are forced to build custom connections for every AI model and service they need to integrate with, a time-consuming process that locks them into specific vendors or models. With MCP, developers have a universal way to connect tools, resources, and AI models.
Why is everyone talking about MCP now?
Anthropic open-sourced MCP back in November 2024, but the tech world didn’t start buzzing about it until early 2025 when the CEOs of OpenAI and Google both brought the MCP conversation to X. When Google’s CEO tweets “to MCP or not to MCP?” and thousands of people reply—you know something big is happening.
Increased interest in MCP also goes hand in hand with the rise of agentic AI. Everyone is excited about building AI agents to perform actions ranging from coding to customer support, but giving those agents access to the systems they need to execute can be a challenge.
MCP serves as a connector—giving AI agents access to the data, tools, and resources the agents need to perform their tasks. For example, MCP can allow AI agents to review codebases and update Jira or review technical documentation to provide detailed support responses.
Now that MCP is backed by Anthropic, OpenAI, and Google, it’s no longer just a buzzword. For devs chasing smarter agents and cleaner integrations, it’s starting to look like the backbone of AI-native software.
How does MCP work?
MCP involves three components: the MCP host, MCP client, and MCP server.
- MCP hosts are LLM applications like Claude Desktop or Integrated Development Environments (IDEs) that initiate the connections.
- MCP clients maintain the connections between hosts and servers and make data requests.
- MCP servers handle requests from clients by running commands or returning data in a standardized format that the client can understand (sketched below).
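To make that “standardized format” concrete, here is a rough sketch of the kind of JSON-RPC 2.0 messages that travel between an MCP client and server, written out as Python dictionaries. The method name comes from the MCP specification; the `get_weather` tool and its arguments are purely illustrative.

```python
# A client asks a server to run a tool (JSON-RPC 2.0 request).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # MCP method for invoking a server-side tool
    "params": {
        "name": "get_weather",                    # illustrative tool name
        "arguments": {"city": "San Francisco"},   # illustrative arguments
    },
}

# The server replies with a result keyed to the same request id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "62°F and foggy in San Francisco"}
        ]
    },
}
```

Because every server speaks this same message format, any MCP-aware host can talk to any MCP server without custom glue code.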
MCP servers provide three types of capabilities
Hosts launch clients, and clients send requests to servers. From there, MCP servers typically provide three capabilities: tools, resources, and prompts (see the server sketch after this list).
- Tools are actions AI can perform with permission, such as API calls or pulling and analyzing data from an app.
- Resources are files and data that the server makes available to the AI to provide context. For example, a resource could be a database of sales data that is then analyzed to create a sales report.
- Prompts are reusable instruction templates that tell LLMs how to interact or behave, such as a template that asks for a code review or an expert security analysis.
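Here is a minimal sketch of a server that exposes all three capabilities, using the FastMCP helper from the official MCP Python SDK. The server name, tool, resource, and prompt below are illustrative and tie into the packing example that follows.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("packing-assistant")

# Tool: an action the AI can invoke with permission, e.g. fetching weather.
@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    # A real server would call a weather API here.
    return f"Current weather in {city}: 62°F and foggy"

# Resource: data the server makes available to give the model context.
@mcp.resource("closet://items")
def closet_items() -> str:
    """List the clothing items available to pack."""
    return "rain jacket, wool sweater, sneakers, sun hat"

# Prompt: a reusable instruction template the host can surface to the LLM.
@mcp.prompt()
def packing_list(city: str) -> str:
    return f"Review my closet items and suggest what to pack for {city}."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Running this script starts the server over stdio, ready for a host like Claude Desktop to connect a client to it.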
Let’s imagine what this might look like in practice. Say you ask your AI assistant to check the weather in San Francisco. The MCP server exposes a weather tool the assistant can call to pull current conditions. Next, you ask it to review a resource, a list of all your clothing items, and recommend outfits to pack for San Francisco. Over time, you could use prompts to guide the LLM to incorporate new clothing items and make more informed recommendations based on how you rated previous packing lists.
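On the client side, that flow might look roughly like the sketch below, which uses the MCP Python SDK to launch the server from the previous example over stdio and call its tool and resource. The file name packing_server.py is assumed.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the (hypothetical) packing server from the sketch above as a subprocess.
    server = StdioServerParameters(command="python", args=["packing_server.py"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Tool call: fetch the weather for San Francisco.
            weather = await session.call_tool(
                "get_weather", arguments={"city": "San Francisco"}
            )

            # Resource read: pull the closet contents for context.
            closet = await session.read_resource("closet://items")

            print(weather, closet)

asyncio.run(main())
```

In a real agent, the host would hand these results to the LLM as context rather than printing them.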
MCP use case examples
MCP is still relatively new, so there are many potential use cases, particularly for developers who are interested in building MCP servers or building on top of existing ones from public directories. Here are some MCP examples to consider as you get started.
Automating workflows and tool-to-tool communication
Tasks that used to require manual interaction can now be handled via MCP, which enables AI agents to perform actions and maintain context over time. For example, a product manager working across Jira, Figma, and Slack could rely on MCP to translate design requests in Slack into Jira tickets or even authorize the AI agent to make design changes directly in Figma.
Code review and development
Developers can use MCP to give agents access to their environment so they can perform code reviews, identify errors, and help with debugging. For instance, you could use the GitHub MCP integration to manage branches, identify and triage bugs, and review pull requests.
Non-technical business tasks
MCP’s contextual awareness opens up many possibilities beyond coding and technical work. MCP servers can help with everything from writing about a niche topic to providing product-specific customer support. For example, you could grant an MCP server access to a product’s technical documentation and prompt it to respond to customer help requests.
Benefits of MCP
Much like standardized USB-C connections, MCP offers a lot of convenience. But the surge in MCP use goes beyond convenience alone.
- Simplified integration and development: Write code once and integrate with many tools, with no need to write custom code for every new integration
- No vendor lock-in: MCP creates a universal standard for AI models, letting you reuse the same tools and connections across multiple LLMs
- Cost savings: Less custom code leads to less development time, which means less money spent on repetitive work
- Scalability: MCP works with growing ecosystems because tools and data sources communicate in a consistent, predictable way, making it easier to scale operations
- Real-time communication: Continuous two-way connections allow AI models to retrieve data and trigger actions dynamically and react to changes in the system in real time
- Ongoing workflow improvements: MCP provides a way for LLMs to continuously access external data sources while the host application builds contextual awareness, so LLMs can move towards more autonomous workflows
- Streamlined security: A common protocol makes it easier to manage governance best practices and access control
Getting started with MCP
You can build your first MCP server in 5 minutes. Seriously, Sterling can show you how to do it. Now that you’ve got the MCP basics down, skip to 2:04 to get right into building. If you want a head start, explore Postman’s MCP catalogue to access hundreds of public MCP servers.