Beyond the Basics: Tips and Use Cases for the Postman API MCP Server

The Postman API MCP server lets AI agents in tools like Cursor, Claude, and VSCode manage your Postman resources (like workspaces, collections, and Spec Hub specifications) by turning natural language commands into API workflows. MCP helps you build agents and complex workflows on top of LLMs by giving them the tools and context that MCP servers provide.

If you’ve read my previous post on getting started with the Postman API MCP server, you already know how powerful it can be. Now, let’s take it further. In this post, I’ll share tips and use cases that will help you take full advantage of the MCP server in your own projects.

Tips to optimize working with the Postman API MCP server

Specify the MCP server to interact with Postman resources. Some LLMs try to interact with Postman using curl or the Postman CLI instead. To prevent this, it’s good practice to state clearly at the start of your prompt: “When interacting with Postman resources, use the Postman MCP server you have access to.”

Carefully review and confirm operations. When you’re performing potentially destructive operations, like updating or deleting resources, always validate the proposed changes before you accept them.

Pass resource IDs to reduce API calls. For example, to operate on a specific workspace, get the workspace’s ID and state it clearly in your prompt. Otherwise, the LLM has to fetch all your workspaces and then select the one you want to work with, resulting in additional API requests. The sketch below illustrates the difference.
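To see why this matters, here’s a rough Python sketch of the two call patterns against the Postman API, which the MCP server’s tools wrap. The `POSTMAN_API_KEY` handling and function names are illustrative, not part of the MCP server itself:

```python
import os

import requests

BASE_URL = "https://api.getpostman.com"
HEADERS = {"X-API-Key": os.environ["POSTMAN_API_KEY"]}


def find_workspace_by_name(name: str):
    # Without an ID, every workspace has to be fetched and filtered client-side.
    resp = requests.get(f"{BASE_URL}/workspaces", headers=HEADERS)
    resp.raise_for_status()
    return next((w for w in resp.json()["workspaces"] if w["name"] == name), None)


def get_workspace_by_id(workspace_id: str):
    # With an ID, a single request returns exactly the workspace we need.
    resp = requests.get(f"{BASE_URL}/workspaces/{workspace_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["workspace"]
```

Passing the ID up front saves the list-and-filter round trip, and on large teams it also avoids pulling a long workspace list into the model’s context.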

Do more with the Postman API MCP server

With the Postman API MCP server, you empower AI agents to manage your Postman resources for you. Simply request complex workflows or API interactions, and let the AI take care of the heavy lifting. There are endless opportunities to explore!

Practical applications of the Postman API MCP server

I’ll cover a few practical use case examples below. For each of these, I’ve used VSCode’s GitHub Copilot.

Manage environment variables

In this use case, we want to update our environment variable’s value.

Prompt

Change the value of my Postman environment variable “baseUrl” in the “Local” environment to use “http://localhost:8001”.

Results

After passing our prompt, the LLM finds the “Local” environment and updates the “baseUrl” variable’s value to “http://localhost:8001”.
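Under the hood, this maps onto Postman’s environment API. Here’s a rough Python sketch of the equivalent direct calls; the environment ID placeholder and the helper function are illustrative, and the MCP server issues these requests for you:

```python
import os

import requests

BASE_URL = "https://api.getpostman.com"
HEADERS = {"X-API-Key": os.environ["POSTMAN_API_KEY"]}


def update_environment_variable(environment_id: str, key: str, value: str) -> None:
    # Fetch the current environment so the update preserves its other variables.
    resp = requests.get(f"{BASE_URL}/environments/{environment_id}", headers=HEADERS)
    resp.raise_for_status()
    environment = resp.json()["environment"]

    for variable in environment["values"]:
        if variable["key"] == key:
            variable["value"] = value

    # PUT replaces the environment definition, so send the full variable list back.
    resp = requests.put(
        f"{BASE_URL}/environments/{environment_id}",
        headers=HEADERS,
        json={"environment": {"name": environment["name"], "values": environment["values"]}},
    )
    resp.raise_for_status()


update_environment_variable("<your-environment-id>", "baseUrl", "http://localhost:8001")
```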

Create an OpenAPI definition and sync the spec with Postman

We’re working on a Django REST Framework API (backend) and we want to keep it in sync with Postman.

Prompt

This repository contains a Django REST Framework API. We’ve implemented a CRUD API for a resource called “Customer”. We already have a Postman workspace called “Customers”. I want you to:

  • Infer an OpenAPI definition for my API (Customers CRUD), based on the model fields.
  • Retrieve my workspace “Customers” ID. Use the Postman API MCP server tools.
  • Create a Postman Spec in the existing “Customers” workspace.
  • Create the collection from the spec.

Results

The LLM created the API specification and its related collection in the “Customers” workspace.
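For context, here’s a minimal sketch of the kind of model and serializer the agent reads when inferring the spec. The field names are illustrative; your “Customer” model will differ:

```python
# models.py -- an illustrative Customer model (your fields will differ).
from django.db import models


class Customer(models.Model):
    name = models.CharField(max_length=255)
    email = models.EmailField(unique=True)
    created_at = models.DateTimeField(auto_now_add=True)


# serializers.py -- the serializer the agent reads to infer request/response schemas.
from rest_framework import serializers


class CustomerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Customer
        fields = ["id", "name", "email", "created_at"]
```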

Update a collection’s documentation

In this case, we’ve updated our Postman Collection and we want to update its documentation to match the changes we’ve made.

Prompt

Update my collection “Customer API Collection” information to:

“CRUD operations on the customers table. These endpoints require authentication and permissions on the Customer table.”

Results

In this instance, the LLM used the PATCH collection method, which is the correct choice. It listed all the collections, found the specific one by name, and updated its description.
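For reference, the direct equivalent of that final PATCH request looks roughly like this. The collection ID is a placeholder, and the MCP server makes this call on your behalf:

```python
import os

import requests

# Patch only the collection's metadata, leaving its requests and folders untouched.
resp = requests.patch(
    "https://api.getpostman.com/collections/<your-collection-id>",
    headers={"X-API-Key": os.environ["POSTMAN_API_KEY"]},
    json={
        "collection": {
            "info": {
                "description": (
                    "CRUD operations on the customers table. These endpoints "
                    "require authentication and permissions on the Customer table."
                )
            }
        }
    },
)
resp.raise_for_status()
```

This is also why PATCH is the right choice here: a PUT would replace the whole collection just to change one field of its documentation.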

Perform a backend change based on Postman Spec changes

We’re working in a Django REST Framework app. Since we’re an API design-first company, we make changes in Postman’s Spec Hub first, and the backend developers then use the updated spec to implement them.

Prompt

I’ve updated the Postman Spec called “Customer API” and I’ve added some new fields to the Customer resources. Please analyze the changes in the spec (retrieve it from Postman using the Postman API MCP server you have access to) and reflect them in the Django code (model and serializer).

Results

The LLM performs several requests: it retrieves the workspace, lists its specs, and fetches the specific spec. After analyzing the spec and the existing model, the LLM makes the appropriate changes in our code.
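As an illustration, if the updated spec added phone and is_active fields to the Customer schema (hypothetical field names), the agent’s edits to the files sketched earlier would look roughly like this:

```python
# models.py -- new fields added to match the updated spec (hypothetical names).
from django.db import models


class Customer(models.Model):
    name = models.CharField(max_length=255)
    email = models.EmailField(unique=True)
    created_at = models.DateTimeField(auto_now_add=True)
    phone = models.CharField(max_length=32, blank=True)  # new field from the spec
    is_active = models.BooleanField(default=True)  # new field from the spec


# serializers.py -- the serializer is extended so the API exposes the new fields.
from rest_framework import serializers


class CustomerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Customer
        fields = ["id", "name", "email", "created_at", "phone", "is_active"]
```

Note that a schema migration would still need to be generated and applied afterward (`python manage.py makemigrations` and `python manage.py migrate`).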
