Here’s what we automated so you don’t have to

If you’ve ever tried to build prompts from live API data, you know how tedious it can get. Extracting the right fields, formatting the prompt, and figuring out what details the model needs all add up to a lot of work.

This flow takes care of all of it.

It pulls metadata from the Postman API using GET /collections/:id, extracts fields like method, path, and parameters, then compiles a well-structured system prompt using TypeScript inside a Flow Module. You provide the user question, and the flow sends everything to the model.

It’s modular and forkable, so you can use it as-is or plug it into other workflows.

You can try the full flow in the public workspace. Drop a comment if you have ideas for improving it or using it in your own setup.

Here’s how it works:

The flow has two inputs:

  • The UID of the collection to get information about.
  • A question about the collection’s API.
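
As a rough sketch, the two inputs map onto a shape like this (the field names are illustrative, not the flow’s actual labels):

```typescript
// Hypothetical shape of the flow's two inputs (names are illustrative).
interface FlowInputs {
  collectionUid: string; // e.g. "12345678-abcd-ef01-2345-6789abcdef01"
  question: string;      // e.g. "How do I create a new order?"
}
```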

The prompt-building logic is isolated in a Flow Module block. This block runs a separate flow that can be reused in other flows. Think of it as a function that receives inputs and returns outputs.
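
Conceptually, the module’s contract boils down to a single function type (the name is ours, not the block’s):

```typescript
// The Flow Module acts like a function: collection UID in, system prompt out.
type PromptModule = (collectionUid: string) => Promise<string>;
```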

We’ll use the Postman API’s Get a collection endpoint to fetch the collection’s information:

We’ll also use an environment that contains a Postman API key, since the Postman API requires authentication for this GET request.
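
Outside of Flows, the equivalent request is a GET to api.getpostman.com with the key passed in the X-Api-Key header. A minimal sketch, assuming the key lives in an environment variable:

```typescript
// Fetch a collection from the Postman API.
// In Flows the key comes from the selected environment; here we read an env var.
async function fetchCollection(uid: string): Promise<any> {
  const res = await fetch(`https://api.getpostman.com/collections/${uid}`, {
    headers: { "X-Api-Key": process.env.POSTMAN_API_KEY! },
  });
  if (!res.ok) throw new Error(`Postman API returned ${res.status}`);
  const { collection } = await res.json();
  return collection;
}
```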

We’ll extract data from the collection response, like the collection’s name, description, and item information (such as folders or requests):
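
In TypeScript terms, the fields we care about look roughly like this simplified slice of the collection format; the real schema has more fields and looser types:

```typescript
// Simplified slice of the Postman Collection Format v2.1 (not exhaustive).
interface CollectionItem {
  name: string;
  description?: string;
  request?: {
    method: string;
    description?: string;
    url: { path?: string[]; query?: { key: string; description?: string }[] };
    body?: { raw?: string };
  };
  item?: CollectionItem[]; // present when this item is a folder
}

interface Collection {
  info: { name: string; description?: string };
  item: CollectionItem[];
}
```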

We’ll process the collection response and extract text for each request containing the following (see the sketch after this list):

  • The request’s name
  • Description
  • Method
  • Path
  • Body
  • Query parameter details
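
Here’s a sketch of that traversal, assuming the simplified types above (the actual flow block may structure this differently):

```typescript
// Recursively walk folders and requests, producing one text block per request.
function describeRequests(items: CollectionItem[], lines: string[] = []): string[] {
  for (const item of items) {
    if (item.request) {
      const { method, description, url, body } = item.request;
      const path = "/" + (url.path?.join("/") ?? "");
      const query = (url.query ?? [])
        .map((q) => `  - ${q.key}: ${q.description ?? "no description"}`)
        .join("\n");
      lines.push(
        [
          `Request: ${item.name}`,
          description ? `Description: ${description}` : "",
          `Method: ${method}`,
          `Path: ${path}`,
          body?.raw ? `Body: ${body.raw}` : "",
          query ? `Query parameters:\n${query}` : "",
        ]
          .filter(Boolean)
          .join("\n")
      );
    }
    if (item.item) describeRequests(item.item, lines); // recurse into folders
  }
  return lines;
}
```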

All of this gives the LLM the context it needs: what can be done with the API, and how. We’ll use this data to compile the system prompt and return it:
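
Building on the sketches above, the compilation step might look like this (the prompt wording is illustrative):

```typescript
// Compile the system prompt from the collection metadata and request summaries.
function compileSystemPrompt(collection: Collection): string {
  const requests = describeRequests(collection.item).join("\n\n");
  return [
    `You are an assistant answering questions about the "${collection.info.name}" API.`,
    collection.info.description ?? "",
    "The API exposes the following requests:",
    requests,
  ]
    .filter(Boolean)
    .join("\n\n");
}
```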

In the main flow, we’ll build the final prompt by combining the system prompt returned by the module with the user question (the user prompt):
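
In Flows this is a block, but conceptually it’s simple string assembly (names are ours):

```typescript
// Merge the module's system prompt with the user's question into one final prompt.
function buildFinalPrompt(systemPrompt: string, question: string): string {
  return `${systemPrompt}\n\nUser question: ${question}`;
}
```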

Finally, we’ll pass the prompt to the AI and display its reply:
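
Flows handles this with its built-in AI block, so the sketch below is only an approximation, assuming an OpenAI-style chat-completions endpoint and an illustrative OPENAI_API_KEY variable:

```typescript
// Hypothetical stand-in for the flow's AI block: send the prompt, return the reply.
async function askModel(finalPrompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // illustrative
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // any chat model would do
      messages: [{ role: "user", content: finalPrompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```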