💼 $100 Challenge – Time Saver Edition | Community Voting

The weekly community challenge is live, and this week you decide the winner. One entry will take home $100 cash.

Share a workflow that saved you 30 minutes (or more) in the past month.

Maybe it automated a step you used to do manually. Maybe it standardised something your team repeats often. Whatever the impact, we want to see it.

How to enter:

  1. Reply to this thread within the next 24 hours
  2. Add a quick note on how it saved you time
  3. Add a screenshot or GIF if it helps show the shortcut in action

How voting works:
Once submissions close tomorrow, we’ll open up community voting.
The post with the most :heart: reactions wins the prize.

Prize: $100 cash
Deadline: Thursday at 10:00 am BST / 2:30 pm IST
Winner announced: Friday (after voting closes)

Head to the #weekly-challenge channel on Discord to talk through your favourites.

3 Likes

My Workflow: How Agent Mode Cut a Repetitive Status Process From an Hour to Minutes

Before this, the status check took far too long. I had to:

  • Open multiple Postman collections, re-run key requests and manually inspect failures.
  • Cross-check Jira tickets, copy links, and rewrite everything into a Confluence summary.
  • Look through Slack threads to recover missing context about test runs or skipped checks.

By the time everything was clean enough to share, nearly an hour had disappeared.

The workflow

Agent Mode shifted this entire routine into something far more streamlined:

  • A scheduled monitor runs on my main collections and tags each run with useful metadata: service, environment, branch, and Jira key.
  • A single “Snapshot” request calls an internal reporting endpoint that compiles the last 7 days of runs, failures, and linked tickets.
  • In Postman, I select the JSON response and ask Agent Mode:
    “Generate a brief: what changed, which tests failed and the impact per service.”
// Pre-request Script in the monitor collection (runs before each request)
pm.globals.set("service", "auth-api");  // or pm.collectionVariables.get("service")
pm.globals.set("environment", pm.environment.name);
pm.globals.set("branch", pm.collectionVariables.get("branch") || "main");

// Test Script (adds metadata to run results)
pm.test("Tag run metadata", function () {
    const metadata = {
        service: pm.globals.get("service"),
        env: pm.globals.get("environment"),
        branch: pm.globals.get("branch"),
        timestamp: new Date().toISOString()
    };
    pm.expect(metadata.service).to.be.a("string");
    console.log("Run tagged:", JSON.stringify(metadata, null, 2));
});
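The "Snapshot" endpoint is internal, but the aggregation it performs can be sketched as a plain function. The run shape below (service, status, timestamp, jiraKey) is an assumption for illustration, not the real internal schema:

```javascript
// Sketch: compile the last 7 days of monitor runs into a per-service summary.
// The `runs` item shape (service, status, timestamp, jiraKey) is hypothetical.
function buildSnapshot(runs, now = Date.now()) {
    const weekAgo = now - 7 * 24 * 60 * 60 * 1000;
    const recent = runs.filter(r => new Date(r.timestamp).getTime() >= weekAgo);
    const byService = {};
    for (const run of recent) {
        const s = byService[run.service] ||= { total: 0, failures: 0, tickets: new Set() };
        s.total += 1;
        if (run.status === "failed") s.failures += 1;
        if (run.jiraKey) s.tickets.add(run.jiraKey);
    }
    // Serialise the Sets so the result is plain JSON for Agent Mode to summarise.
    return Object.fromEntries(Object.entries(byService).map(
        ([name, s]) => [name, { ...s, tickets: [...s.tickets] }]
    ));
}
```

The JSON this produces is what the Agent Mode prompt above would summarise per service.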

Agent Mode returns a clean, structured summary ready for Confluence, with no rewriting or manual organizing.

The impact

The process that used to consume 50–60 minutes now takes just a few minutes.

Agent Mode analyzes my “Status Process” collection and generates a ready-to-share change report from the latest monitor runs, listing what changed and which services failed. A manual, 50–60 minute status task became a quick, Agent-powered summary.

10 Likes

@lunar-module-geolog1,

Do you have any videos, screenshots or scripts used for this workflow?

Do you use a Jira MCP to help cross check those tickets using Agent Mode?

1 Like

The workflow that saved me over 30 minutes this month:

I built a small but powerful Auto-Seeder workflow in Postman that resets my entire development environment with a single click.

Any time my local database is refreshed or I switch environments, I used to spend 20–30 minutes manually:

- generating a token
- creating a test user
- inserting sample data
- updating variables
- preparing mock payloads
- verifying environment health

Now I run one collection, and Postman handles everything automatically. In about 10 seconds, the Auto-Seeder:

- refreshes or regenerates my access token
- auto-creates a test user
- seeds sample records
- sets core environment variables (seed_id, seed_email, token_expiry)
- performs a quick health check so I know the environment is ready

Over the past month, this tiny workflow has saved me over two hours of repeated setup time. Here’s the core script I use inside the collection:

// Refresh the token if it is missing or expired
const expiry = Number(pm.environment.get("token_expiry") || 0);
if (!pm.environment.get("access_token") || expiry < Date.now()) {
    pm.sendRequest({
        url: pm.environment.get("auth_url"),
        method: "POST",
        body: {
            mode: "urlencoded",
            urlencoded: [
                { key: "client_id", value: pm.environment.get("client_id") },
                { key: "client_secret", value: pm.environment.get("client_secret") },
                { key: "grant_type", value: "client_credentials" }
            ]
        }
    }, (err, res) => {
        if (err || res.code !== 200) {
            console.error("Token refresh failed:", err || res.code);
            return;
        }
        const data = res.json();
        pm.environment.set("access_token", data.access_token);
        // Expire slightly before the typical 3600 s token lifetime
        pm.environment.set("token_expiry", Date.now() + 3500000);
    });
}

// Auto-generate test data ({{$guid}} is a built-in Postman dynamic variable)
pm.environment.set("seed_id", pm.variables.replaceIn("{{$guid}}"));
pm.environment.set("seed_email", `user_${Date.now()}@example.com`);
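The final health-check step isn't shown in the script above; its pass/fail decision can be sketched as a plain function (the check names and the `{ name, ok }` shape are assumptions for illustration):

```javascript
// Sketch: decide whether the seeded environment is ready for testing.
// Each check is { name, ok } — a hypothetical shape for illustration.
function environmentReady(checks) {
    const failed = checks.filter(c => !c.ok).map(c => c.name);
    return { ready: failed.length === 0, failed };
}
```

In the collection this would run in a final Test script, with the result of each `pm.sendRequest` health probe pushed into `checks`.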

7 Likes

@spacecraft-astrono14,

Have you also tried abstracting these scripts out to the Team Package Library so you have a central place to manage them?

Multi-Environment Runner

Before I set this up, switching environments, logging in, and fetching user data took 10–15 minutes per run, with frequent errors.

Now I run the collection once: Auth Login auto-stores the token, an Environment request pings itself, and Profile fetches with no manual steps. At the end, the collection prints a dynamic pass/fail summary in the console, as in the screenshot attached for reference.

For more detail, I've summarized everything using Agent Mode; you can check the attached screenshots.

42 Likes

Is that all happening before you perform the real testing for your endpoints? How are you using the summary reports?

What’s the workflow now, with this time saver collection in there too?

2 Likes

Yes, this runs before my real testing. I basically use it to make sure the environment is ready: it logs in, checks the basics, and confirms everything is working.

The summary I print to the console just helps me quickly see if anything is broken.

My workflow now is:
Run this setup → see the quick summary in the console → if all good, I start my actual testing. If not, I fix the issue first instead of finding it later.

I hope this helps. If you want a deeper dive, I've added screenshots of Agent Mode summarizing the flow.

27 Likes

Have you considered looking at using features like pm.execution.runRequest() to bring some of that set up into your main request, by referencing the requests in that Collection?

Hey @michaelderekjones :waving_hand:

I’d love to see some of the cool things that you’re using - I’m sure there’s loads of time saver goodies in there for the community to see :grinning_face_with_smiling_eyes:

3 Likes

Well, tbh it depends. I thought about using it, but for now I like keeping the setup separate, just to make sure my main requests remain clean and all the steps stay reusable across different environments. Also, when I run the small collection, it makes failures easy to spot before the real tests.

21 Likes

One‑Click Regression Workspace

My One-Click Regression Workflow That Saves Me 30+ Minutes a Month

I used to run several API requests manually whenever I needed to verify my service. Each run meant switching URLs for dev/staging/prod, adding keys by hand, and checking responses one-by-one. A full regression sweep took me 20–30 minutes.

  1. I organized my key requests into a single collection
    These are the checks I previously ran manually.

  2. I created Dev, Staging, and Production environments
    Now my baseUrl and apiKey values are stored in environments, so I switch between them instantly instead of retyping URLs.

  3. I updated shared headers using Bulk Edit, the actual time-saver
    Using Bulk Edit, I updated all headers once instead of editing multiple rows. This removed repetitive manual work and keeps all requests consistent.

  4. I created a Monitor
    The monitor uses my Staging environment and runs my full regression set on a schedule.

  5. Monitor run summary (the one-click regression check result)
    A monitor run shows me immediately whether everything still works.
What used to take half an hour now takes seconds. I select an environment, trigger the monitor (or let it run on schedule), and read one report. This saves me well over 30 minutes every month.
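The instant environment switching in step 2 comes down to variable substitution: Postman resolves `{{baseUrl}}`-style placeholders against the selected environment. Postman does this internally; the sketch below is only an illustration of the idea, with example environment values:

```javascript
// Sketch: resolve {{variable}} placeholders against an environment object,
// the way Postman substitutes environment values into a request URL.
function resolveTemplate(template, env) {
    return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
        name in env ? env[name] : match);
}

// Example environment (values are made up)
const staging = { baseUrl: "https://staging.example.com", apiKey: "stg-key" };
```

Switching environments then just swaps which object the same request template resolves against.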

4 Likes

Automated API Onboarding Pipeline Reducing a 2-Hour Process to Under 5 Minutes

Onboarding a new third-party API into our workspace previously required between 90 and 120 minutes. The workflow involved importing the OpenAPI specification, configuring authentication, creating development and staging environments, generating basic validation tests, preparing internal documentation, and establishing monitors. This repeated process caused delays across sprints and introduced configuration inconsistencies.

To eliminate this overhead, I built an automated API Onboarding Pipeline using Postman Flows, Agent Mode, and Monitors. The workflow completes the entire onboarding sequence in approximately 3–5 minutes with no manual intervention beyond providing the specification URL.


1. Workflow Overview

1. Trigger: Onboard API Request

The user provides an OpenAPI/Swagger specification URL.
A pre-request script imports the specification, parses paths and authorization schemes, and prepares the metadata required by downstream flow blocks.
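The parsing step can be sketched as a plain function that walks a (minimal) parsed OpenAPI 3 document and lists the operations the downstream flow blocks need. The `paths` and method fields follow the OpenAPI spec; the output shape is my assumption:

```javascript
// Sketch: list endpoint operations from a parsed OpenAPI 3 document.
const HTTP_METHODS = ["get", "post", "put", "patch", "delete", "head", "options"];

function listOperations(spec) {
    const ops = [];
    for (const [path, item] of Object.entries(spec.paths || {})) {
        for (const method of HTTP_METHODS) {
            if (item[method]) {
                ops.push({
                    method: method.toUpperCase(),
                    path,
                    operationId: item[method].operationId
                });
            }
        }
    }
    return ops;
}
```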

2. Automated Environment and Authentication Setup

Flows analyze the securitySchemes section of the specification and generate the appropriate authentication components:

  • OAuth2 token request (including refresh logic)
  • API key injection
  • Bearer token scaffolding

Development, staging, and production environments are created and populated automatically.
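The scheme-to-component mapping can be sketched as follows. The `type`/`scheme`/`in` fields are standard OpenAPI `securitySchemes`; the component labels are hypothetical names for the flow blocks:

```javascript
// Sketch: map OpenAPI securitySchemes entries to the auth component each needs.
// Component names are illustrative labels, not real Postman Flows block names.
function authComponentsFor(securitySchemes) {
    return Object.entries(securitySchemes).map(([name, scheme]) => {
        if (scheme.type === "oauth2") return { name, component: "oauth2-token-request" };
        if (scheme.type === "apiKey") return { name, component: "api-key-injection", in: scheme.in };
        if (scheme.type === "http" && scheme.scheme === "bearer") return { name, component: "bearer-token" };
        return { name, component: "manual-setup" };  // anything unrecognised falls back to manual
    });
}
```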

3. Test Generation

For each endpoint in the specification, tests are generated programmatically:

  • HTTP status validation
  • Schema validation using the referenced response schema
  • Rate-limit header checks
  • Failure logging into structured environment variables

This removes the need for manual boilerplate test construction.
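One way to generate that boilerplate is to emit a Postman test script per endpoint as a string. A minimal sketch: the emitted snippet uses the standard `pm.test` / `pm.response` API, but the generator itself and the rate-limit header name are illustrative assumptions:

```javascript
// Sketch: emit a Postman test script for one endpoint.
// The endpoint shape and the X-RateLimit-Remaining header are assumptions.
function generateTestScript(endpoint) {
    return [
        `pm.test("${endpoint.method} ${endpoint.path} returns ${endpoint.expectedStatus}", function () {`,
        `    pm.response.to.have.status(${endpoint.expectedStatus});`,
        `});`,
        `pm.test("rate-limit headers present", function () {`,
        `    pm.expect(pm.response.headers.has("X-RateLimit-Remaining")).to.be.true;`,
        `});`
    ].join("\n");
}
```

The generated string would be attached to each request's `event` array when the collection is built programmatically.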

4. Automated Documentation via Agent Mode

The fully-generated collection is passed to Agent Mode with a structured prompt.
Agent Mode produces Confluence-ready Markdown documentation that includes:

  • Endpoint tables
  • Parameters
  • Response structures
  • Example payloads
  • Observations about missing or ambiguous schema details

This replaces the manual documentation process previously performed by the team.

5. Monitor Creation and CI Integration

A post-processing script uses the Postman API to create a daily health monitor for the newly onboarded API.
It also exports the collection as a Newman-compatible JSON object and sends it to our CI pipeline endpoint, ensuring the new API is automatically included in regression testing.
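For reference, the monitor-creation call targets the public Postman API (`POST https://api.getpostman.com/monitors`, authenticated with an `X-Api-Key` header). A sketch of the request builder; the cron expression and names are examples, and the payload shape follows the Postman API docs:

```javascript
// Sketch: build the Postman API request that creates a daily health monitor.
// See POST https://api.getpostman.com/monitors in the Postman API reference.
function buildMonitorRequest(apiKey, collectionUid, environmentUid, name) {
    return {
        url: "https://api.getpostman.com/monitors",
        method: "POST",
        headers: { "X-Api-Key": apiKey, "Content-Type": "application/json" },
        body: JSON.stringify({
            monitor: {
                name,
                collection: collectionUid,
                environment: environmentUid,
                schedule: { cron: "0 8 * * *", timezone: "UTC" }  // daily at 08:00 UTC
            }
        })
    };
}
```

The post-processing script would send this with `pm.sendRequest` (or any HTTP client) once the collection UID is known.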

53 Likes

Not yet, I’m not using Jira in this workflow right now. I’m only running Agent Mode on Postman collections and an internal snapshot endpoint that shows all run results. But it could work the same way: if Jira endpoints were exposed through an MCP server, Agent Mode could pull matching issues automatically and include them in the same status report. That way, test results and Jira ticket context would show up together in one place.

As you originally mentioned it doing a Jira check, I thought you had something set up to do that.

It is possible to wire all that up in the platform and use Agent Mode to help.

Here’s a short demo of @taliakohan using a Jira integration in her workflow:

7 Likes

I’d love to see what your Postman Flow looks like and the different steps that you have created.

Which CI tool are you using - do you have an example of your pipeline file that takes in the JSON file?

:sparkles: My Time-Saver Workflow: One‑Click Environment Setup

Before: Switching environments used to take me 10–15 minutes each run — logging in, fetching tokens, setting variables, and checking if everything was ready. Errors often popped up later in testing, wasting even more time.

After: I built a pre‑flight Postman collection that:

  • Auto‑logs in and stores the token

  • Pings the environment to confirm connectivity

  • Fetches the user profile automatically

  • Prints a quick pass/fail summary in the console

Now, I know in seconds whether my environment is healthy before real testing begins. This saves me 30+ minutes every week and prevents late‑stage surprises.
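The pass/fail console summary at the end of a pre-flight run like this can be sketched as a small formatter. The step names and the `{ name, ok }` shape are examples; in the collection this would run in the last request's Test script:

```javascript
// Sketch: format the pre-flight pass/fail summary printed to the console.
function formatSummary(steps) {
    const lines = steps.map(s => `${s.ok ? "PASS" : "FAIL"}  ${s.name}`);
    const allOk = steps.every(s => s.ok);
    lines.push(allOk ? "Environment ready" : "Environment NOT ready");
    return lines.join("\n");
}
```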

5 Likes

Can you share what that working Collection looks like, so people can see how you have that set up and the scripts you’re using to perform those actions in your workflow?

Could you elaborate more on the flow that the images are showing, please?

They all look like they are from the Student Expert Course, I’m just trying to understand the link between that and a set of regression tests.