I have a Postman collection for a service with two environments: legacy and migrated. My goal is to run the same collection against both environments using Newman and compare the outputs to ensure the migrated service behaves identically to the legacy one. I need to use data files with test cases for specific endpoints.
Question 1: Running specific requests with iteration data
Is there a way to tell Newman to run only specific requests from a collection when using data files (besides using pm.execution.skipRequest() inside the collection itself)?
The issue I’m facing: even if my iteration data file contains variables used by only one specific request, Newman still executes all requests in the collection on each iteration. This results in unnecessary executions and makes it difficult to test specific endpoints with their dedicated test data.
What I’ve tried:
- Using the `--folder` option (works for folders, but not for individual requests)
- Conditional skipping with `pm.execution.skipRequest()` (requires modifying the collection)
What I need:
- A CLI option or approach to run specific request(s) with their data files, without executing the entire collection
- Preferably without modifying the collection scripts themselves
Example scenario:
```shell
# I want something like this:
newman run collection.json \
  -e legacy-env.json \
  -d endpoint-a-testcases.csv \
  --request "Endpoint A Request"  # This option doesn't exist
```
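One workaround, if changing the runner (rather than the collection) is acceptable: drive Newman through its Node API (`newman.run`) and filter the collection's item tree down to the named requests before the run. The `filterCollection` helper below is a hypothetical sketch, assuming the Postman collection v2.1 format, where folders and requests both live in `item` arrays and folders are the items that themselves carry an `item` array:

```javascript
// Hypothetical helper (assumes Postman collection format v2.1):
// keep only requests whose names appear in `wantedNames`,
// recursing into folders and dropping folders that end up empty.
function filterCollection(collection, wantedNames) {
  const keep = (items) =>
    items
      .map((it) => (it.item ? { ...it, item: keep(it.item) } : it))
      .filter((it) => (it.item ? it.item.length > 0 : wantedNames.includes(it.name)));
  return { ...collection, item: keep(collection.item) };
}

// Minimal demo collection with two requests at the top level:
const demo = {
  info: { name: 'demo' },
  item: [
    { name: 'Endpoint A Request', request: { method: 'GET', url: 'https://example.com/a' } },
    { name: 'Endpoint B Request', request: { method: 'GET', url: 'https://example.com/b' } },
  ],
};

console.log(filterCollection(demo, ['Endpoint A Request']).item.map((it) => it.name));
// → [ 'Endpoint A Request' ]
```

The filtered object can then be passed to Newman directly, e.g. `newman.run({ collection: filterCollection(parsedCollection, ['Endpoint A Request']), environment: 'legacy-env.json', iterationData: 'endpoint-a-testcases.csv' }, callback)`, so each data file only drives the request it belongs to.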
Question 2: Automated tools for comparing Newman outputs
Besides writing custom scripts, are there any automated tools or Newman reporters that can compare outputs between two collection runs?
My use case:
- Run collection with `legacy-env.json` → save results
- Run collection with `migrated-env.json` → save results
- Compare responses (status codes, headers, body) between the two runs
- Generate a diff report
What I’m looking for:
- Built-in Newman reporters or plugins for response comparison
- Recommended tools/libraries for snapshot testing with Newman
- Best practices for regression testing between environments
I’ve looked at:
-
newman-reporter-htmlextra(great for individual run reports) -
newman-reporter-json(gives raw data, but requires custom parsing)
But I haven’t found a solution specifically designed for comparing multiple runs or snapshot testing.
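Absent a ready-made reporter, a small script over two `newman run ... -r json --reporter-json-export out.json` exports gets most of the way there. The sketch below assumes the report shape `run.executions[].{item.name, response.code, response.stream}` (with `stream` being a JSON-serialized Buffer); verify this against your Newman version. It compares status codes and bodies keyed by request name; headers could be added the same way. The `fakeExec` objects stand in for real exports so the demo is self-contained:

```javascript
// Collapse a newman-reporter-json export into { requestName: { code, body } }.
// Assumed shape: report.run.executions[].{ item.name, response.code, response.stream }
function summarize(report) {
  const byName = {};
  for (const exec of report.run.executions) {
    const res = exec.response || {};
    byName[exec.item.name] = {
      code: res.code,
      // `stream` is a Buffer serialized by JSON.stringify: { type: 'Buffer', data: [...] }
      body: res.stream ? Buffer.from(res.stream.data).toString('utf8') : '',
    };
  }
  return byName;
}

// Diff two summarized runs, reporting status and body mismatches per request.
function diffRuns(legacyReport, migratedReport) {
  const a = summarize(legacyReport);
  const b = summarize(migratedReport);
  const diffs = [];
  for (const name of Object.keys(a)) {
    if (!(name in b)) { diffs.push(`${name}: missing in migrated run`); continue; }
    if (a[name].code !== b[name].code) diffs.push(`${name}: status ${a[name].code} vs ${b[name].code}`);
    if (a[name].body !== b[name].body) diffs.push(`${name}: body differs`);
  }
  return diffs;
}

// Inline stand-ins for real report files:
const fakeExec = (name, code, body) => ({
  item: { name },
  response: { code, stream: { type: 'Buffer', data: Array.from(Buffer.from(body)) } },
});
const legacy = { run: { executions: [fakeExec('Endpoint A Request', 200, '{"id":1}')] } };
const migrated = { run: { executions: [fakeExec('Endpoint A Request', 200, '{"id":2}')] } };
console.log(diffRuns(legacy, migrated)); // → [ 'Endpoint A Request: body differs' ]
```

In real use, replace the inline objects with `JSON.parse(fs.readFileSync('legacy-run.json', 'utf8'))` and the migrated equivalent, and fail the pipeline when `diffRuns` returns a non-empty array.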
Additional context
This is for migration validation: we need to ensure the new (migrated) system produces responses identical to the old (legacy) system across hundreds of test cases. Any guidance on:
- Newman best practices for this scenario
- Existing tools or patterns
- Alternative approaches

would be greatly appreciated.
