Postman CLI Failed assertions are not reported in the summary (part 2)

I’d like to follow up on this topic: Postman CLI Failed assertions are not reported in the summary - Ask the Experts and Postman Tips - Postman Community

Here’s some information to include when creating your question:

  • Descriptive Title: Postman CLI Failed assertions are not reported in the summary.
  • Introduce The Question: Is there any update on this? Do you need more info to answer the question?
  • Additional Data: I have the same scenario: the iteration is not marked as failed, even though the tests are failing and I even manually throw an error.
  • Platform Details: Newman v6.2.1, Windows 11

cc. @danny-dainton, @cruxto

Hey @rinczefi-personal

Welcome to the Postman Community! :postman:

You’re referring to the Postman CLI in the title but you’re listing the Newman version in the body. Which one are you using?

Can you provide more details about your own context, please?

Sure. I have a nested folder structure in my collection, like:

Collection
Folder 1:
  Request 1
  Request 2
  Request 3
Folder 2:
  Request 4
  Request 5
  Request 6
  Request 7
Folder 3:
  Request 8
  Request 9

I’m using data files; let’s say I have 2 objects in my data file = 2 iterations.
In my 5th request I added a post-response script :point_down:

pm.execution.setNextRequest(null); // Skip remaining requests in the iteration
throw new Error("Request 5 failed. Marking iteration as failed.");

This skips the rest of the steps :green_circle:
But it does not mark the iteration as failed :warning:

I’m running Newman with this command:

newman run "collections/myCollection.postman_collection.json" \
  --environment "environments/myEnvironment.postman_environment.json" \
  --iteration-data "data/myDf.json" \
  --delay-request 3000 \
  --reporters json-runstats,htmlextra,cli \
  --reporter-htmlextra-export report.html \
  --reporter-json-export report.json
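
(Side note: json-runstats and htmlextra are community reporters, so they need to be installed before the run; the package names below assume Newman's standard newman-reporter-<name> convention:)

npm install -g newman-reporter-json-runstats newman-reporter-htmlextra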

Hope that provided the necessary info. Let me know if you need anything else. Thanks!

What’s actually failed at that point, to fail the iteration?

You have stopped the Collection run; nothing has failed.

I don’t believe anything further will get executed once it hits the setNextRequest line.

Now, the problem is that I have also tried it without the setNextRequest line, and I got the same result… I think throwing an error should be enough to mark the iteration as failed. I also tried making a pm.test fail with pm.expect.fail(message), but got the same result…
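
For reference, that assertion-based variant looked roughly like this (a sketch; the test name is illustrative):

pm.test("Request 5 must succeed", function () {
  // Chai's expect.fail, exposed via pm.expect, fails the assertion outright
  pm.expect.fail("Request 5 failed. Marking iteration as failed.");
});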

From the original post you linked, I mentioned:

I genuinely don’t know what is classed as a failed iteration :thinking:

Different things would fail inside that iteration, rather than the iteration itself failing.

A request, a script or an assertion might fail, and those are shown in the report. Adding a pm.test function that fails will fail that assertion, but the iteration still ran and was successful.

Where would you like to see this shown? What value is that going to offer to someone viewing the report?

The idea is that I’d like a very simple report, only showing how many iterations have been successful. Let’s say I have 8 iterations in total (and around 80 scripts) and 5 of them failed, meaning one request in each failed with status 500 while I have a test which checks that the status is 200. In the report I’d like to see that the iteration success rate is 3/8. That’s what I’d expect.
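
A minimal sketch of that status check, as a standard Postman test:

pm.test("Status code is 200", function () {
  // Fails the assertion when the backend responds with 500,
  // yet the iteration itself is still counted as having run
  pm.response.to.have.status(200);
});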

What value is that going to offer to someone viewing the report?

I have a lot of data files and each has a lot of iterations, so I want an accurate summary of the iteration success rate per data file. And I’m testing the API periodically to detect any possible backend failure early. In my case it brings value to the project.

Different things would fail inside that iteration, rather than the iteration itself failing.

Back to this. Chatting a little bit with ChatGPT, I understand that those different things should make the iteration fail. Of course it’s just an AI model, not the truth, but it still made me curious.

I genuinely don’t know what is classed as a failed iteration :thinking:

Is there anybody who has any idea about this? Maybe whoever implemented it :thinking:

Thank you!

When something fails within an iteration, that failure or error is marked against whatever it was that failed (a request, a script, an assertion) and you can see that in the summary table.

The iteration ran successfully from start to finish; items within that iteration failed and are shown correctly in the summary table or against the individual requests.

In order to create a custom report that shows what you would like to see, you would need to create something locally, like a new template for an HTML report to display the data in the way that you’d like to see it.

Currently, you’re using my htmlextra reporter, which displays the data in a certain way, but that’s not the only way it can be displayed. You can create and use a custom template to present the data in a way that works for you in your context. It’s all coming from the same data source.
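
As a sketch, htmlextra can be pointed at a custom Handlebars template via its template option (the template path here is illustrative):

newman run "collections/myCollection.postman_collection.json" \
  --reporters htmlextra \
  --reporter-htmlextra-template "templates/iteration-summary.hbs" \
  --reporter-htmlextra-export report.html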


I genuinely don’t know what is classed as a failed iteration :thinking:

This comment was based on not finding any place in the Newman codebase (maybe there is one) that would ever bump that failed count based on something failing within the iteration.

I decorate the iteration number in my HTML report with red/green to provide a visual clue as to whether the iteration had failures. I don’t have a specific “failed 1 of X” type metric, though.

So, this is the main question here… Is there any plan or initiative on Postman’s side to implement something related to this? Increasing the failed iteration counter in certain scenarios? Thanks!

As you’re using Newman, you can add that as a feature request on the Newman repo.

If you migrate over to using the Postman CLI, you can raise a feature request on that repo.

Or in the short term, you could create a custom reporter for Newman and include that functionality.
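
For reference, a minimal sketch of such a custom reporter, assuming Newman's documented reporter event API (the package name and the choice of events to count are assumptions, not an official recipe):

// index.js of a hypothetical newman-reporter-iteration-summary package
// (Newman resolves custom reporters from packages named newman-reporter-<name>)
module.exports = function (newman, reporterOptions, collectionRunOptions) {
  const failedIterations = new Set();
  let totalIterations = 0;

  // Emitted when an iteration finishes
  newman.on('iteration', function (err, args) {
    totalIterations += 1;
    if (err && args && args.cursor) { failedIterations.add(args.cursor.iteration); }
  });

  // Emitted for every assertion; err is set when the assertion fails
  newman.on('assertion', function (err, args) {
    if (err && args && args.cursor) { failedIterations.add(args.cursor.iteration); }
  });

  // Emitted for uncaught script errors, e.g. a manual throw
  newman.on('exception', function (err, args) {
    if (args && args.cursor) { failedIterations.add(args.cursor.iteration); }
  });

  // Emitted once the whole run completes
  newman.on('done', function () {
    const passed = totalIterations - failedIterations.size;
    console.log('Iteration success rate: ' + passed + '/' + totalIterations);
  });
};

Installed as an npm package under that naming convention, it could then be enabled with --reporters iteration-summary alongside the existing reporters.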