Now, the problem is that I have also tried without the setNextRequest line and got the same result… I would think that throwing an error should be enough to mark the iteration as failed. I also tried making the pm.test fail with pm.expect.fail(message), but got the same result…
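For clarity, this is a minimal sketch of the kind of test script being described (the test names and message are illustrative, not from my actual collection):

```javascript
// Tests tab of the request — ordinary assertion that fails on a 500
pm.test("Status code is 200", function () {
    pm.expect(pm.response.code).to.eql(200);
});

// Variant also tried: failing the assertion explicitly
pm.test("Force a failure", function () {
    pm.expect.fail("Backend returned an unexpected status");
});

// Stops the collection run after this request — but, as described,
// this still does not mark the iteration itself as failed
postman.setNextRequest(null);
```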
I genuinely don’t know what is classed as a failed iteration
Different things would fail inside that iteration, rather than the iteration itself failing.
A request, a script, or an assertion might fail, and those failures are shown in the report. Adding a pm.test function that fails will fail that assertion, but the iteration itself still ran and is counted as successful.
Where would you like to see this shown? What value is that going to offer to someone viewing the report?
The idea is that I’d like a very simple report, only showing how many iterations were successful. Let’s say I have 8 iterations in total (and around 80 scripts) and 5 of them failed (meaning one request in each of those iterations returned status 500, and I have a test which checks that the status is 200). In the report I’d like to see that the iteration success rate is 3/8. That’s what I’d expect.
What value is that going to offer to someone viewing the report?
I have a lot of data files and each has a lot of iterations, so I want an accurate summary of the iteration success rate per data file. I’m also testing the API periodically to detect any possible backend failure early. In my case it brings value to the project.
Different things would fail inside that iteration, rather than the iteration itself failing.
Back to this. Chatting a little bit with ChatGPT, I understand that those different things should make the iteration fail. Of course it’s just an AI model, not the truth, but it still made me curious.
When something fails within an iteration, that failure or error is marked against whatever it was that failed (request, script, assertion) and you can see that on the summary table.
The Iteration ran successfully from start to finish; items within that Iteration failed and are shown correctly on the Summary Table or against the individual requests.
In order to create a custom report showing what you would like to see, you would need to create something locally, like a new template for an HTML report, to display the data in the way that you’d like to see it.
Currently, you’re using my reporter, htmlextra, which displays the data in a certain way, but that’s not the only way it can be displayed. You can create and use a custom template to display the data in a way that works for you in your context. It’s all coming from the same data source.
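For reference, a minimal sketch of pointing htmlextra at a custom Handlebars template when driving Newman as a library (all file paths here are placeholders):

```javascript
const newman = require('newman');

newman.run({
    collection: './collection.json',    // placeholder path
    iterationData: './datafile.json',   // placeholder path
    reporters: ['htmlextra'],
    reporter: {
        htmlextra: {
            // Custom .hbs template controlling how the report is rendered
            template: './my-iteration-report.hbs',
            export: './report.html'
        }
    }
}, function (err) {
    if (err) { throw err; }
    console.log('Run complete, report written.');
});
```

The same template option is available on the CLI via --reporter-htmlextra-template.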
I genuinely don’t know what is classed as a failed iteration
This comment was based on not finding any place in the Newman codebase (maybe there is one) that would ever bump that failed-iteration number based on something failing within the iteration.
I decorate the iteration number in my HTML report with red/green to provide a visual clue about whether the iteration had failures. I don’t specifically have a failed 1/X type metric, though.
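If you need that 3/8 number today, one option is to derive it yourself from the run summary. A sketch, assuming each entry in summary.run.failures carries a cursor.iteration index pointing at the iteration it occurred in:

```javascript
const newman = require('newman');

newman.run({
    collection: './collection.json',    // placeholder path
    iterationData: './datafile.json'    // placeholder path
}, function (err, summary) {
    if (err) { throw err; }

    const total = summary.run.stats.iterations.total;

    // Distinct iteration indexes with at least one failed
    // request, script, or assertion
    const failedIterations = new Set(
        summary.run.failures.map(f => f.cursor.iteration)
    );

    const passed = total - failedIterations.size;
    console.log(`Iteration success rate: ${passed}/${total}`);
});
```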
So, this is the main question here… Is there any plan or initiative on Postman’s side to implement something related to this, such as increasing the failed iteration counter in certain scenarios? Thanks!