Performance Test Runner Ignoring Scripts

I have an API that is returning bad data but always returns a 200 response code, and we think it only happens under stress. When using the collection runner, we are able to capture the bad data using the scripts. We tried running the performance test runner, but it only considers non-200 status codes as failures and seems to ignore the scripts on the request. We are also writing the error to the Postman console, but no console logs are written during a performance test run.

Can you share (please sanitize any sensitive data) a sample picture of the collection runner output and some of the pre-request/post-response scripts that are being run? It could be an issue with how the tests are being written.

Post-response script

const responseJson = pm.response.json();

for (const rec of responseJson) {
    const accountNumber = rec.details.accountNumber;
    const paymentMethodReferenceId = rec.paymentMethodReferenceId;

    pm.test("pmrId=" + paymentMethodReferenceId, function () {
        try {
            pm.expect(accountNumber.length).to.be.lessThan(5); // set to 5 to fail every time
        } catch (e) {
            // log the failure to the console, then rethrow so the test still fails
            console.log("pmrId=" + paymentMethodReferenceId + " failed. size=" + accountNumber.length);
            throw e;
        }
    });
}
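
As a possible workaround while console output and script failures are not surfaced in performance runs, here is a minimal sketch that accumulates failures in a collection variable instead, so they can be inspected after the run. The variable name validationFailures is made up for illustration, and whether the performance runner persists collection variables across iterations is an assumption I have not verified.

// Sketch: record validation failures in a collection variable rather than the console.
// Assumes pm.collectionVariables is available and persists during the run (untested
// against the performance runner).
const responseJson = pm.response.json();
const failures = JSON.parse(pm.collectionVariables.get("validationFailures") || "[]");

for (const rec of responseJson) {
    const accountNumber = rec.details.accountNumber;
    const paymentMethodReferenceId = rec.paymentMethodReferenceId;

    // same check as the test above: lengths of 5 or more are treated as bad data
    if (accountNumber.length >= 5) {
        failures.push({ pmrId: paymentMethodReferenceId, size: accountNumber.length });
    }
}

pm.collectionVariables.set("validationFailures", JSON.stringify(failures));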

FUNCTIONAL TEST - EVERYTHING WORKS FINE!
Setup for a functional run with 2 iterations. The expectation is a 200 response with all tests failing.

As expected, all tests failed with the correct messages.

The console shows the failures as written in the test script.

PERFORMANCE TEST - EVERYTHING PASSES, BUT IT SHOULD FAIL

The performance test completed with no errors, and the console is empty.

I believe the reason the performance test is not failing is because of what the Postman documentation states: "Error rate - The percentage of requests that result in an error. Responses other than 2xx responses are considered errors."

I think you are correct about errors only being tracked for non-2xx responses. I have been digging deeper into the performance tests and find the documentation quite misleading. Consider the block of their documentation which leads the reader to believe post-response scripts (which are known for tests) are executed and impact the results. I understand that the post-response script is likely used to chain data to the next request, but if an error is thrown then it should report an error regardless of whether the HTTP status was 2xx (or track it differently).

You can add scripts and tests to packages in your team’s Package Library, and run the contents of packages from the Collection Runner. Learn how to add packages to the Package Library, and import packages into your pre-request and post-response scripts.

This may be a feature request: allow the performance test to keep tracking HTTP errors (non-2xx responses) but also surface the error count from post-response scripts.

Some folks will look at this request and wonder why you are testing the response for accuracy when you are testing the performance of your system. Truthfully, the performance piece is still at an early enough stage of its lifecycle that improvements are very much needed to make it an alternative to other tools like Artillery, Locust, JMeter, k6, or Goad.

Hey @buchananr2 :wave:t3:

Which section of the documentation is that? Could you also share the link?

It is in the documentation for the Performance Runner

I think a clarification on the way errors are tracked would help, either in this beginning section (linked) or further on in the Debugging section of the documentation, since they are only tracked based on non-2xx response codes.

I can see both sides: track only non-2xx responses as errors in performance testing, because you are not validating the API responses, but I also see where it could be a benefit for an API test to not only check throughput but also validate responses to ensure things are accurate.

Checking that responses are accurate under load is essential when you are testing APIs with authorization applied, because the authorization server (OPA/Permify/OpenFGA/LDAP) could be overwhelmed and return bad data in the authorization checks of an API, an issue that could be found through performance testing.

With that being said, the validation checks should be run after, or out of band of, the performance testing piece (save responses to a temp file and process them after wrap-up, though that would have limitations), as you would otherwise be sacrificing CPU to processing post-response scripts, which would diminish your throughput testing.
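
For the out-of-band approach, a rough sketch of the idea could be a small Node script using the newman package to replay the collection functionally after the performance run wraps up and count assertion failures. The collection file name and iteration count below are placeholders, and this is only a sketch, not something wired into the performance runner.

const newman = require('newman');

// Replay the collection with its post-response assertions after the load test,
// so validation does not steal CPU from the throughput measurement.
newman.run({
    collection: require('./stress-validation-collection.json'), // placeholder file name
    iterationCount: 10 // placeholder
}, function (err, summary) {
    if (err) { throw err; }
    const failures = summary.run.failures.length;
    console.log('Assertion failures: ' + failures);
    process.exit(failures > 0 ? 1 : 0);
});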

Sounds like great feedback for @malvika-chaudhary and her team.


The reason I want to validate the response data while running a performance test is that when the API is under stress, it returns inaccurate response data. Individual API calls, even several iterations run sequentially, return correct data.

Thanks @mustafa.motiwala and @buchananr2 for the detailed feedback and examples. I agree that this might be confusing, especially if you are using assertions in functional testing.
I have taken note of this feedback. If you want to talk more about the problems you have faced or your overall experience of using performance testing, please use this link: Calendly - Malvika Chaudhary

Regards,
Malvika