Data-driven testing in Postman

I would like some advice about best practices for data-driven testing with Postman.

I am writing a test suite for a company whose API has over 100 endpoints, and I am trying to figure out the best way to implement data-driven testing.

As I see it, I have a few options.

  1. I can duplicate the requests, hard-code the parameters, and write the tests to expect the appropriate status code etc. Pros: The user experience for the company’s developers is closer to what they are used to, and the tests are more readable. Cons: This blows up the structure of the Postman collection, and the collection can no longer be converted back into a Swagger/OpenAPI spec for documentation.
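To illustrate option 1: each duplicated request carries its own hard-coded assertion. This is a minimal sketch; the `pm` stub at the top exists only so the snippet runs in plain Node (inside Postman the real `pm` object is provided), and the request/endpoint is hypothetical.

```javascript
// Stub of Postman's `pm` object so this sketch runs outside Postman.
// The real `pm.test` / `pm.expect` API is richer; this mimics just enough.
const pm = {
  response: { code: 404 },          // pretend the server answered 404
  tests: {},
  test(name, fn) {
    try { fn(); this.tests[name] = true; }
    catch (e) { this.tests[name] = false; }
  },
  expect(actual) {
    return { to: { eql(expected) {
      if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
    } } };
  },
};

// Option 1: this copy of "GET /users/:id" was duplicated as the
// negative case, so its test simply asserts the status it was built for.
pm.test("Unknown user returns 404", () => {
  pm.expect(pm.response.code).to.eql(404);
});

console.log(pm.tests);
```

The readability benefit is clear: each request's script states exactly one expectation, with no data-lookup indirection.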

  2. I can use pre-request scripts to set environment or collection variables with positive and negative use cases, and then loop through the inputs in the tests using postman.setNextRequest(). Pros: The collection remains as is and can be exported as documentation. Cons: Since everything has to live in one test, readability is slightly lower. Furthermore, different environments might require different data sets, which undermines the usefulness of the first two options.
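A sketch of the option 2 loop, with `pm.collectionVariables` and `postman.setNextRequest` mocked so the control flow runs in plain Node. The request name, data cases, and variable names are all hypothetical; inside Postman you would split the two halves between the request's pre-request and test scripts.

```javascript
// --- Stubs (provided by Postman in a real run) ---
const store = {};
const pm = {
  collectionVariables: {
    get: (k) => store[k],
    set: (k, v) => { store[k] = v; },
  },
};
const postman = { setNextRequest: (name) => { postman.next = name; } };

// --- Pre-request script of "POST /users" ---
// Seed the data set on the first iteration; afterwards shift one case per run.
if (pm.collectionVariables.get("cases") === undefined) {
  pm.collectionVariables.set("cases", JSON.stringify([
    { body: { name: "Ada" }, expect: 201 },   // positive case
    { body: { name: "" },    expect: 400 },   // negative case
  ]));
}
const cases = JSON.parse(pm.collectionVariables.get("cases"));
const current = cases.shift();
pm.collectionVariables.set("current", JSON.stringify(current));
pm.collectionVariables.set("cases", JSON.stringify(cases));

// --- Test script of "POST /users" ---
// (assert the response against `current.expect` here, then decide where to go)
if (JSON.parse(pm.collectionVariables.get("cases")).length > 0) {
  postman.setNextRequest("POST /users");  // loop back to this same request
} else {
  postman.setNextRequest(null);           // data exhausted: stop the run
}
```

After the first pass, one case remains queued, so the runner would re-enter the same request.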

  3. I can use external JSON data files. This is great and works fine, but I don’t see how I can fit the full test data for an API test suite of over 100 requests into one JSON file. The only use case I see is having a separate JSON file for each call, but then how should I use the runner? Furthermore, can nested variables be resolved, i.e. {{data.inputs.input1}}? From my testing, it doesn’t seem like they can.
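On the nested-variable question: to my knowledge, {{...}} placeholders are resolved against flat variable names, so a nested path in a runner data file is not dereferenced automatically. One workaround is to flatten the nested object in a pre-request script. A sketch (the `pm` stub stands in for Postman's real object, and the data shape is hypothetical):

```javascript
// Stub: pretend the runner's data file contained
//   [{ "inputs": { "input1": "abc", "input2": "def" } }]
const store = {};
const pm = {
  iterationData: {
    get: (k) => ({ inputs: { input1: "abc", input2: "def" } })[k],
  },
  variables: { set: (k, v) => { store[k] = v; } },
};

// Flatten one level of nesting into plain local variables, so that a
// request can reference {{inputs.input1}} (a literal flat variable name).
const inputs = pm.iterationData.get("inputs");
for (const [key, value] of Object.entries(inputs)) {
  pm.variables.set(`inputs.${key}`, value);
}
```

Note that `{{inputs.input1}}` then works only because a variable literally named `inputs.input1` exists, not because Postman walked the object path.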

Thanks,
vovky

Hi,

I think it is good to think about multiple aspects but at the same time to leave some room for experimentation, trying things out and correcting them along the way. Here are some ideas:

  • Especially for larger projects, don’t mix testing with documentation. Sometimes it works to put everything in one collection, but if it doesn’t, then it is time to split things up without much regret.

  • 100 endpoints is quite a lot. Consider breaking this down into services/components or business functionality. Find something that is neither too small nor too big (how about 25-30 endpoints per collection?).

  • Involve the developers in the process and explain the options to them. Get some feedback on what will work best for them.

  • Think about long-term maintainability of the collections. If you are the only one who fully understands how the scripts and workflows work, that may not be good. Reduce the complexity, or make sure that almost everybody is just as skilled as you are. While it may not be cool, duplicating requests sometimes makes them easier to understand and maintain.

  • Postman can handle relatively large JSON files and you can structure them as you need. Just try one big file and see how it goes. If it does not go well, split it into multiple files.
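For the one-big-file approach mentioned above, a runner data file is simply a JSON array of per-iteration objects. One hypothetical way to keep many endpoints in a single file is to namespace the cases per endpoint, so each request’s scripts read only their own slice via pm.iterationData.get (all names below are made up for illustration):

```
[
  {
    "users.create": { "body": { "name": "Ada" }, "expect": 201 },
    "users.get":    { "id": 42, "expect": 200 }
  },
  {
    "users.create": { "body": { "name": "" }, "expect": 400 },
    "users.get":    { "id": 999999, "expect": 404 }
  }
]
```

Each iteration of the runner then drives one positive or negative case across every request in the collection.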


Hi @vdespa, thank you for your response. Indeed, maintainability by the API developers is a primary concern of mine; the test suite doesn’t help anyone if I have to fix the JSON/parameters for every API change.

If you have some time, I’m curious about what you think about my solution posted here:

It may be a little difficult for developers to maintain, I’ll grant that.