Best practices when using a data file for a test suite

Hi Everyone,

I'm looking for a little insight on whether to use a data file for our automated suite.

My Question is twofold:

We just finished our test collection for our API and it's running beautifully in CI/CD. We were thinking of adding more test cases with the help of a data file, but we've kind of hit a wall here:

Our collection currently consists of different tests against our API's different endpoints.
From what I have seen and read so far, a data file is only used for a single endpoint (say, a single POST call).

1. Is there a way I can use a single datafile to drive my whole test suite?

Say I have 2 API endpoints and 3 methods

My POST /action request bodies look like:

  {
    "created": "2020-02-28T10:57:16+01:00",
    "createdBy": "automatedtest 1",
    "occurrence": "2020-02-28T10:57:16+01:00",
    "location": "Location 1",
    "category": "A",
    "description": "Description 1"
  }

For my GET /action call I need 2 extra parameters:

  {
    "spaceid": "",
    "venueid": ""
  }

My other POST /image request body looks like:

  {
    "fileName": "image.png",
    "image": ""
  }

Do I need to make one big JSON file and add all these parameters to one object?

Say I wanted to test whether category is mapped correctly (POST /action), and this takes 26 iterations; do my other 2 tests have to run 26 times as well?
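To make question 1 concrete, here is a minimal sketch (plain JavaScript, runnable outside Postman) of one way a single combined data file could drive several endpoints: each request's pre-request script picks only the fields it needs out of the shared data row. The `pickFields` helper and the `target` column are illustrative names I made up, not Postman features.

```javascript
// Return an object containing only the listed fields of this iteration's row,
// so one big data-file object can feed requests with different bodies.
function pickFields(row, fields) {
  const out = {};
  for (const f of fields) {
    if (f in row) out[f] = row[f];
  }
  return out;
}

// A combined row mixing fields from all three request bodies above.
const row = {
  target: "post/action", // hypothetical marker for which request this row drives
  created: "2020-02-28T10:57:16+01:00",
  createdBy: "automatedtest 1",
  category: "A",
  spaceid: "s-1",
  venueid: "v-1"
};

// In a Postman pre-request script this would roughly be:
//   const body = pickFields(data, ["created", "createdBy", "category"]);
//   pm.variables.set("requestBody", JSON.stringify(body));
const postBody = pickFields(row, ["created", "createdBy", "category"]);
console.log(JSON.stringify(postBody));
// {"created":"2020-02-28T10:57:16+01:00","createdBy":"automatedtest 1","category":"A"}
```

Each request then references only its own slice, so the extra GET parameters don't leak into the POST bodies.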

2. Can I use a data file for negative testing?
I want to test error handling on nulls, spaces, or missing keys. So far we have written separate tests for each of these scenarios. How does one go about this with a data file, assuming we are populating a single POST request with data from that file?
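For what it's worth, here is a hedged sketch of how such negative rows could be generated rather than hand-written: a small script expands one base body into null / whitespace / missing-key variants and emits them as data-file rows. `buildNegativeRows`, `broken`, `mode`, and `expected` are hypothetical names for illustration, and the expected status of 400 is an assumption about the API.

```javascript
// Expand one base request body into negative-test rows: for each field,
// produce a null-value variant, a whitespace-only variant, and a variant
// with the key removed entirely. Each row is tagged with what was broken
// and the status the API is expected to return.
function buildNegativeRows(base, expectedStatus) {
  const rows = [];
  for (const key of Object.keys(base)) {
    // null value
    rows.push({ ...base, [key]: null, broken: key, mode: "null", expected: expectedStatus });
    // whitespace-only value
    rows.push({ ...base, [key]: " ", broken: key, mode: "space", expected: expectedStatus });
    // missing key: copy every field except this one
    const { [key]: _omit, ...rest } = base;
    rows.push({ ...rest, broken: key, mode: "missing", expected: expectedStatus });
  }
  return rows;
}

const base = { location: "Location 1", category: "A", description: "Description 1" };
const rows = buildNegativeRows(base, 400);
console.log(rows.length); // 3 fields x 3 modes = 9 rows
```

Writing `JSON.stringify(rows)` to disk gives a ready-made data file, and the test script can then assert against `data.expected` instead of hard-coding one scenario per request.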

Hope someone has some good ideas :slight_smile:

Cheers!

Hi, I recently posted a similar question and got no responses, so maybe we can figure this out together.

I am also automating a test suite for an API; mine has about 100 endpoints, but they reuse a lot of the parameters.

I believe using a data file is the way to go, but I am also a bit stuck as to how to proceed.

Right now, I formatted the data file as such:
[
{happy path iteration},
{negative iteration}
]

so like:
[
  {
    "parameter 1": "positive",
    "parameter 2": "positive",
    "parameter 3": "positive",
    "positive": true
  },
  {
    "parameter 1": "negative",
    "parameter 2": "negative",
    "parameter 3": "positive",
    "positive": false
  }
]

This way endpoint 1, which only uses parameter 1, takes the good value in the positive iteration, and endpoint 2, which uses parameters 2 and 3, does likewise. Then in the second iteration I make them fail.

Then in the Tests section I use the "positive" parameter to split the expected results, e.g.:

if (data.positive) {
  pm.test("returns 200", () => pm.response.to.have.status(200));
} else {
  pm.test("does not return 200", () => pm.expect(pm.response.code).to.not.eql(200));
}

This works great for a smoke test, where I am testing 2 scenarios, but I don't see how to scale it nicely. If I want to test the second endpoint negatively twice (once for each missing parameter), then the first endpoint, which only has one real negative use case, will also be tested negatively twice, due to the lack of dynamic iterations.

i.e if my data file looks like:
[
{happy case},
{negative case 1},
{negative case 2}
]

it seems like data files were meant to be used once per endpoint. I could run the whole test suite through the command line and inject a separate data file for each endpoint, but that really seems like A LOT of work, and it makes the collection runner useless.

Another theoretical solution would be to inject the data in the pre-request script, setting a temp environment variable with all the cases, then cycling through the cases in the Tests section using setNextRequest.

so the data file would look like:
[
{
“positive”:true
“parameter 1”:[case 1],
“parameter 2”:[case 1, case 2]
“parameter 3”:[case 1, case 2, case 3]
},
{
“positive”:false
“parameter 1”:[case 1,…],
“parameter 2”:[case 1, case 2,…]
“parameter 3”:[case 1, case 2, case 3,…]
}
]

and in the pre-request script you would say:
pm.environment.set("all_test_cases", JSON.stringify(data["parameter 1"]));
(environment variables only store strings, hence the JSON.stringify)

and use
var currentcase = all_test_cases.shift(); (also writing currentcase and the shortened array back to environment variables, since shift() only changes the local copy)

and then in Tests:
if (all_test_cases.length > 0) {
  postman.setNextRequest("same request name");
}
// else: don't call setNextRequest at all; passing null stops the entire run
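To sanity-check that cycling logic outside Postman, here is a plain-JavaScript simulation; `env`, `set`, and `get` are stand-ins for pm.environment (which stores values as strings, hence the JSON.stringify / JSON.parse round trip), and pushing to `order` stands in for actually re-running the request:

```javascript
// Minimal stand-in for pm.environment: values are persisted as strings.
const env = {};
const set = (k, v) => { env[k] = typeof v === "string" ? v : JSON.stringify(v); };
const get = k => env[k];

// Pre-request: stash all cases for this parameter (as in the data file above).
set("all_test_cases", ["case 1", "case 2", "case 3"]);

const order = [];
// Tests section, simulated: while cases remain, shift the current one,
// "run" the request, and persist the remainder for the next pass
// (in Postman: postman.setNextRequest("same request name") while cases remain).
while (JSON.parse(get("all_test_cases")).length > 0) {
  const cases = JSON.parse(get("all_test_cases"));
  const current = cases.shift();
  order.push(current);          // here the request would actually execute
  set("all_test_cases", cases); // write the shortened array back
}
console.log(order.join(", ")); // case 1, case 2, case 3
```

The key detail the simulation surfaces is that the shortened array must be written back to the environment every pass; otherwise each rerun re-reads the full list and loops forever.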

Yes, I believe the third option is the way to go; it keeps the code and file structure the cleanest. Lemme know what you think and if this helps!
-Euge

@JJ_k

I have tested the pre-request data injection method and it works like a charm.

One thing: if you want to be able to run the request manually as well, add an if statement to the pre-request script that checks typeof data !== "undefined" and only runs the injection when that is true; otherwise it will throw errors when testing manually.
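That guard can be sketched as a small helper; `resolveCases` is a made-up name, and the point is only the `typeof data !== "undefined"` check, since the `data` variable exists only during collection runner / newman runs:

```javascript
// Use the data-file cases when the run injects them; otherwise fall back
// to a default set so the request also works when sent manually.
function resolveCases(maybeData, fallback) {
  if (typeof maybeData !== "undefined" && maybeData.parameter_tests) {
    return maybeData.parameter_tests;
  }
  return fallback;
}

// Manual run: no data variable, so the fallback is used.
console.log(resolveCases(undefined, ["manual"]).join(",")); // manual
// Collection run: the injected row wins.
console.log(resolveCases({ parameter_tests: ["1", "2"] }, []).join(",")); // 1,2
```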

-Euge

with data file:

[
    {
        "usernames":["user"],
        "parameter_tests":["1","2","3"],
        "expected":true
    },
    {
        "usernames":["","testing"],
        "parameter_tests":["","not","this","issa test"],
        "expected":false
    }
]

the failed tests in the second iteration are because, even with nonsensical parameters, the API returns 200 :smiley:

just gonna post my prereq and tests as pics because that's how I learn best

[screenshot: pre-request script]

[screenshot: Tests section]

cheers


I would recommend that you both create a collection using a mock API (such as httpbin.org) to represent your needs. It is quite hard to fully understand how your API works and what your needs are.

I recently created a tutorial called the "Postman Router". This is a concept I invented that tries to address two issues:

  • creating multiple workflows in a single data file
  • optionally attach data to each request

I would be more than happy to better understand your needs and create a possible solution.