Setting best practices for a team

Hello,

My company started using Postman several years ago and the organization and practices grew organically. I am now taking on the responsibility for cleaning up our account and establishing standards. I have watched and read several tutorials and done some experimenting in Postman. I am hoping to drill down into some specifics around best practices. I know a lot of this is subjective and depends on context, but I’m hoping for a gut check on some of these concepts from more experienced users.

For additional context, my company has a paid account, but I do not think it will be springing for the Enterprise level any time soon, so some options will not be available to us.

1. Environment variables

We have several servers (dev, qa, etc.), and because of this we have several different Postman environments (ourapi_dev, ourapi_qa, etc.) where the only difference is the URL for the API. On top of this, people tend to copy an existing environment to use for their own personal testing. This creates a lot of duplication.

My feeling is that we should minimize the number of variables and move almost all of them to collection-level variables. If we go with a minimalist approach, I think we can define a single variable in the environment for host type (‘dev’, ‘qa’, etc.) and give the collection a pre-request script that sets the API URL based on that host type.
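
A rough sketch of what I’m imagining (the host names, URL patterns, and variable names are placeholders):

```js
// Collection-level pre-request script (sketch).
// The environment defines a single variable: hostType ('dev', 'qa', ...).
const hostType = pm.environment.get("hostType");

// Map each host type to its API base URL (placeholder URLs).
const baseUrls = {
    dev: "https://dev.ourapi.example.com",
    qa: "https://qa.ourapi.example.com",
    prod: "https://ourapi.example.com"
};

if (!baseUrls[hostType]) {
    throw new Error("Unknown hostType: " + hostType);
}

// Requests in the collection can then reference {{baseUrl}} in their URLs.
pm.collectionVariables.set("baseUrl", baseUrls[hostType]);
```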

Does this seem like an oversimplification?

2. Collections for clients

We only have a few APIs but have several clients for each API. When our Postman collections were established, the practice was to set up collections by client: ClientA had its own collection, ClientB had its own collection, etc. However, the clients use the same API endpoints. (This was done to evaluate different workflows in the clients.) As the API has evolved, maybe one collection would be updated but not the other. We don’t have automated testing yet, so things easily get out of sync.

I assume it is a better practice to create a collection for the API itself. But how do you then build client-specific workflows? How do you keep things in sync? I know you can fork from a collection and watch it but if in the forked collection you are changing the order and folder structure, I don’t see how you can track the state of the original request.

3. Test types

I know there are several types of tests that can be run on a collection. If I have a request that I want to run in multiple test scenarios, what is the best practice? If I duplicate a request within my collection into different folders, I lose my single source of truth.

Hi @cpaye. Great questions!

Environment Variables: Having different environments for the different servers (dev, qa) seems like the right approach, but this can be tailored to meet the needs of your organization as well. Environment variables have initial values and current values. Initial values, when set, are shared with the rest of your team and everyone who can view that environment through Postman Cloud. Current values, however, are local to your machine and are not shared or synced. Current values always take precedence over initial values when you run a collection. This means that in a team, multiple users can have varying current values for the same environment and update theirs individually (since they’re local to each person), but the initial value is shared across the team and, once updated, is updated everywhere else. Having your team members modify the current values of shared environments instead of the initial values may be helpful here.
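
This also applies to scripts: anything a script writes to an environment variable goes into the current value, so it stays local to whoever runs it. A small illustration (the variable name is just a placeholder):

```js
// In a pre-request or test script: pm.environment.set() updates the
// *current* value of the variable. That value is local to the machine
// running the script and is not synced back to the shared initial value.
pm.environment.set("authToken", "token-fetched-during-this-run");
```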

When the environments are copied, where are they copied to? Are they copied to the same workspace, or a different workspace? By “copied”, I am assuming you mean “forked”. Please correct me if my assumption is wrong.

Collections for clients: We have a feature for this called Partner Workspaces. It lets you invite one or more partners into a single workspace and give them controlled access to resources. You can select a Partner Lead who can invite other partners from their organization. You’ll need at least a Professional license to access this feature. Alternatively, a hacky way would be to fork the collection into an entirely new workspace for each client and “pull workspace changes” manually for each client when a change is made.

Test types: Could you share some of the scenarios you’d like to test that may require duplicating a collection?

Cheers!

Thanks for the feedback @gbadebo-bello.

For the environments, people have been in the habit of copying (not forking) an environment and just appending their names (env_bob, env_jane, etc.). I think this is a habit that spread on its own but didn’t have any real value or need. Educating people on the difference between initial and current values makes sense and is a good idea. Another bad habit everyone fell into was having all of the collections and environments in one large shared workspace. This is something else I am looking to change.

For client collections, I wasn’t clear. By “clients” I meant things like websites, desktop applications, etc. We have a few APIs, and there are multiple applications (that we also develop and maintain) that use those APIs.

As an example, we have a website with multiple pages, and a Postman collection was created where each folder in the collection represented a page. The requests in the folder were the API calls made when a user interacted with that page of the website (e.g., clicking a submit button). We also have a desktop application that uses the same API. In its collection, many of the same API calls are made, but they are in a different order because the workflow of the desktop app is different.

In this case, I am not sure how best to maintain a “single source of truth”. If an endpoint changes, I have two different places to make updates. If I have a collection that is just all of the API calls and I fork that to make an app collection, the “watch collection” option does not really work because the app collections are going to make calls in a different sequence.

For the testing: As an example, sometimes I may want to simply run all of the requests, and sometimes I want to do some scenario tests. If I am just running all of the requests, I don’t care about the order, but if I am doing scenario testing I want to run requests in a specific order. If the same request is needed in both cases, how do I prevent having multiple copies of the same request in the collection? It seems like there would be a risk that I could update one request but not the other. (Of course, automated testing could catch this.)

Some teams embrace version control for collections to minimize unwanted modifications to the base collection and enforce a review on every modification. This is a bit stricter, but it embraces the principle of least privilege in your workspace and may eradicate this issue. Ensure that only reviewers have write access and that everyone else has view access to the workspace. Anyone who wants to contribute to the collection has to first fork it, make modifications to their forked version, open a pull request, and have that pull request reviewed, approved, and merged. This is a workflow developers are already used to with version control systems like Git and GitHub.


If you’re using environment variables and the collections are in the same workspace (e.g., a collection for web and another for desktop), you only need to update the parameters in one single place: the environment variables.

Alternatively, you can experiment with having parent folders in your collection: one folder for desktop, another for web, another for mobile, etc. Each folder will contain workflows specific to just that client. When running a collection using the Collection Runner, you can deselect specific folders/requests and leave only the ones you want to run.
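
If you also run collections from the command line with Newman (assuming you use it), the same kind of selection is available there. A sketch using Newman’s Node API (file and folder names are placeholders):

```js
// Run only the "Desktop" folder of the shared collection.
const newman = require("newman");

newman.run(
    {
        collection: require("./ourapi.postman_collection.json"),
        environment: require("./ourapi_dev.postman_environment.json"),
        folder: "Desktop", // run just this client's workflows
        reporters: "cli"
    },
    (err) => {
        if (err) throw err;
        console.log("Desktop folder run complete.");
    }
);
```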


From what I understand:

Scenario 1: You have a bunch of scenario tests that run in a workflow in a specific order. This helps you validate specific workflows.

Scenario 2: You want to validate that all requests return a successful response regardless of the order in which they run.

If, in scenario 2, the order does not matter, doesn’t scenario 1 already validate that all the requests are returning a successful response, since the workflow is passing?

If you mean that the scenario workflows may not necessarily make use of all the requests, you still want a collection that has all the requests for documentation and onboarding purposes. Typically, what I do here is create two collections: one that documents all the available endpoints the API exposes, and another that demonstrates specific use cases, with each use case grouped as a folder in that collection. This workspace, for example, shows what I am referring to. The Use Cases collection contains scenarios, and that is separate from the actual collection.
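
And within a use-case folder, if the order of requests matters, you can chain them explicitly from a test script instead of relying on folder layout. A sketch (request names are placeholders):

```js
// Test script of the "Login" request in a scenario folder.
// Explicitly pick the next request so the scenario order is enforced.
if (pm.response.code === 200) {
    postman.setNextRequest("Create Order");
} else {
    // A null argument stops the collection run early.
    postman.setNextRequest(null);
}
```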

@gbadebo-bello I definitely plan on enforcing a stricter Git-like workflow to manage collections. This will prevent people from accidentally changing the “gold standard” collection.

I have considered the parent folder structure as a way of managing cases where you have one folder with all of the possible API calls and other folders where you can have a subset of calls that are specific to an API client.

And you are right that if you have tests that run everything in a collection, then having something that tests a subset of those calls in a certain order doesn’t necessarily give you much.

I think what I am struggling with is having an ultimate source of truth and minimizing redundancy. In that scenario workspace you shared, there are two requests that use the same endpoint. The bodies in those requests are different and the test scripts are different, so clearly the context is different. But what if the request were something generic, like a login? If I were to update the login request in one folder (say, add a new test), there is nothing that automatically tells me, “hey, you need to update the other request that uses the same API call”. It seems like the best way to handle this is to instill in developers an awareness that there could be multiple instances of the same request.

@cpaye I see what you mean, but I still feel it depends on how your collection is structured. Collections and workspaces are designed in such a way that a lot of the components can be abstracted and reused. For example, any property field (headers, URLs, query parameters, auth credentials, request body, etc.) can be stored inside a variable. Your scripts can be stored inside the Package Library and reused across multiple requests.

If there is a use case where the exact same request needs to be duplicated in two places (across folders or collections), the most efficient way to handle it AFAIK is to abstract the artifacts that are dynamic and likely to change, and declare them in one central location, i.e., collection/environment/global variables or the Package Library. You will still have two requests, but you will only ever need to update them in one single place.
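
As a rough sketch of what that could look like (the package name, function name, and assertions are all placeholders, and the exact package path depends on your team’s setup):

```js
// --- In the Package Library: a package such as "login-checks" ---
// The shared assertions live here once, for every copy of the login request.
// `pm` is passed in as a parameter so the helper stays self-contained.
module.exports = {
    runCommonAssertions(pm) {
        pm.test("login returns 200", () => pm.response.to.have.status(200));
        pm.test("response contains a token", () =>
            pm.expect(pm.response.json()).to.have.property("token")
        );
    }
};

// --- In the test script of *each* copy of the login request ---
// Both copies pull the same shared logic, and both can build their URL
// from a central variable such as {{baseUrl}}, so any change to the
// endpoint or the assertions happens in one place.
const loginChecks = pm.require("login-checks");
loginChecks.runCommonAssertions(pm);
```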