I am a Product Manager at Postman, researching and seeking feedback on API performance testing: simulating real-world traffic and observing your API's performance.
It would be helpful if you could take 2 minutes to fill out this form; it will help me understand what problems you are facing and how we can help!
Please feel free to comment on this thread to start a discussion based on your thoughts or on the form above.
Assuming that you mean “API performance testing: simulating real-world traffic and observing your API's performance … via Postman” (not just generally) … how exactly would Postman differ from existing, established tools like K6?
There used to be a repo for a Postman-to-K6 script converter; would this not be a better route to pursue?
Also, would a Postman performance testing tool be limited in the same way that you have now limited the Collection Runner?
Thanks for considering performance testing functionality in Postman. In real-world traffic, the load at a given point in time can be very high, and to validate that response times stay below a threshold we have to simulate that load.
To do that, we have to hit the API with, say, 1,000 concurrent users at the same time. This is not possible via Postman.
So we currently use JMeter for this. If the same functionality were implemented in Postman, it would cover the validation scenarios for all of our endpoints.
We also need load testing, a type of performance testing in which the load is increased gradually. For example: we start with 50 users hitting the API at once, then increase the load to 100 users after 5 minutes, and monitor response times at each load level. This is also not possible via Postman.
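For illustration, the stepped ramp-up described above is roughly what tools like K6 express as a staged load configuration. A minimal sketch, with the numbers mirroring the example (the endpoint URL is a placeholder):

```javascript
// k6 load profile sketch mirroring the example above
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '1m', target: 50 },   // ramp up to 50 virtual users
    { duration: '5m', target: 50 },   // hold at 50 for 5 minutes
    { duration: '1m', target: 100 },  // then increase the load to 100
    { duration: '5m', target: 100 },  // hold at 100 and watch response times
  ],
};

export default function () {
  http.get('https://api.example.com/orders'); // placeholder endpoint
}
```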
@w4dd325 James
Fundamentally, Postman would differ from other tools like K6 because Postman already helps users and teams solve their functional testing needs, e.g. exploratory testing, as well as adding assertions and running them manually or automating them via CI or on a schedule. Currently, a lot of Postman users convert their collections and requests to other formats, e.g. JMeter test plans, and use them for load testing or other kinds of performance testing. These users have already told us that they would like to use Postman for performance testing as well.
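For context, assertions in Postman are plain JavaScript written in a request's Tests tab, for example:

```javascript
// Runs after the response is received (Tests tab)
pm.test("status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("response time is below 500 ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
```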
If you are up for a discussion regarding API Performance Testing, let me know. I am keen to know the problems you currently face while Load Testing (or any other kind of API Performance Testing).
We will announce more about the feature and packaging soon.
I feel it would be more beneficial for this conversation to remain public so that everyone can benefit from the questions and answers.
Granted, Postman is good for functional/exploratory testing, but I fail to see how it would be fundamentally different from tools such as K6 or JMeter.
K6, for example, also has the ability to include assertions (checks):
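A minimal K6 script with checks, for reference (the URL is a placeholder):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const res = http.get('https://api.example.com/health'); // placeholder URL

  // k6 assertions are expressed as checks
  check(res, {
    'status is 200': (r) => r.status === 200,
    'duration < 500ms': (r) => r.timings.duration < 500,
  });
}
```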
And it has a ready-built GitHub Action for automation, etc.
Forgive my candor… but Postman's GitHub lists over 1700 open issues with the ‘feature’ tag (some dating back to 2013). Should there not be a focus on working with the users that have raised these and improving your current tool, before branching into a completely new discipline of testing?
I take it from this comment that this will indeed be a chargeable service. Is it likely that (like Blazemeter, k6, etc.) there would be a cloud-hosted version that is chargeable and a self-hosted CI version that is free to use?
I also refer back to my previous question: is it likely that the limits would be as low as the current limits set on the functional Collection Runner? From experience I can say that limits that low would, beyond any doubt, render the tool unusable (especially if a user made a silly mistake and, say, ran 1000 users instead of 100).
Is it also likely that there will be the ability to simulate network latency, or to run against geographically specific locations (e.g. running distributed load against servers located close to me in the UK instead of against servers 4000+ miles away in America)?
Also, is there likely to be a built-in HTML reporter?
Your new ‘in-house’ Postman CLI doesn’t currently support HTML reporters, whereas Newman does.
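For reference, a minimal sketch of how Newman can produce an HTML report today via its Node API, assuming the community `newman-reporter-htmlextra` package is installed (paths are placeholders):

```javascript
// Requires: npm install newman newman-reporter-htmlextra
const newman = require('newman');

newman.run({
  collection: require('./collection.json'),   // placeholder path
  reporters: ['cli', 'htmlextra'],
  reporter: {
    htmlextra: { export: './report.html' },   // where the HTML report is written
  },
}, function (err) {
  if (err) { throw err; }
  console.log('Run complete; HTML report written to ./report.html');
});
```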
And will there be integration with established APM tools, or would Postman also be looking to build its own APM?
Please bear in mind that I am a massive fan and advocate of Postman. But when discussing the move into performance testing while there is still a plethora of things that could be improved first, the term ‘Jack of all trades, master of none’ springs to mind.
@w4dd325 Thanks for the note. We appreciate the enthusiasm and the questions.
The direction we take on API performance testing will be guided by our community feedback.
Our goal is not to build what other tools have already built, but rather to solve problems for the community. Our roadmap will reflect the same.
We will also be opening up the feature soon for early access so that interested folks can give us more feedback. We would love for you to try it out and let us know about your experience. We will keep this thread updated with more news.
I think this is a fairly subjective viewpoint; the tools to do performance testing already exist, and the only real difference is that a user could stay within one tool and cover different testing disciplines.
In theory, a single app to cover multi-discipline testing is not a bad thing. But in reality, you are about to start challenging the likes of Grafana’s K6, HP/Micro Focus’s Performance Center, or Tricentis’ NeoLoad (to name a few) who already have fully established and incredibly capable tools.
Please let me explain why I feel this is significant… You state above…
Amongst the currently open tickets, you (Postman) have some fairly serious flaws in your current application that really could do with being fixed before you step into a whole new discipline of testing (in my opinion).
For example, I personally opened this issue:
I raised this issue 8 months ago, and still today (literally, today was the most recent update) it is causing problems for the users you claim to be ‘guided by’. People like me are losing years’ worth of work because of a flaw in a feature you (fairly) recently implemented and have yet to fix.
Catering to 10-15 people asking for the ability to run concurrent users, when your entire userbase is currently at risk of losing all their workspace data if they encounter the issue mentioned above, seems a little counter-intuitive, don’t you think?
It just feels like whoever is making decisions at Postman has their priorities completely wrong. Surely you would want to make your current application more resilient etc. and help your users who are being significantly affected by current/open issues, before making such drastic changes?
A tool such as Postman should be measured on the quality of the features available, not the quantity.
I know my view on this is probably coming across as intense and disruptive, but please do remember that I really like this tool, and the community that comes with it! I am also interested in the early-access etc. but would very much like to see the current issues resolved before massive changes get implemented.
First of all, performance testing is a great addition to Postman.
It has real potential to help community members produce top-notch products.
One crucial metric in performance testing is the error rate.
At the moment, an error is counted only via the response code.
Yet negative outcomes and negative testing are an integral part of the game.
At the moment it is impossible to count only actual test failures when running performance tests.
For example, if a test case asserts a 404 and the endpoint returns a 404, there should be an option not to count it as an error.
Therefore, please consider an option for the user to select what is counted as an error in a performance test. It would be very helpful for pinpointing real problems when the system is under load.
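To illustrate: a negative test like the following (standard Postman test-script syntax) passes on a 404, yet a performance run today would still count that response as an error:

```javascript
// Tests tab: a 404 is the expected, correct outcome for this negative test
pm.test("unknown resource returns 404", function () {
    pm.response.to.have.status(404);
});
```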