User Feedback - Postman Flows

The running via Newman is :100: what I was thinking.

They are coming in from an API so I’d guess there is JSON somewhere :grinning_face_with_smiling_eyes:

So there are a few more things that I think could make flows AMAZING

  1. Sub-flows with their own iterations
    repeat {X} times for part of a flow
    sign up new user (once, or {iterations} times)
    includes entering payment details
    make {y} purchases

Right now because of the complexity of such tasks, we’re looking at other tools like k6. Having more tools means more time invested. Which sucks for everyone as time, attention and budgets get diluted. There is inevitable re-work and waste.

  2. If timings could be visualized post-run so that the critical path for a flow could be identified (ideally repeated {N} times to get confidence metrics for p90, p95, p99, with some way of visualizing outliers), that would be great, even if it was only for sequential flows like current Newman / Postman.

  3. Parallelizable runs (Newman can be used this way by spawning separate processes that each run Newman)

  4. The ability to export flows to some form of documentation / narrative.
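The confidence metric asked for in point 2 could be sketched as plain percentile math over repeated run timings. This is only an illustration of the idea, not a Flows or Newman feature; the function names and sample durations are made up.

```javascript
// Sketch: given per-run durations (ms) from {N} repeated runs,
// compute p90/p95/p99 using the nearest-rank method.
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  // Nearest-rank: the smallest value that covers p% of the samples.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: 10 simulated run timings in milliseconds, one outlier.
const runs = [120, 130, 125, 140, 135, 500, 128, 132, 138, 127];
const report = {
  p90: percentile(runs, 90), // 140
  p95: percentile(runs, 95), // 500 — the outlier surfaces here
  p99: percentile(runs, 99), // 500
};
console.log(report);
```

Note how the single 500 ms outlier leaves p90 untouched but dominates p95/p99, which is exactly the kind of visibility the repeated-run idea would give.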

While the OpenAPI spec has been transformative in terms of consistency, it often leaves a lot to be desired in terms of logical progression of flow-states.

This is :fire: but it could also be even more amazing.

Hey @altimetry-cosmologi1,

  1. That is a great idea. I think this can be solved with some kind of iteration/repeat block that lets you trigger parts of flows {X} number of times. We have plans for allowing other flows to be used inside a flow like modules, the same block could be used again there as well. Let me see if we can get this block out quickly.

  2. This seems to be a unique use case. Could you give me some ideas about what a good confidence metric would look like for you? Would it be time based, success based, etc.?

  3. Unlike Newman, everything in Flows inherently runs in parallel. So if you connect two request blocks to the same start block, both will run in parallel. I do see a use case where one could set a concurrency count on particular blocks. Could you give me some examples of how you would use this, so that I can form a good mental model here?

  4. We have a block called Annotations that you can use to document your flows in plain text (markdown support coming soon). Maybe export to png or pdf is something you are looking for here?
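The per-block concurrency count floated in point 3 could behave like this minimal sketch: a fixed number of workers pulling tasks from a shared queue. This is a generic pattern, not Flows internals; the task shapes and delays are invented for the example.

```javascript
// Sketch: run async tasks with a configurable concurrency limit,
// the way a per-block "concurrency count" setting might behave.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  // Each worker pulls the next unclaimed task until the queue drains.
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(Array.from({ length: limit }, worker));
  return results;
}

// Example: five fake "request" tasks, at most two in flight at once.
const tasks = [1, 2, 3, 4, 5].map(
  (n) => () => new Promise((resolve) => setTimeout(() => resolve(n * 10), 5))
);
runWithConcurrency(tasks, 2).then((out) => console.log(out)); // [10, 20, 30, 40, 50]
```

With `limit` equal to the task count you get today's fully parallel behaviour; with `limit = 1` you get a sequential Newman-style run.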

Thanks for all the feedback :slight_smile:


Wow, thanks for the fast feedback.

  1. This is about trying to understand flows that span multiple systems. Generally we have one entrypoint, but this means a lot of data gets encapsulated: several requests might go out to various services from that one entrypoint. So one thing I’ve been experimenting with (through legacy Postman) is coordinating some tests manually via Postman. This then lets us know that, if a request results in 3-second response times, it might be made up of {X} seconds at that service, {x} seconds at downstream A, and <100ms at downstreams B, C, and D. So we know that the majority of effort should go into improving response times for downstream A.
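As a toy version of that breakdown, here is what "find the downstream that dominates the critical path" looks like over a hypothetical per-service timing map (the service names and millisecond figures are invented):

```javascript
// Sketch: given a hypothetical breakdown of where a slow request
// spends its time, find the downstream that dominates the total.
const breakdownMs = {
  service: 400,
  downstreamA: 2300,
  downstreamB: 90,
  downstreamC: 80,
  downstreamD: 60,
};

const [slowest, cost] = Object.entries(breakdownMs).reduce((worst, entry) =>
  entry[1] > worst[1] ? entry : worst
);
console.log(`${slowest} accounts for ${cost} ms`); // "downstreamA accounts for 2300 ms"
```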

The way I would see this working is that we use a condition block: if the status is not on the expected path (let’s call it the 2XX case for simplicity, though there might be some 3XX for good measure), we fail; and we’d also like to add a condition for a maximum latency (right now the ceiling is unfortunately measured in seconds, but I want it to be 1 second eventually).
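That condition could be written as a plain predicate. This is just a sketch of the logic described above; the function name and the 1000 ms ceiling are illustrative, not an existing Flows block.

```javascript
// Sketch: pass only when the status is on the expected 2XX/3XX path
// AND the response came back under the latency ceiling.
function onExpectedPath(status, responseTimeMs, maxLatencyMs) {
  const statusOk =
    (status >= 200 && status < 300) || (status >= 300 && status < 400);
  return statusOk && responseTimeMs <= maxLatencyMs;
}

// Ceiling of 1000 ms (the eventual 1-second target mentioned above).
console.log(onExpectedPath(200, 850, 1000));  // true
console.log(onExpectedPath(302, 850, 1000));  // true  (3XX "for good measure")
console.log(onExpectedPath(200, 3200, 1000)); // false (too slow)
console.log(onExpectedPath(500, 120, 1000));  // false (wrong status)
```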

Again, thank you.


Can we use data files for variables in the flow?

I cannot find the documentation for the new feature Postman Flows. Any leads on how I can find the docs for this?

Coming soon: User Feedback - Postman Flows - #33 by joyce


Not yet, but we do want to support that in the future :+1:

@anudeep-postman Do we have cookie support in Flows?
i.e., say cookies created by one Send Request block get set/saved, and the same cookies get used by a second Send Request block (like when you run chained requests in a collection)?
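Mechanically, the hand-off being asked about amounts to capturing `Set-Cookie` headers from response A and sending them back as a `Cookie` header on request B. This is a simplified illustration (no expiry, domain, or path handling); the helper names are made up.

```javascript
// Sketch: minimal cookie hand-off between two send-request blocks.
// Parse Set-Cookie headers from response A into a jar.
function extractCookies(setCookieHeaders) {
  const jar = {};
  for (const header of setCookieHeaders) {
    // Keep only the name=value pair; drop attributes like Path or HttpOnly.
    const [pair] = header.split(';');
    const eq = pair.indexOf('=');
    jar[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return jar;
}

// Serialize the jar into the Cookie header for request B.
function toCookieHeader(jar) {
  return Object.entries(jar)
    .map(([name, value]) => `${name}=${value}`)
    .join('; ');
}

// Example: response A sets two cookies; request B sends them back.
const jar = extractCookies([
  'sessionId=abc123; Path=/; HttpOnly',
  'theme=dark; Path=/',
]);
console.log(toCookieHeader(jar)); // "sessionId=abc123; theme=dark"
```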


@Isuruimesh1, sorry, I was wrong before. You’ll have to use the Combine block if you want to combine two data packets and pass them on to a block. We’re seeing some bugs in the Combine block, and they should be resolved soon (in a day or two).

It would be so great if it were possible to convert a Flow into a collection with requests, or the opposite :slight_smile:

@altimetry-cosmologi1 Thanks for the feedback! Would you be able to raise a feature request on GitHub?

The “create variables” step only offers variables that are explicitly used in the request. If a request has “auth: inherit from parent” and the parent uses a variable, that variable is not offered in the “create variables” box.

This is absolutely amazing. Already using it to build some neat reports. One feature I’d like is simple ‘STOP’ and ‘PAUSE’ buttons. These would be helpful whenever nested loops with large data sets have to be debugged. I would also love to see an option to export terminal output/variables as a CSV file.

Again, this is an absolutely amazing feature, thank you so much!

Another comment - sorry for the spam, but I’m loving this. Would love to be able to give each step a name, label, or comment to keep track of what each step does. Color-coded boxes would also be nice. A ‘calculation’ block would be great for when we need to generate output based on something such as adding two number fields together and then doing validation on the result. Finally, I would really like it if we were able to create object variables. That way you can have an object that populates as the process goes along, ending in a final object ready for output.

Again, thank you for doing this amazing work!

Hi @shu56134! Welcome to the community :slight_smile:

Thanks for bringing this to our notice. We will be looking into resolving this in the future releases!

That’s a fair ask, and we’re working on making it possible. Stopping a flow is right around the corner, but pausing for debugging is something we want to polish more before we build it, so any inputs there are welcome :smiley:

You can try using the “Annotate” block. It lets you write freely anywhere on the canvas right now

We’ll definitely be adding this block very soon!

This seems interesting, but could you expand on it a bit more? Perhaps give an example of the use case you have in mind?


So excited to follow up on the development process for this - thanks for responding.

As for the final point, I’ll explain with an example. I work with student data and make three calls: the first gets a list of students, the second gets the courses for each student, and the third gets the grades for each course. My goal is to build an object variable that contains the student name, course name, and grades, and then print it out to the terminal. I’ve been able to mock this by using the ‘Combine’ block and grabbing variables in the first/second step.

Essentially I’m trying to make an object made up of variables that were collected during the process. Again, this is kind of doable with the ‘Combine’ block; however, it feels a tad clunky, since those blocks can only accept 2 data inputs at a time, which means they have to be chained if more than 2 sets of data need to be combined.
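Outside of Flows, the object being described is just a fold over the three chained responses. Here is a sketch of that merge; the data shapes (`id`, `studentId`, `courseId`, etc.) are hypothetical stand-ins for whatever the three APIs actually return.

```javascript
// Sketch: fold three chained responses into one nested output object,
// instead of chaining two-input Combine blocks.
const students = [{ id: 1, name: 'Ada' }];
const courses = [{ studentId: 1, courseId: 'MATH101', courseName: 'Algebra' }];
const grades = [{ studentId: 1, courseId: 'MATH101', grade: 'A' }];

// For each student, attach their courses; for each course, its grade.
const report = students.map((s) => ({
  student: s.name,
  courses: courses
    .filter((c) => c.studentId === s.id)
    .map((c) => ({
      course: c.courseName,
      grade: grades.find(
        (g) => g.courseId === c.courseId && g.studentId === s.id
      )?.grade,
    })),
}));

console.log(JSON.stringify(report, null, 2));
```

A single N-input merge like this is what a "build object variable" block would make unnecessary to chain by hand.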

Hope my explanation helps! Thank you again. Going to spend the remainder of the week playing around with this. Would you recommend I use the online version, or stick to the Desktop version as I am now?

I can consistently make Flows crash by defining a Variable block with a constant value of ["test"] and assigning it to a variable. And unfortunately, when flows crash there is no way to recover them, as they crash every time they are opened. Having to start from scratch sucks, so maybe add a way to undo/recover from outside the flow (in addition to fixing the string bug).

Additionally, there does not seem to be any way to define arrays in variables, as far as I can tell. Any JSON structure I define in a Variables block is of type Any, even though the value is Repeated, so For-Each blocks will not accept it (they reject type Any).

Additionally, there does not seem to be a way to validate JSON null/undefined values.
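For reference, the kind of null/undefined validation being asked for can be expressed as a small recursive walk over the payload. This is only a sketch of the check, not a Flows feature; the function name and sample payload are invented.

```javascript
// Sketch: recursively collect the paths of null/undefined fields
// in a JSON payload, the kind of validation asked for above.
function findNullPaths(value, path = '') {
  if (value === null || value === undefined) return [path || '(root)'];
  if (typeof value !== 'object') return [];
  return Object.entries(value).flatMap(([key, v]) =>
    findNullPaths(v, path ? `${path}.${key}` : key)
  );
}

const payload = { id: 7, name: null, address: { city: 'Oslo', zip: undefined } };
console.log(findNullPaths(payload)); // [ 'name', 'address.zip' ]
```

An empty result means the payload is clean; a non-empty list names exactly which fields failed.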