Danny,
Thanks for the response. My example is very amateurish at present, as I'm still stepping through the newman .on() events to determine what data is available when. Right now the only things I am unable to collect are the complete Postman App Test/Assertion Results (I only get the failed ones) and the Request Body (only because I have not gotten that far yet).
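From the docs and the cli reporter it looks like there is an 'assertion' event that fires for every assertion, passed or failed, and the 'request' event should also expose the outgoing request body, so my rough (untested) plan for those two gaps is the sketch below; the exact shape of the event args is my assumption:

const newman = require('newman');

// Untested sketch: log every assertion result, not just the failures.
// Assumption: the 'assertion' event gives (error, o) where error is set only
// when the assertion fails, and o carries the assertion name, the item, and a
// skipped flag.
var assertionLog = [];

newman.run({
    collection: require("./PostmanCollections/newmanWriteTest.json"),
    environment: require("./A1Env.postman_environment.json")
}).on('assertion', function (error, o) {
    assertionLog.push({
        item: o.item.name,
        assertion: o.assertion,
        skipped: o.skipped,
        passed: !error,
        error: error ? error.message : null
    });
}).on('request', function (error, data) {
    // Assumption: the outgoing body is available as data.request.body here.
    if (!error && data.request.body) {
        console.log(data.item.name, "body:", data.request.body.toString());
    }
}).on('done', function () {
    console.log(JSON.stringify(assertionLog, null, 2));
});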
In the Postman App I have a set of collections which are set up in such a way as to drive all Requests within one collection from a random database record selected by the first Request/Response. Each item in the collection uses a set of passed variables to configure the Request URL and then collects and creates the variables to pass to the next item.
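For reference, the chaining inside the collection is just the usual pattern of setting environment variables in one item's Tests script and referencing them in the next item's URL, roughly like this (recordId and the response fields are made-up names for illustration):

// Tests tab of the first request: grab a field off the randomly selected
// record and stash it for the next request.
var record = pm.response.json();
pm.environment.set("recordId", record.id);

// The next request's URL then references it, e.g.
// {{baseUrl}}/records/{{recordId}}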
My goal is to be able to run these collections at any time, log and save all the results, then run the exact set of tests in another environment (i.e., using that same randomly selected record from the original run). Once done in both environments, a simple compare of files should provide the results I want.
To do that I want the following to match for each “item” from both executions (a rough sketch of how I hope to collect them in one place follows the list):
• Request URL
• Request Body
• Response
• Postman App Test/Assertion Results
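My current thinking is that all four of these may already be together in the run summary handed to the 'done' event, something like the untested sketch below (I'm assuming summary.run.executions carries the request, response, and assertions for each item, which is what the JSON reporter appears to write out):

const newman = require('newman');
const fs = require('fs');

newman.run({
    collection: require("./PostmanCollections/newmanWriteTest.json"),
    environment: require("./A1Env.postman_environment.json")
}).on('done', function (error, summary) {
    if (error) { return console.error(error); }

    // One record per executed item with the four fields I want to compare.
    // The field names under each execution are my assumption.
    var results = summary.run.executions.map(function (exec) {
        return {
            item: exec.item.name,
            requestUrl: exec.request.url.toString(),
            requestBody: exec.request.body ? exec.request.body.toString() : "",
            response: exec.response ? exec.response.text() : "",
            assertions: (exec.assertions || []).map(function (a) {
                return { name: a.assertion, passed: !a.error, skipped: !!a.skipped };
            })
        };
    });

    fs.writeFileSync("A1Env_results.json", JSON.stringify(results, null, 2));
});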
The initial run of the collection will capture, format, package, and save the information related to the specific Collection/Request Name/<PreRequest/Request>. A later compare run will use a directives file to initiate the same collection in a new environment and pick up the initial results to compare against. The initial data-collection run and the compare run may be executed manually via Node.js/newman or triggered by a new build in TeamCity. The original Postman app collections may still be run via the Postman app or as part of the collect/compare scripts.
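The compare run itself should then just be plain Node over the two saved files, along these lines (the file names are only examples):

const fs = require('fs');

// Load the baseline saved from the first environment and the file from the
// second run, then diff item by item.
var baseline = JSON.parse(fs.readFileSync("A1Env_results.json", "utf8"));
var current = JSON.parse(fs.readFileSync("A2Env_results.json", "utf8"));

baseline.forEach(function (base) {
    var other = current.find(function (c) { return c.item === base.item; });
    if (!other) {
        console.log("MISSING in second run:", base.item);
        return;
    }
    ["requestUrl", "requestBody", "response", "assertions"].forEach(function (field) {
        if (JSON.stringify(base[field]) !== JSON.stringify(other[field])) {
            console.log("MISMATCH", base.item, field);
        }
    });
});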
Thanks for your interest and any advice you may have.
Jill
const newman = require('newman');
const fs = require('fs');

var tcName; // item name captured during the prerequest event, reused later

newman.run({
    collection: require("./PostmanCollections/newmanWriteTest.json"),
    globals: require("./QA_Workspace.postman_globals.json"),
    environment: require("./A1Env.postman_environment.json"),
    reporters: "cli"
}).on('prerequest', function (error, summary) {
    console.log("\n");
    console.log(" ** ON PREREQUEST **");
    if (error) {
        console.error(error);
        console.error('collection run encountered an error.');
    }
    else {
        console.log(" ** ON PREREQUEST ELSE *********************");
        // Remember the item name so the later 'request' handler can use it
        tcName = summary.item.name;
        console.log("item name:", summary.item.name);
        console.log(JSON.stringify(summary.item.name));
    }
}).on('request', function (error, data) {
    console.log("\n");
    console.log(" ** ON REQUEST **");
    if (error) {
        console.error(error);
    }
    else {
        console.log("TCNAME is : ", tcName);
        // data.response.stream is a Buffer containing the raw response body
        var responseBody = data.response.stream;
        var responseStr = responseBody.toString();
        fs.writeFile("response.txt", responseStr, function (error) {
            if (error) {
                console.error("request FUNCTION ERROR *******************");
            }
        });
    }
}).on('beforeRequest', function (error, args) {
    if (error) {
        console.error(error);
        return;
    }
    console.log("\n");
    console.log(" ** BEFORE REQUEST **");
    // Save the fully resolved request URL for this item
    fs.writeFile("requestURL.txt", args.request.url.toString(), function (error3) {
        if (error3) {
            console.error("beforeRequest FUNCTION ERROR *******************");
        }
    });
}).on('console', function (error, summary) {
    if (error) {
        console.error("ON CONSOLE FUNCTION ERROR *******************");
    }
    else {
        console.log("*********************** ON CONSOLE ************************************");
        console.log(JSON.stringify(summary));
    }
});