Test Suite Speedrun: Agent Mode Generated 18 Assertions Across 3 APIs in Under 20 Minutes
What I Built: A complete 3-API test suite covering:
- Weather data (Open-Meteo API)
- Cryptocurrency pricing (Coinbase API)
- Random data validation (Cat Facts API)
Phase 1: Collection Setup
Created a fresh collection called “3-API Test Suite Speedrun” with three public API endpoints.
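For reference, the three services expose well-known public endpoints. My exact query parameters aren't shown in this post, so the weather coordinates below are representative:
- Weather: https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current=temperature_2m,wind_speed_10m
- Bitcoin price: https://api.coinbase.com/v2/prices/BTC-USD/spot
- Cat fact: https://catfact.ninja/fact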
Phase 2: Agent Mode Test Generation
Request #1: Weather API
My Prompt:
Write comprehensive automated tests for this weather API request including:
- Status code validation
- Response time under 2000ms
- Schema validation for temperature_2m and wind_speed_10m fields
- Check that temperature is a number
- Check that wind_speed is a number
Result: Agent Mode instantly generated 6 comprehensive assertions validating status codes, response structure, data types, and performance benchmarks.
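This isn't the verbatim output, but a minimal sketch of what those assertions look like in Postman's pm API, assuming Open-Meteo's usual response shape where the requested fields live under a top-level current object:

```javascript
// Status and performance checks
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response time is under 2000ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(2000);
});

// Schema checks on the current-weather block
pm.test("Response contains temperature_2m and wind_speed_10m", function () {
    const current = pm.response.json().current;
    pm.expect(current).to.have.property("temperature_2m");
    pm.expect(current).to.have.property("wind_speed_10m");
});

// Type checks
pm.test("temperature_2m is a number", function () {
    pm.expect(pm.response.json().current.temperature_2m).to.be.a("number");
});

pm.test("wind_speed_10m is a number", function () {
    pm.expect(pm.response.json().current.wind_speed_10m).to.be.a("number");
});
```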
Request #2: Bitcoin Price API
My Prompt:
Create automated tests that verify:
- 200 status code
- Response time under 1500ms
- JSON structure has data.amount field
- Amount is a valid number greater than 0
- Currency field exists and equals "USD"
Result: Agent Mode generated 6 more assertions with proper nested JSON validation and an edge-case check (amount > 0).
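Again, a sketch rather than the verbatim output. One detail worth flagging as my own assumption: Coinbase's spot price endpoint returns amount as a string, so a robust version of the generated check coerces it before comparing:

```javascript
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response time is under 1500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(1500);
});

// Nested JSON structure: { data: { amount, currency, ... } }
pm.test("Response has data.amount", function () {
    const json = pm.response.json();
    pm.expect(json).to.have.property("data");
    pm.expect(json.data).to.have.property("amount");
});

pm.test("Amount is a number greater than 0", function () {
    // Coinbase serves amount as a string, so coerce it first
    const amount = Number(pm.response.json().data.amount);
    pm.expect(amount).to.be.a("number").and.to.be.above(0);
});

pm.test("Currency field equals USD", function () {
    pm.expect(pm.response.json().data.currency).to.eql("USD");
});
```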
Request #3: Cat Fact API
My Prompt:
Add tests to validate:
- Status is 200
- Response has "fact" and "length" properties
- Fact is a non-empty string
- Length matches the actual fact length
- Response time is under 1000ms
Result: Agent Mode generated 6 assertions… but here’s where it got interesting.
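Before getting to the bug, here's roughly what the generated suite covered, sketched against catfact.ninja's { fact, length } response (again, not the verbatim output):

```javascript
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response has fact and length properties", function () {
    const json = pm.response.json();
    pm.expect(json).to.have.property("fact");
    pm.expect(json).to.have.property("length");
});

pm.test("Fact is a non-empty string", function () {
    const fact = pm.response.json().fact;
    pm.expect(fact).to.be.a("string");
    pm.expect(fact.length).to.be.above(0);
});

pm.test("Length matches the actual fact length", function () {
    const json = pm.response.json();
    pm.expect(json.fact.length).to.eql(json.length);
});

pm.test("Response time is under 1000ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(1000);
});
```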
Beta Reality Check: The Bug
Two tests failed:
- Property assertion had incorrect syntax
- Response time exceeded 1000ms (network variability)
What I Did:
- Analyzed the generated code and identified the assertion syntax error
- Fixed the property validation logic manually (sketched below)
- Adjusted the response time threshold to 2000ms for real-world reliability
- Re-ran tests successfully
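I didn't save the exact broken line, so the snippet below is a hypothetical reconstruction of that class of mistake (asserting against the response wrapper instead of the parsed body), plus the relaxed timing threshold:

```javascript
// HYPOTHETICAL reconstruction, not the exact generated line:
// the assertion targets the response object instead of the parsed JSON body
pm.test("Response has fact property", function () {
    pm.expect(pm.response).to.have.property("fact"); // fails: wrong target
});

// The manual fix: parse the body first, then assert on it
pm.test("Response has fact property", function () {
    pm.expect(pm.response.json()).to.have.property("fact");
});

// The relaxed threshold for real-world network variability
pm.test("Response time is under 2000ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(2000);
});
```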
The Takeaway: This showed Agent Mode’s current beta limits AND gave me hands-on experience with the test code structure. Having 95% accuracy that saves 20 minutes is still a massive win. The error was educational, not blocking.
Phase 3: Failure Diagnosis Test
To test Agent Mode’s diagnostic capabilities, I intentionally broke the Weather API by changing the endpoint to /WRONG_ENDPOINT.
My Prompt:
Analyze these test failures. What went wrong? Provide a detailed diagnosis with root causes and suggested fixes.
Agent Mode Response: Immediately identified the 404 error, explained the root cause (invalid endpoint), and provided step-by-step instructions to fix the URL.
After restoring the correct endpoint, all tests passed again.
Phase 4: Collection Runner Results
Ran the complete collection through Postman’s Collection Runner to validate the entire test suite end-to-end.
The Results
- 18 total assertions across 3 diverse APIs
- ~120 lines of test code generated automatically
- ~20 minutes saved vs. writing the same tests manually
- Smart error diagnosis when things broke
- 95% accuracy (1 syntax bug out of 18 assertions, fixed in 2 minutes)
- Real beta testing experience with honest feedback
What Impressed Me The Most
Context awareness: Agent Mode understood the differences between weather data, financial APIs, and random text endpoints, generating appropriate validation logic for each without me specifying data types or structures.
Conversational workflow: Instead of Googling “Postman assertion syntax” or copying boilerplate, I described what I needed in plain English and Agent Mode handled the implementation.
Error diagnosis: When I broke the Weather API intentionally, Agent Mode didn't just say “tests failed”; it pinpointed the 404, explained why schema validation broke, and gave actionable fixes.
What Could Be Better
Edge case suggestions: I'd love Agent Mode to proactively suggest additional validations I might have missed (e.g., “Should we also check for negative temperatures?” or “Want to add rate limit handling?”); see the sketch after this list.
Code explanation toggle: An option to have Agent Mode explain why it chose certain assertion patterns would be great for learning.
Syntax validation: The Cat Facts test had a syntax error; a pre-generation validation check could catch these before inserting code.
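To make the first point concrete, here's the kind of proactive suggestion I mean for the weather request. The plausibility bounds are my own assumption, not anything Agent Mode generated:

```javascript
// Sanity bounds for surface temperature in °C (illustrative limits)
pm.test("temperature_2m is within a plausible range", function () {
    const temp = pm.response.json().current.temperature_2m;
    pm.expect(temp).to.be.within(-90, 60);
});
```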
Real Talk
This wasn’t about getting perfect AI-generated code. It was about speed, learning, and workflow transformation.
Before Agent Mode: 40 minutes of writing boilerplate, debugging syntax, cross-referencing docs.
With Agent Mode: 20 minutes of conversation + 2 minutes of fixing one bug = production-ready test suite.
Even with one syntax error, Agent Mode turned tedious test writing into an interactive experience. The bug became a teaching moment, not a blocker.
For a beta product, this is exactly the kind of assistant I want: fast, mostly accurate, and honest about its limits.
Final Thoughts
Agent Mode feels like pair programming with an AI that knows Postman inside out. It's not replacing my testing skills; it's amplifying them by handling the repetitive parts so I can focus on strategy.
If Agent Mode adds proactive suggestions and self-validation in future updates, it’ll be unstoppable.
Can’t wait to see where this beta goes! 
~ Sagnik