🤖 $250 Community Challenge – Agent Mode Prompts | 5 days

This week’s challenge is about putting Agent Mode prompts to work on your workflow.

We’ve pulled together a small set of ready-to-run prompts for the challenge, and we’ve also published a full Agent Mode Prompt Gallery where you can browse and run many more prompts directly.

Pick one, run it on something real, and share what the agent handled for you.

The challenge

  1. Pick one of the prompts below
  2. Run it on a real collection, spec, or workflow
  3. Share what changed in your workflow

Starter prompts

You can use one of the prompts below, or explore the full Agent Mode Prompt Gallery and choose one that fits your work.

(If you’re using the gallery, just let us know which prompt you ran.)

  • Fix broken requests: Fix all the failing requests in this collection
  • Generate tests: Generate tests for each API in this collection and validate response schemas
  • Create documentation: Generate comprehensive documentation describing each endpoint in this collection, including purpose, parameters, and example responses
  • Debug end-to-end: Run this collection and fix all the failing tests
  • From requirements to APIs: Turn this product requirements document into an OpenAPI spec, a collection, a mock server, and tests
  • More options here.

How to enter

Reply to this thread with:

  • The prompt you ran
  • What you ran it on
  • What the agent handled
  • What you didn’t have to do anymore

Screenshots or GIFs are welcome but optional.

Prize and timing

  • Prize: $250 Visa Gift Card
  • Runs: Monday 1/26 to Friday 1/30
  • Deadline: Friday at 10:00am BST / 2:30pm IST
  • Winner announced: Friday

We’re looking for real workflows and real outcomes.

3 Likes

Prompt I Ran
“Create complete API endpoint documentation for all the workout management
endpoints in my collection, including request/response examples,
authentication requirements, and error codes.”

What I Ran It On

Collection: “My Collection” - A Fitness Workout Tracker API
The API has roughly 14 endpoints, including:

POST Register, Login, Create workouts
GET Get users, Get predefined workouts, Get reminders
PATCH Mark items done/undone, Edit workout
DELETE Delete workouts
Plus reminder and Paystack dummy payment endpoints

What The Agent Handled

  • Automatically generated documentation for all 14 endpoints including the
    HTTP methods and full endpoint URLs,
    Request body structures with field types,
    Authentication requirements (detected from existing requests),
    Response formats and status codes, and
    Relationships between endpoints (e.g., “Create workout” → “Get workouts”) and notes.

  • Intelligently organized endpoints by their feature:

User Management (Register, Login)
Workout management
Reminder
Payment

  • Added contextual details I would’ve definitely missed:

Form-data structure for file uploads
Query parameters and filters
Error handling scenarios

What I Didn’t Have to Do Anymore

  1. No manual documentation writing - Saved ~3-4 hours of writing descriptions for each endpoint (huge relief because my teammates are always breathing down my neck for docs I detest writing :sweat_smile:)
  2. No copy-pasting request/response examples - Agent extracted everything from actual API calls
  3. No maintaining separate docs - Documentation updates automatically when API changes
  4. No guessing at field requirements - Agent analyzed actual requests to determine required vs optional fields
  5. No formatting headaches - Professional documentation structure generated instantly
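One way such inference can work is presence-counting across captured payloads: a field present in every sample is treated as required. A minimal sketch (hypothetical workout fields; not Postman's actual inference logic):

```javascript
// Sketch: infer required vs optional fields from sample request bodies.
// A field present in every sample is treated as required; otherwise optional.
function inferFieldRequirements(samples) {
  const counts = new Map();
  for (const body of samples) {
    for (const key of Object.keys(body)) {
      counts.set(key, (counts.get(key) || 0) + 1);
    }
  }
  const required = [];
  const optional = [];
  for (const [key, count] of counts) {
    (count === samples.length ? required : optional).push(key);
  }
  return { required, optional };
}

// Example: three captured "create workout" payloads (hypothetical field names)
const result = inferFieldRequirements([
  { name: 'Leg day', duration: 45, notes: 'warm up first' },
  { name: 'Cardio', duration: 30 },
  { name: 'Push day', duration: 60, notes: '' },
]);
console.log(result); // name and duration appear in every sample
```

Here `name` and `duration` come out as required and `notes` as optional, because `notes` is absent from one sample.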

2 Likes

That’s great @etengannabel50 :trophy:

Who else has been using Agent Mode to improve their Collections? :heart:


I created a thread with some Agent Mode tips I was using on a project to add in all the important elements and take it up a level.

3 Likes

Prompt I ran

“Generate Postman tests for every request to validate status codes and ensure responses match their expected schemas.”

What I ran it on

I created a collection called Trello API that lets you create a board, get all boards, create action items (todo, doing, done), delete a board, and so on.

The collection authenticates with an API key and token supplied as query parameters on each request.

What the agent handled

The agent generated and updated the post-response scripts for each request in the collection, producing consistent, reliable tests. All of them pass in the Test Results tab.

What I didn’t have to do anymore

Postman offers example snippets for writing tests, but it’s much easier when Agent Mode writes them for you automatically. With Agent Mode, you no longer need to inspect each endpoint manually to work out its test script.

Also, before Agent Mode, I usually copy-pasted example test snippets across requests.
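The kind of check those generated tests perform can be sketched in plain JavaScript (a hypothetical board schema and a hand-rolled type checker; not the agent's actual scripts):

```javascript
// Sketch of what a generated test verifies: the status code matches and
// response fields have the expected primitive types.
function checkResponse(status, body, expectedStatus, schema) {
  const errors = [];
  if (status !== expectedStatus) {
    errors.push(`expected status ${expectedStatus}, got ${status}`);
  }
  for (const [field, type] of Object.entries(schema)) {
    if (typeof body[field] !== type) {
      errors.push(`field "${field}" should be ${type}`);
    }
  }
  return errors;
}

// Hypothetical Trello-style "board" response
const errors = checkResponse(
  200,
  { id: 'abc123', name: 'My Board', closed: false },
  200,
  { id: 'string', name: 'string', closed: 'boolean' }
);
console.log(errors.length === 0 ? 'PASS' : errors);
```

In a Postman post-response script the same assertions would be expressed with `pm.test` and `pm.expect`.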

1 Like

Sharing a real-life experience :)

The Investor Demo Miracle: From “We’re Still Building” to “Here’s a Working Platform.”

Prompt I ran:

“Turn this PRD into an OpenAPI spec, a Postman collection, a mock server, and tests.”

What I ran it on:
Tivra Platform, a B2B marketplace for biomass, briquettes & biodiesel trading.

Our PRD covered: multi-role auth (Buyer/Seller/Transporter/Admin), catalog + commodity categorization, real-time enquiry + negotiation, logistics + route tracking, messaging + notifications.

The catch? Django backend was ~60% complete, React was still wireframes, and investors wanted a working demo in 4 days.


What Agent Mode delivered in 15 minutes

:white_check_mark: OpenAPI 3.0 spec with 47 endpoints across 8 resource groups
(Auth, Products, Enquiries, Orders, Routes, Messages, Users, Transactions) + JWT + role-based schemas + proper negotiation/logistics models + enums (Biomass/Briquettes/Biodiesel)

:white_check_mark: Postman collection with JWT scripts, role-based env variables, full catalog CRUD, negotiation flows (Enquiry → Counter-offer → Accept → Order), and route assignment/tracking sequences

:white_check_mark: Mock server with realistic data: Rice Husk/Bagasse/Wood Chips listings, price-history negotiations, full order lifecycle, pickup/delivery coordinates

:white_check_mark: Validation tests: auth for all roles, commodity filtering, negotiation rule (counter-offer can’t exceed ask by >20%), order transitions (Pending → Confirmed → In Transit → Delivered), RBAC (buyers can’t create products, sellers can’t accept routes)
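The negotiation and order-transition rules above reduce to small predicates. A sketch, assuming the 20% cap is measured against the ask price and order transitions are strictly sequential (both are my reading of the rules, not the agent's generated code):

```javascript
// Rule 1 (assumed semantics): a counter-offer may not exceed the ask by >20%.
function counterOfferAllowed(ask, counterOffer) {
  return counterOffer <= ask * 1.2;
}

// Rule 2 (assumed semantics): orders move one step at a time along this flow.
const ORDER_FLOW = ['Pending', 'Confirmed', 'In Transit', 'Delivered'];
function transitionAllowed(from, to) {
  const i = ORDER_FLOW.indexOf(from);
  return i !== -1 && ORDER_FLOW[i + 1] === to;
}

console.log(counterOfferAllowed(8500, 8200));   // within the cap
console.log(counterOfferAllowed(8500, 10500));  // more than 20% above ask
console.log(transitionAllowed('Pending', 'Confirmed'));
console.log(transitionAllowed('Pending', 'Delivered')); // skips a step
```

Encoding the rules as predicates like this is also what makes them testable in a mock server before the real backend exists.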


What I didn’t have to do

:cross_mark: Sell “design mockups” and ask for 6 more weeks
:cross_mark: Build throwaway demo endpoints
:cross_mark: Run panic syncs between frontend/backend/product
:cross_mark: Manually craft 47 JSON responses
:cross_mark: Watch our CTO improvise API behavior mid-demo
:cross_mark: Risk funding because core mechanics weren’t provable


What actually happened

:white_check_mark: Frontend started instantly using the mock server
:white_check_mark: Demo showed full journeys:
Seller lists 500 tons bagasse briquettes → Buyer enquires → Negotiation ₹8,500/ton → ₹8,200/ton → Accepted → Transporter route Pune → Chennai → Order tracking pickup → delivery
:white_check_mark: Investors saw a platform — not Figma
:white_check_mark: Backend used OpenAPI as the contract (no API debates)
:white_check_mark: QA ran tests as real endpoints shipped — caught 12 breaking changes before demo day


The moment that mattered

Investor asked: “What happens if a buyer negotiates below seller’s minimum?”
We clicked through. Mock server returned:
422 Unprocessable Entity — “Counter-offer ₹7,000 is below seller’s minimum ₹7,500.”
Investor nodded.

For this challenge, I used my API Reliability Intelligence collection, originally built for the Built With Agent Mode – December Contest, and applied Agent Mode to a problem that most teams underestimate: documentation decay.

APIs change, errors repeat, context gets lost, and documentation quietly becomes outdated. My goal was to show how Agent Mode can turn documentation into a living system, not a static page.

Agent Mode prompt I ran

I ran a single, intentional Agent Mode workflow composed of four prompts:

  • Generate endpoint documentation
    Creates clean, consistent documentation per request (purpose, parameters, examples, errors)
  • Share documentation with a Postman Notebook
    Converts documentation into a single, collaborative, shareable artifact
  • Generate a changelog of documentation updates
    Tracks how documentation evolves automatically as the API changes
  • Evaluate errors against previous ones and their resolutions
    Adds real failure history and resolution context to the documentation

These prompts are designed to work together as a documentation lifecycle, not as isolated outputs.

I ran Agent Mode on my API Reliability Intelligence Postman collection, which contains 8 production-style requests used to analyze:

  • latency and performance signals
  • failure and error patterns
  • reliability scorecards
  • historical execution data

The collection already worked well functionally, but its documentation relied heavily on the original author’s context, making it harder to share, onboard others, or revisit later.

What the agent handled autonomously

Agent Mode independently handled the following:

  • Endpoint documentation preparation
    Generated structured Markdown documentation for all 8 requests, including:
    • request purpose and usage context
    • endpoint details and parameters
    • example responses
    • common error scenarios and best practices
      (Prepared cleanly for manual attachment due to a tooling limitation.)
  • Postman Notebook creation
    Created a Notebook titled “API Reliability Intelligence – Complete Guide” that:
    • explains the collection architecture
    • walks through the four analysis modules
    • demonstrates key requests
    • explains how to interpret reliability scorecards
      This became the primary sharing and onboarding artifact.
  • Documentation changelog system
    Implemented structured changelog tracking using collection variables:
    • versioned entries
    • timestamps
    • change summaries
    • initial v2.0 entry documenting the documentation system itself
  • Error resolution intelligence
    Added automatic error tracking that:
    • captures 4xx, 5xx, and test failures
    • stores historical occurrences
    • detects recurring patterns
    • suggests resolutions based on past fixes

This connects runtime behavior directly back into the documentation layer.
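The error-history idea can be sketched in plain JavaScript rather than collection variables (the names `recordFailure` and `suggestResolution` are mine, for illustration only):

```javascript
// Sketch: record failures keyed by endpoint + status code, flag recurring
// patterns, and surface a previously stored resolution.
const history = new Map();

function recordFailure(endpoint, status, resolution) {
  const key = `${endpoint}#${status}`;
  const entry = history.get(key) || { count: 0, resolution: null };
  entry.count += 1;
  if (resolution) entry.resolution = resolution;
  history.set(key, entry);
  return entry;
}

function suggestResolution(endpoint, status) {
  const entry = history.get(`${endpoint}#${status}`);
  if (!entry) return null;
  return { recurring: entry.count > 1, resolution: entry.resolution };
}

// Hypothetical failure seen twice on a scorecard request
recordFailure('/scorecards', 503, 'retry with backoff; upstream was cold-starting');
recordFailure('/scorecards', 503);
console.log(suggestResolution('/scorecards', 503));
```

The second occurrence is flagged as recurring and comes back with the resolution recorded the first time, which is the "knowledge reuse" the post describes.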

Manual work eliminated

With Agent Mode in place, I no longer need to:

  • manually write or standardize endpoint documentation
  • maintain documentation changelogs by hand
  • re-investigate recurring errors from scratch
  • explain the collection live every time it’s shared

The agent preserves and reuses knowledge that would otherwise be lost.

Before vs after

Before

  • Documentation was static and minimal
  • Error knowledge was tribal and transient
  • Sharing required live walkthroughs
  • No visibility into documentation changes

After

  • Every request has structured documentation
  • A Notebook acts as a single source of truth
  • Documentation changes are tracked automatically
  • Error history and resolutions are embedded
  • The collection improves as it is used



1 Like

Prompt I ran
From requirements to APIs: Turn this product requirements document into an OpenAPI spec, a collection, a mock server, and tests
What I ran it on
I used a rough Google Doc where I had written product requirements for a college event management system (features like student registration, event listing, admin approvals, and QR-based attendance). The document was unstructured, half-baked, and written in plain language.
What the agent handled
The agent:
Converted my messy requirements into a proper OpenAPI spec
Auto-generated API endpoints with correct request/response schemas
Created a Postman collection and a mock server so I could test flows instantly
Added basic tests to validate responses without me writing assertions manually
What I didn’t have to do anymore
I didn’t have to manually design API routes
I didn’t have to guess request/response formats
I didn’t have to set up mock data by hand
I didn’t have to write repetitive test cases
What changed in my workflow
Earlier, turning an idea into working APIs took me days.
With Agent Mode, I went from requirements → testable APIs in under an hour.
Now I start backend projects by letting the agent create the foundation, and I focus only on business logic and edge cases.

1 Like

Using Postman Agent Mode to Add Error Handling to Medicotest API

The Prompt I Ran

Prompt: “Add detailed 4xx and 5xx error responses to each endpoint in this collection (Get clear and consistent error responses for your API endpoints)”


What I Ran It On

Project Context: I’m building Medicotest - a healthcare platform where patients in India (especially Punjab) can upload medical reports (blood tests, X-rays, ultrasounds) and receive:

  • AI-powered health diagnoses
  • Affordable doctor recommendations (₹400-₹1500 consultation fees)
  • Treatment plans with medicine costs
  • Location-based doctor search
  • Appointment booking

The API has 37 endpoints across 7 categories:

  • User Management (5 endpoints)
  • Medical Reports Upload (7 endpoints)
  • AI Diagnosis (5 endpoints)
  • Doctor Discovery (7 endpoints)
  • Treatment Plans (3 endpoints)
  • Appointment Booking (7 endpoints)
  • Health Tips (3 endpoints)

What the Agent Handled

Before Agent Mode

I had built all 37 endpoints with basic functionality:

app.get('/api/users/:id', (req, res) => {
    const user = users.find(u => u.id === parseInt(req.params.id));
    
    if (!user) {
        return res.status(404).json({
            success: false,
            message: 'User not found'
        });
    }
    res.json({ success: true, data: user });
});

My error handling was:

  • Inconsistent across endpoints
  • Missing detailed error messages
  • No validation error details
  • Generic 500 errors
  • No input validation messages
  • Missing error codes

After Running the Prompt

The Postman Agent Mode automatically:

  1. Added Comprehensive 4xx Errors for all 37 endpoints:

    • 400 Bad Request - Missing required fields with field-specific messages
    • 401 Unauthorized - Authentication errors (for future auth implementation)
    • 403 Forbidden - Permission denied scenarios
    • 404 Not Found - Resource not found with specific resource type
    • 409 Conflict - Duplicate resource conflicts
    • 422 Unprocessable Entity - Validation failures with details
  2. Added Detailed 5xx Errors:

    • 500 Internal Server Error - With proper error logging
    • 503 Service Unavailable - For database connection issues
  3. Standardized Error Response Format:

{
  "success": false,
  "error": {
    "code": "RESOURCE_NOT_FOUND",
    "message": "User with ID 999 not found",
    "statusCode": 404,
    "timestamp": "2026-01-29T10:30:00.000Z",
    "path": "/api/users/999"
  }
}
  4. Added Validation Error Details:
{
  "success": false,
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid input data",
    "statusCode": 400,
    "details": [
      {
        "field": "email",
        "message": "Invalid email format"
      },
      {
        "field": "phone",
        "message": "Phone number must be 10 digits"
      }
    ]
  }
}
  5. Enhanced Specific Endpoints:

    User Registration (POST /api/users):

    • Added: Missing email validation
    • Added: Duplicate email check (409 Conflict)
    • Added: Invalid phone format validation
    • Added: Age range validation

    Medical Report Upload (POST /api/reports):

    • Added: File size limit errors
    • Added: Unsupported file type errors (422)
    • Added: Missing userId validation
    • Added: Invalid reportType validation

    Doctor Search (GET /api/doctors):

    • Added: Invalid query parameter errors
    • Added: Invalid maxFee format validation
    • Added: Location not supported errors

    Appointment Booking (POST /api/appointments):

    • Added: Date validation (past date errors)
    • Added: Time slot unavailable (409 Conflict)
    • Added: Doctor not available errors
    • Added: Double booking prevention
  6. Added Error Examples to each endpoint’s documentation with:

    • Sample error request
    • Expected error response
    • HTTP status code
    • Error code reference
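The standardized envelope shown earlier can come from one small helper that every handler calls. A minimal sketch (the helper name and shape are mine, not the agent's generated code; field names follow the example response):

```javascript
// Sketch: build the standardized error envelope used across all endpoints.
function buildError(code, message, statusCode, path) {
  return {
    success: false,
    error: {
      code,
      message,
      statusCode,
      timestamp: new Date().toISOString(),
      path,
    },
  };
}

// Matches the "user not found" example from the documentation
const payload = buildError(
  'RESOURCE_NOT_FOUND',
  'User with ID 999 not found',
  404,
  '/api/users/999'
);
console.log(payload.error.code, payload.error.statusCode);
```

Centralizing the envelope in one function is what keeps the format consistent across 37 endpoints; each handler only supplies the code, message, and status.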

What I Didn’t Have to Do Anymore

Manual Work Eliminated

Before Agent Mode (Manual Process):

  1. Identify all possible error scenarios for 37 endpoints (2-3 hours)
  2. Write error handling code for each scenario (4-5 hours)
  3. Create a consistent error response format (1 hour)
  4. Add validation logic for each field (3-4 hours)
  5. Document error responses in Postman (2-3 hours)
  6. Test each error scenario manually (3-4 hours)
  7. Update error messages for consistency (1-2 hours)

Total Manual Time: ~16-22 hours

With Agent Mode: 15 minutes

  • Type the prompt
  • Agent analyzes all 37 endpoints
  • Agent adds comprehensive error handling
  • Agent updates documentation
  • Ready to test!

Why This Matters for Medicotest

Medicotest will serve vulnerable populations who need:

  • Clear error messages in understandable language
  • Guidance when upload fails (file size, format)
  • Helpful feedback for form validation
  • Confidence that their data is being handled properly

Without Proper Errors:

User uploads 15MB X-ray image
Response: "Error 500"
Result: User confused, thinks platform is broken, gives up

With Agent Mode-Enhanced Errors:

User uploads 15MB X-ray image
Response: {
  "error": {
    "code": "FILE_TOO_LARGE",
    "message": "Image size exceeds 10MB limit. Please compress your image.",
    "statusCode": 413,
    "details": {
      "maxSize": "10MB",
      "receivedSize": "15MB",
      "suggestion": "Try using image compression tools or reduce image quality"
    }
  }
}

Result: User understands the issue, compresses the image, and successfully uploads


Final Thoughts

As a solo developer building Medicotest to help people access affordable healthcare, Agent Mode has been a game-changer.

Agent Mode didn’t just save time - it elevated the quality of my API to enterprise standards. For a healthcare platform where clear communication can literally help people get the care they need, this is invaluable.

3 Likes

Problem

API contract drift is one of the most expensive and silent failures in production.

Teams usually:

  • write tests manually

  • test environments separately

  • compare responses with specs by hand

  • update OpenAPI files manually

  • wire CI/CD themselves

This is slow, error-prone, and doesn’t scale.

What I Built

A Self-Healing API Contract Monitoring workflow using Postman Agent Mode.

This goes beyond basic test generation.
The agent acts as a contract auditor + repair advisor, not just a test runner.

It:

  • detects contract drift across environments

  • classifies severity

  • suggests exact OpenAPI schema fixes

  • generates validation scripts

  • outputs CI/CD-ready workflows

All with minimal human input.

Prompt Used (Chained Agent Mode Workflow)

Base prompt:

“Generate tests for each API in this collection and validate response schemas”

Extended through a chained Agent Mode workflow to:

  • run the full collection

  • compare responses against the OpenAPI contract

  • detect schema + performance drift

  • compare multiple environments

  • generate spec patches, validation scripts, documentation, and CI workflows

What I Ran It On

Public REST API: JSONPlaceholder
Base URL: https://jsonplaceholder.typicode.com

Setup included:

  • 8 requests

    • 5 standard endpoints

    • 3 edge-case / negative endpoints

  • 3 simulated environments

    • Dev

    • Staging

    • Production

Environments were intentionally configured to simulate real Dev/Staging/Prod differences and drift scenarios.


What the Agent Handled Automatically

Collection & Test Generation

  • Created 8 requests

  • Generated 39 schema-aware tests

    • status codes

    • field types

    • array/object validation

    • required vs optional fields

Initial run: 39 / 39 tests passing

Multi-Environment Drift Detection

Agent executed the same collection across environments and compared results.

Detected performance drift automatically:

  • Dev → 142ms

  • Staging → 356ms (2.5× slower)

  • Prod → 148ms
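A drift check like that can be sketched as a latency comparison against a baseline (the 2x threshold here is my assumption, not the agent's actual rule; the numbers are from the run above):

```javascript
// Sketch: flag environments whose latency exceeds the baseline by a factor.
function detectLatencyDrift(baselineMs, envLatencies, factor = 2) {
  return Object.entries(envLatencies)
    .filter(([, ms]) => ms > baselineMs * factor)
    .map(([env, ms]) => ({
      env,
      ms,
      ratio: +(ms / baselineMs).toFixed(1),
    }));
}

// Dev as baseline; Staging is flagged, Prod is within tolerance
console.log(detectLatencyDrift(142, { Dev: 142, Staging: 356, Prod: 148 }));
```

Only Staging crosses the threshold, matching the 2.5x slowdown reported above.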

Edge-Case & Negative Testing

Agent added and evaluated negative scenarios:

  • /users/99999

  • /posts/99999

  • /comments on missing resources

Detected issues and classified severity:

  • HIGH: Returned 200 instead of 404

  • MEDIUM: Undocumented response field

  • LOW: Performance degradation without schema break

Severity rules were inferred from:

  • status code correctness

  • schema compliance

  • backward compatibility impact

OpenAPI Spec Auto-Patch (Key Feature)

Instead of just reporting drift, the agent generated concrete OpenAPI fixes by comparing live responses with the contract.

Example schema patch generated:

admin:
  type: boolean

role:
  type: string
  enum: [user, admin, moderator]

lastLogin:
  type: string
  format: date-time

Production-Ready Validation Scripts

Agent generated real Postman test code, ready to run:

// In the Postman sandbox, `user` is parsed from the response body:
const user = pm.response.json();
const validRoles = ['user', 'admin', 'moderator'];
pm.expect(validRoles).to.include(user.role);

Included:

  • enum validation

  • date-time format checks

  • optional field handling

  • clear failure messages

CI/CD Integration

Agent generated a GitHub Actions workflow that:

  • runs tests across environments

  • fails the build on breaking drift

  • produces a drift report artifact

Ready to drop into existing pipelines.

What I Didn’t Have to Do Anymore

  • Write 39 tests manually
  • Test 3 environments separately
  • Compare responses vs OpenAPI
  • Patch specs by hand
  • Write validation logic
  • Document findings
  • Create CI workflows

Agent Mode orchestrated the workflow end-to-end with minimal human input.

1 Like

The Prompt I ran: Analyze the request body schema and generate aggressive fuzz-testing payloads to identify memory corruption and logic flaws

What I ran it on: My Capstone Project: A High-Performance Payment Gateway built in C++ from scratch using raw sockets. I built this to handle 10,000 requests/sec for my thesis, but because I’m manually parsing JSON and managing memory with malloc/free, the server is incredibly fragile. I was terrified that during my final live demo next week, one bad input would cause a Segmentation Fault in front of the external examiners.

What the agent handled: I asked it to act as a “Black Hat Hacker” to break my server. It didn’t just send random text; it sent calculated attacks that I never thought to write:

  1. The “Stack Smash”: It generated a JSON object nested 50 layers deep ({{{{...}}}}). My recursive C++ parser ran out of stack memory and crashed immediately. (I fixed this by adding a depth limit).
  2. The “Integer Wrap” Heist: It sent a transaction amount of 2147483648 (max 32-bit int + 1). My C++ code overflowed this to a negative number (-2147483648), which would have allowed a user to credit their account instead of paying.
  3. Buffer Overflow: It sent a 5MB “User-Agent” string full of 0xA (newline) characters. My hardcoded 4KB header buffer couldn’t handle it, and the server Segfaulted.

What I didn’t have to do anymore: I didn’t have to write hundreds of curl scripts or learn advanced fuzzing tools like AFL++ overnight. The result: the agent found 3 critical exploits, I patched them, and now my server handles the load without flinching.
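Two of those payloads are easy to reproduce. An illustrative JavaScript sketch (not the agent's exact payloads):

```javascript
// Payload 1: a JSON object nested `depth` levels deep, the kind of input
// that exhausts a recursive parser's stack.
function nestedJson(depth) {
  let value = 0;
  for (let i = 0; i < depth; i++) value = { a: value };
  return JSON.stringify(value);
}

const stackSmash = nestedJson(50);

// Payload 2: INT32_MAX + 1. A C++ parser storing this in a signed 32-bit
// int wraps it to -2147483648.
const integerWrap = { amount: 2147483648 };

console.log(stackSmash.startsWith('{"a":{"a":'));
console.log(integerWrap.amount | 0); // JS bitwise ops coerce to int32, showing the wrap
```

The `| 0` trick mirrors the 32-bit truncation the C++ server performed, which is why a "payment" could come out negative.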

1 Like

Prompt I Ran

Create complete API endpoint documentation for all the workout management endpoints in my collection, including request/response examples, authentication requirements, and error codes.


What I Ran It On

Collection: My Collection — a Fitness Workout Tracker API

The collection contains ~14 endpoints, including:

  • POST

    • Register

    • Login

    • Create workout

  • GET

    • Get users

    • Get predefined workouts

    • Get reminders

  • PATCH

    • Mark workout items done/undone

    • Edit workout

  • DELETE

    • Delete workouts
  • Additional endpoints for:

    • Workout reminders

    • Paystack dummy payment integration

This is a real API I’m working on and actively iterating with teammates.


What the Agent Handled

Agent Mode automatically generated complete, structured API documentation for all 14 endpoints, including:

  • HTTP methods and full endpoint URLs

  • Request body structures with field names and data types

  • Authentication requirements, inferred from existing requests

  • Response formats, sample payloads, and status codes

  • Clear relationships between endpoints
    (e.g., “Create Workout” → “Get Workouts” → “Edit Workout”)

The agent also intelligently grouped endpoints by feature, making the docs much easier to navigate:

  • User Management (Register, Login)

  • Workout Management

  • Reminder System

  • Payments


Extra Details the Agent Caught (That I Would’ve Missed)

  • Form-data structures for file uploads

  • Query parameters and filtering options

  • Common and edge-case error handling scenarios

  • Notes and usage tips based on how the APIs are actually called

These details added a lot of clarity and saved multiple review cycles.


What I Didn’t Have to Do Anymore

  • :cross_mark: Manually write endpoint documentation
    → Saved ~3–4 hours of writing (huge relief :sweat_smile: — docs are always the last thing my teammates ask for)

  • :cross_mark: Copy-paste request/response examples
    → Agent extracted everything from real API calls

  • :cross_mark: Maintain separate documentation files
    → Docs stay in sync as the API evolves

  • :cross_mark: Guess required vs optional fields
    → Agent analyzed actual requests to infer this correctly

  • :cross_mark: Worry about formatting
    → Clean, professional documentation generated instantly


Overall Impact

Agent Mode turned documentation from a chore into a one-prompt task.
It produced accurate, well-structured docs that my teammates could immediately use, while saving hours of manual work and follow-up explanations.

Prompt used:

“Generate a Terraform module that provisions an S3 bucket and aligns it with the API defined in the spec.”


What I ran it on

I started with a small OpenAPI definition in Postman for a Media Storage API that wraps S3, not just a generic bucket:

  • PUT /upload/{key} to upload media objects, with Content-Type restricted to image/jpeg, image/png, or video/mp4.

  • GET /{key} and DELETE /{key} to read and delete objects from a given bucket.

  • The server URL and bucket parameter encode a naming convention like media-[a-z0-9]+-(dev|staging|prod), so the bucket name pattern is part of the contract, not a comment.

This gave Agent Mode a clear description of how the bucket should behave and how it would be used by clients.
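That naming convention drops straight into an executable guard. A sketch (anchors added as my assumption so partial matches are rejected):

```javascript
// The bucket naming contract from the spec: media-[a-z0-9]+-(dev|staging|prod)
const BUCKET_NAME = /^media-[a-z0-9]+-(dev|staging|prod)$/;

console.log(BUCKET_NAME.test('media-uploads01-prod'));
console.log(BUCKET_NAME.test('media-Uploads-prod'));  // uppercase not allowed
console.log(BUCKET_NAME.test('assets-uploads-prod')); // wrong prefix
```

The same pattern is what the generated `bucket_name` variable validation enforces on the Terraform side.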


What the agent built for me

From that API spec and a single prompt, Agent Mode produced a production-ready S3 Terraform module, not just a single resource:

  • main.tf

    • aws_s3_bucket with force_destroy, standard tags (Name, ManagedBy = "Terraform") plus custom tags.

    • aws_s3_bucket_versioning controlled by a versioning_enabled variable.

    • aws_s3_bucket_server_side_encryption_configuration that supports AES256 by default and optionally KMS with kms_master_key_id and bucket_key_enabled.

    • aws_s3_bucket_public_access_block and aws_s3_bucket_ownership_controls wired to variables and defaulting to secure settings.

    • aws_s3_bucket_lifecycle_configuration built from a high-level lifecycle_rules variable using nested dynamic blocks for transitions, expirations, and noncurrent versions.

    • Optional aws_s3_bucket_logging driven by logging_enabled, logging_target_bucket, and logging_target_prefix.

  • variables.tf

    • A bucket_name variable with length and regex validation that matches AWS naming rules, turning my spec’s naming convention into real guardrails.

    • Switches for versioning, force destroy, encryption mode, KMS key, public access flags, ownership mode, lifecycle rules, and logging, so teams can tune behavior without touching HCL

  • outputs.tf

    • All the identifiers other stacks need: bucket ID, ARN, domain names, region, hosted zone ID.

    • Outputs for versioning_enabled, encryption_algorithm, and kms_key_id, plus helper ARN patterns for bucket- and object-level IAM policies.

  • README.md

    • A clear module description, input/output tables, and a copy-pasteable example:

      module "s3_bucket" {
        source             = "./terraform-aws-s3-bucket"
        bucket_name        = "my-unique-bucket-name"
        versioning_enabled = true
      
        tags = {
          Environment = "production"
        }
      }
      


How I tested it with Terraform

As a follow-up, I asked Agent Mode to “Test the module with terraform init and terraform plan?”, and it generated a test.tf file to exercise the module in isolation:

  • It configured the AWS provider with skip_credentials_validation, skip_metadata_api_check, and skip_requesting_account_id, plus mock access keys, so I could run plan locally without real AWS credentials.

  • It added a random_string resource to generate a unique suffix for the bucket name.

  • It instantiated my module as module "test_s3_bucket" with:

    bucket_name = "test-bucket-${random_string.bucket_suffix.result}"

    tags = {
      Environment = "test"
      Project     = "terraform-module-test"
      ManagedBy   = "terraform"
    }
    
  • It exposed test_bucket_id and test_bucket_arn outputs for quick verification.

With that file in place, I ran terraform init and terraform plan and validated that the generated module plans cleanly end-to-end, without needing to hand-write any test harness.


What changed in my workflow

Normally, taking an API spec for S3-backed media storage all the way to a reusable, tested Terraform module means hours of work: designing the bucket config, encoding encryption and public access best practices, wiring lifecycle rules and logging, exposing useful outputs, writing documentation, and then building a separate test configuration to run plan.

With Agent Mode in Postman, I stayed in a single workspace: I defined the API once, used one prompt to get a production-grade S3 module, and a follow-up prompt to get a ready-to-run test.tf for terraform init and terraform plan. My role shifted from writing Terraform and test harnesses to reviewing them - which is exactly the kind of workflow upgrade I want from an AI agent.

The Exact Prompt
Agent Mode Prompt: Meaning & Intent Integrity Analyser

You are an API Meaning & Intent Integrity Analyst.

Your task is NOT to test correctness, availability, or schema validity.

Your goal is to determine whether each API endpoint still behaves in a way
that matches the original intent encoded in its name, version, path, and schema.

For each request:

  1. Infer the original intent from names, paths, versions, parameters, and tests.
  2. Observe actual behavior from existing runs (status, response type, errors, latency).
  3. Identify meaning drift where behavior no longer matches intent.
  4. Classify drift as Semantic, Contractual, Behavioral, or Temporal.
  5. Assign drift severity (Low / Medium / High).
  6. Identify silent risks that may emerge in the next 3–6 months.

Output:

  • A Meaning Drift Heatmap (visual table)
  • Intent vs Reality comparison
  • Drift risk timeline with future impact

Constraints:

  • Do NOT normalize errors as valid domain behavior.

  • Do NOT rewrite APIs or add tests.

  • Focus on preserving conceptual truth, not passing tests.

The Problem Being Solved

APIs can be “working” and still be wrong.
Tests pass or fail, but they don’t tell you when an endpoint has slowly stopped doing what developers think it does. That’s how bad assumptions, brittle workarounds, and long-term confusion creep in.


What the Agent Handled Automatically

The agent:

  • Inferred the original intent of each endpoint
  • Compared it with real runtime behavior
  • Identified semantic, contractual, and behavioral drift
  • Summarized the risk visually (tables/heatmap)

All without sending new requests or manual analysis.


Tangible Productivity Gains

  • Saved hours of manual “is this expected or broken?” reasoning
  • Prevented normalizing auth errors or invalid paths as real API behavior
  • Gave early clarity on risks that usually show up months later

Agent Mode didn’t just help me fix APIs; it helped me detect when APIs had quietly stopped meaning what they were designed to mean.

Problem:

I had always wished for an “Open in Postman” button for FastAPI applications. The main reasons: the interactive UI, collection-based organization, saved results, and comprehensive documentation support.

Prompt:

If I provide any reachable FastAPI endpoint, you have to help me create a collection with all the different types of requests it supports. Also, need proper documentation and everything for faster progress in hackathons. Use: http://localhost:8000

Then it came back saying that, as an agent, it couldn’t access the docs, and asked me to share the output of http://localhost:8000/openapi.json:

After which, whatever it did was amazing:

It created a structured collection with proper documentation for the entire application and every endpoint, including example documentation. Next steps are straightforward: I can create API tests and validate how my code handles them. I can also share this collection publicly, allowing others to easily navigate the APIs without any intervention or support.

:locked: Submissions are closed

Thanks to everyone who shared their Agent Mode workflows this week. We’re reviewing entries now. Winner announced soon.

5 Likes

Challenge winner :trophy:

Congrats to @aryanGarg for showing how Agent Mode handled adding consistent error handling and structured responses across 37 API endpoints in minutes instead of hours.

See the winning entry here.

2 Likes