The weekly community challenge is live! One winner gets $150 cash.
The State of the API Report found most teams aren't designing with AI agents in mind. So, what's the biggest thing holding your team back from getting started with machine-readable APIs?
Maybe it's a legacy issue in your stack. Maybe it's org buy-in. Or maybe the concept just feels early and undefined. Whatever the blocker, we want to hear how you're thinking about this shift.
Prize: Winner gets $150 cash (via Visa gift card)
Deadline: Thursday, Oct 16 at 10 am BST / 2:30 pm IST
How to enter:
Reply to this thread within the 24-hour window with your thoughts.
Short, sharp takes are welcome, but feel free to go deeper if you've got stories from the trenches.
This contest is open worldwide to participants 18+. By entering, you grant Postman the right to feature your submission on our website, blog, and social channels.
Honestly, I think the biggest blocker for many teams is legacy architecture: most APIs were built for developers, not AI agents, so they lack clear machine-readable contracts like OpenAPI specs or consistent schema definitions.
As a backend developer, I've seen how teams focus on delivering features fast, so proper API design and documentation get treated as a "nice to have." That mindset makes it hard to adopt machine-readable standards later.
In my country, the concept of machine-readable APIs isn't well known yet. Teams are open to innovation, but this practice is still new and not widely discussed. Many companies already have established systems, so rewriting everything to make APIs machine-readable can seem like too much effort for too little short-term benefit. Some organizations are also heavily dependent on external services, which limits how much they can change their API design. It's not that people aren't ready for the challenge; it's that awareness, flexibility, and practical incentives are still catching up.
From my perspective, the biggest thing holding us back from fully embracing machine-readable APIs is the transition from human-centric design to AI-centric design.
Most of our existing systems were built for developers: people who can read the docs, understand patterns, and handle a little ambiguity. But AI agents? They don't guess or improvise; they need structured clarity. Every endpoint, every schema, and every response has to be predictable and machine-interpretable.
The real challenge isn't just updating code; it's changing how we think about APIs. We're shifting from writing for humans to communicating with intelligent systems. That's a huge mindset shift, especially when legacy stacks and inconsistent documentation slow things down.
Still, I'm genuinely excited about this change. It's not just another tech trend; it's the start of a new era where APIs become the language that connects humans and AI.
I think the biggest blocker isn't legacy tech or org buy-in. It's that we genuinely don't know what "agent-ready" actually means yet.
When an AI agent hits your API at 3am and something breaks, what does it need? A retry strategy? Different error codes? More structured metadata? We're making educated guesses at best.
And here's the uncomfortable part: leadership keeps asking "why?" Agent traffic is basically zero compared to regular users. Hard to justify a rewrite for hypothetical robots that might use our API differently... someday.
So most teams (mine included) are in wait-and-see mode. We know agents are coming. We just don't know how they'll actually use our APIs once they're here.
It's not that the concept feels early. It's that nobody wants to be the first one to get it wrong.
We built APIs for people, not for AI.
Agents can't read between the lines, but our docs and code are full of unspoken rules that only we understand.
Until we turn those hidden rules into clear, agent-friendly instructions, agents will keep hitting the same wall.
Older APIs weren't built with agent readability in mind, so changing them feels risky.
Without standardized schemas, error codes, or response formats, agents struggle to interact confidently.
So picture this: half the eng team thinks our APIs are fine as-is. "They're RESTful, they've got OpenAPI specs, what more do you want?" The other half has watched Claude or GPT try to use our endpoints and seen it confidently make the same wrong assumptions three times in a row.
Here's the thing that keeps me up at night: our API technically works for agents. It just requires the same tribal knowledge we've spent years drilling into new developers. Like how you need to poll /status after POSTing to /process, because we return 200 immediately but the job runs async. Or how pagination cursors expire after 5 minutes (it's in the docs! page 47!). An AI reads our OpenAPI spec and thinks it understands everything. It does not.
But when I bring this up in planning, I get blank stares. "So... we need better docs?" No, we need APIs that don't require docs at all for the happy path. We need endpoints that tell you what to do next, not just what they did.
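A sketch of what that could look like, reusing the /process and /status endpoints mentioned above (the payload shape, field names, and 202-style semantics are assumptions for illustration, not a description of any real API):

```python
# Hypothetical async "process" endpoint whose response spells out the tribal
# knowledge (poll /status, 5-minute cursor TTL) instead of hiding it in the docs.
from datetime import datetime, timedelta, timezone


def start_process(job_id: str) -> dict:
    """Return a 202-style body that tells the caller, human or agent, what to do next."""
    cursor_expires = datetime.now(timezone.utc) + timedelta(minutes=5)
    return {
        "job_id": job_id,
        "status": "accepted",              # accepted != done; the job runs async
        "next": {                          # explicit next action instead of folklore
            "action": "poll",
            "href": f"/status/{job_id}",
            "retry_after_seconds": 10,
        },
        "pagination": {
            "cursor_expires_at": cursor_expires.isoformat(),  # the "page 47" rule, machine-readable
        },
    }


if __name__ == "__main__":
    print(start_process("job-123"))
```

An agent that only sees this response still learns where to go next and how long its cursor is valid, with no tribal knowledge required.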
The real blocker isn't technical. It's that I can't point to a competitor who's eating our lunch because their APIs are "agent-native." Until that happens, this stays in my ideas.txt file gathering dust.
For me, the biggest thing holding back machine-readable APIs is trust.
Too often, the API docs say one thing while the live API does another. After a few of those moments, it's hard to believe in any spec again.
I once tried fixing it with OpenAPI. It looked clean at first, but the backend moved faster than the docs could keep up. I gave up and went back to manual testing: slower, but at least I knew what was real.
Then I wrote a small script that compared the spec with real responses and pinged me whenever something didn't match. That tiny check did more than any meeting or new tool. It helped me trust the API again, not because it was perfect, but because it stayed honest.
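A minimal sketch of how such a spec-vs-reality check could look in Python (assuming one JSON Schema file per endpoint; the URLs and schema paths below are placeholders, not the original script):

```python
# Compare live responses against the declared schema and flag drift.
import json

import requests
from jsonschema import ValidationError, validate

# Endpoint -> path of the JSON Schema it is supposed to honour (placeholders).
CHECKS = {
    "https://api.example.com/v1/orders": "schemas/orders.json",
    "https://api.example.com/v1/users": "schemas/users.json",
}


def run_checks() -> None:
    for url, schema_path in CHECKS.items():
        with open(schema_path) as fh:
            schema = json.load(fh)
        live = requests.get(url, timeout=10).json()
        try:
            validate(instance=live, schema=schema)
            print(f"OK    {url}")
        except ValidationError as err:
            # The "ping": in practice this could post to Slack or fail a CI job.
            print(f"DRIFT {url}: {err.message}")


if __name__ == "__main__":
    run_checks()
```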
Machine-readable APIs don't fail because they're complicated. They fail when people stop trusting them. Once they prove they can stay truthful on their own, teams like mine won't need convincing; we'll be the first to adopt them.
I'm new to Postman, API design, and MCP servers, so apologies if this is already possible or covered by the AI Agent builder, but it seems like API testing for AI agent usage could be improved.
It would be helpful if you could use Postman to reliably test whether your API is machine-readable and AI Agent friendly, while also checking for vulnerabilities and new risks caused by AI Agents:
Check APIs for predictable schemas, typed errors, and clear behavioral rules
Test endpoints with different AI Agent roles to simulate bad actors and prevent misuse
While parts of this might be possible with existing templates and scripts, having a dedicated template or integration with the AI Agent builder could make it much easier for teams to validate their APIs are agent-ready.
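As a rough illustration of the second check in plain Python (a Postman collection or test script could express the same thing); the URL, roles, tokens, and error fields are invented for the sketch:

```python
# Probe one endpoint with tokens for different hypothetical agent roles and
# check that refusals come back as typed, machine-readable errors.
import requests

ROLES = {
    "read_only_agent": "token-read",
    "write_agent": "token-write",
    "unknown_agent": "token-bogus",   # simulated bad actor
}


def probe(url: str) -> None:
    for role, token in ROLES.items():
        resp = requests.delete(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
        body = resp.json()
        if resp.status_code >= 400:
            # A typed refusal: an error code plus a hint the agent can act on.
            assert "error_code" in body and "hint" in body, f"{role}: untyped error {body}"
        print(role, resp.status_code, body.get("error_code"))


if __name__ == "__main__":
    probe("https://api.example.com/v1/records/42")
```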
Actually, the biggest thing that's holding me back is the transition from building APIs for humans to building them for AI. We often build APIs for people who can read documentation, figure things out, and make some guesses. AI agents don't do that; they need interfaces that are clear, structured, and predictable. Every endpoint, every response needs to make sense to a machine. The challenge isn't just changing the code. It's changing the way we think. We're learning to write APIs that talk not just to humans, but to AI. With legacy systems and messy documentation, it's a slow process. But honestly, it's exciting. It feels like the beginning of a new era where APIs become a true bridge between humans and AI.
Our blocker isn't docs, it's undo.
If an agent makes a bad call, it can repeat it fast. We won't give write access until we have real guardrails.
Preview first: every write supports a dry-run that shows what will change and the cost.
Easy to reverse: idempotency + cancel/rollback built into the API (not tribal knowledge).
Machine-friendly errors: structured errors with clear hints: retry, back off, or stop.
Hard limits: per-action budgets and scopes so "exploration" can't harm prod.
Shadow mode: run with trace IDs and observe before enabling writes.
Hidden pain: feature flags. Different users see slightly different shapes. Humans cope but agents don't.
So we run spec-vs-live checks in CI to keep responses stable.
A small idea I tried was an Action Manifest next to the OpenAPI spec that lists what each action does, when it's allowed, and how to undo it.
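One possible shape for such a manifest, sketched in Python (every field name here is invented for illustration; this is not an existing Postman or OpenAPI feature):

```python
# Hypothetical Action Manifest entry kept next to the OpenAPI file: per action it
# records what it does, when it is allowed, how to undo it, and which
# machine-friendly hints its errors may carry.
import json

ACTION_MANIFEST = {
    "create_refund": {
        "description": "Refund a captured payment",
        "allowed_when": ["payment.status == 'captured'", "amount <= remaining_balance"],
        "dry_run": "POST /refunds?dry_run=true",          # preview before committing
        "undo": "POST /refunds/{refund_id}/cancel",       # reversal is part of the contract
        "idempotency": "Idempotency-Key header required",
        "on_error_hints": ["retry", "back_off", "stop"],  # structured, agent-actionable
        "budget": {"max_per_hour": 20},
    }
}

if __name__ == "__main__":
    print(json.dumps(ACTION_MANIFEST, indent=2))
```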
Make APIs safe to try, and theyâll be ready for agents.
For me, the biggest thing holding teams back from machine-readable APIs isn't documentation or tooling; it's the lack of continuous synchronization between human intent and machine interpretation.
APIs often evolve faster than their specs. Developers update endpoints, rename fields, or tweak logic, but the machine-readable layer (like OpenAPI or JSON Schema) lags behind. That silent drift kills trust, both for humans and AI agents.
What I'd love to see is a "Spec-as-a-Service" layer:
a lightweight middleware that sits between the codebase and the docs, automatically validating that every live endpoint still matches its declared schema.
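One way that could look, sketched as a Flask after_request hook (an assumption; it could just as well be an ASGI middleware or a sidecar, and the route, schema, and logging choices below are placeholders):

```python
# Every response is validated against the schema declared for its route, so
# drift between code and spec surfaces immediately instead of rotting silently.
from flask import Flask, jsonify, request
from jsonschema import ValidationError, validate

app = Flask(__name__)

# Hypothetical "declared" schemas keyed by route; in practice these would be
# generated from the OpenAPI document rather than written by hand.
DECLARED = {
    "/users/<int:user_id>": {
        "type": "object",
        "required": ["id", "name"],
        "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    }
}


@app.get("/users/<int:user_id>")
def get_user(user_id: int):
    return jsonify({"id": user_id, "name": "Ada"})


@app.after_request
def check_contract(response):
    schema = DECLARED.get(str(request.url_rule)) if request.url_rule else None
    body = response.get_json(silent=True)
    if schema and body is not None:
        try:
            validate(instance=body, schema=schema)
        except ValidationError as err:
            # Surface the drift loudly; a stricter setup could fail the request.
            app.logger.error("Spec drift on %s: %s", request.path, err.message)
    return response
```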
The biggest blocker for our team is definitely legacy systems. Our existing APIs do their job just fine for current apps, so without a clear internal AI project driving change, it's hard to justify the big engineering lift to modernize them for a future that still feels a bit early.
A full rewrite isn't realistic, so we're taking a phased approach: no new legacy, and we'll slowly "strangle" high-value endpoints with modern, machine-readable versions. It's an evolution, not a revolution, especially for teams carrying a lot of legacy baggage.
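In case it helps, here is a toy sketch of that kind of phased routing (the hostnames and path list are placeholders; a real setup would more likely live in an API gateway or reverse-proxy config):

```python
# A thin routing rule: modernized, machine-readable endpoints go to the new
# service, everything else falls through to the legacy one.
MODERNIZED_PREFIXES = ("/v2/orders", "/v2/inventory")  # high-value endpoints first

LEGACY_BASE = "https://legacy.internal.example.com"
MODERN_BASE = "https://agent-ready.internal.example.com"


def route(path: str) -> str:
    """Pick the upstream for a request path; new paths strangle the old ones over time."""
    if path.startswith(MODERNIZED_PREFIXES):
        return MODERN_BASE + path
    return LEGACY_BASE + path


if __name__ == "__main__":
    print(route("/v2/orders/42"))  # -> modern, machine-readable service
    print(route("/v1/reports"))    # -> untouched legacy service
```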
Designing APIs for humans, written by humans rather than AI, is one of the best ways to connect with your users. I believe one of the reasons AI agents won't be sufficient for this task is the bias and hallucination that come with creating rate-limiting APIs and other features like pagination without errors.
A lot of teams have ended up with systems so fragmented that even the devs who built them can't figure out how to expose clean, meaningful interfaces for AI agents. Instead of one solid endpoint that wraps the business logic, they've got a handful of microservices that all need to be orchestrated just right. Turning that mess into something agent-friendly takes way more architectural brainpower than most teams can handle.
The problem isn't old tech, missing docs, or programming languages; it's the slow loss of consistency in how systems behave and communicate. As schemas drift and APIs change without discipline, trust breaks down and automation stops working. Most companies don't notice it until their integrations start failing.
In a Reddit thread with 60+ upvotes from data engineers, developers report that "frequent disruptions in pipelines" occur because "the schema we initially set up doesn't align with the updated schema" in incoming API responses. One engineer noted: "Schema changes from external APIs is one of the leading problems."
While teams focus on adding new features, they lack robust monitoring or enforcement of contract compliance over time. This silent degradation of API integrity undermines trust and breaks AI agent automation. Most companies don't even realize it is happening until it's too late.
Honestly, the biggest thing holding most teams back isn't technology, it's mindset.
For years, APIs were built for humans: for developers to read, test, and integrate manually. But machine-readable APIs require a shift in culture: designing not just for people, but for agents that consume, interpret, and act autonomously. That demands a new design philosophy: clarity over cleverness, context over control, and structure over shortcuts.
Our challenge isn't a missing tool; it's a missing "mental model." Teams still see documentation as an afterthought, not as a living interface contract. Until we design APIs with intentional semantics, consistent metadata, and predictable behaviors, AI agents will struggle to reason about them, no matter how advanced the LLMs become.
At EcoAgric Tech, we're tackling this by embedding schema generation and semantic tags early in our API lifecycle. The goal: make our endpoints "self-describing" enough for both humans and agents to trust them without manual handholding.
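As a loose illustration of that idea (assuming Pydantic v2 for schema generation; the model, fields, and descriptions are invented, not EcoAgric Tech's actual schema):

```python
# Field-level descriptions act as semantic tags and travel into the generated
# JSON Schema, so the endpoint documents itself for humans and agents alike.
import json

from pydantic import BaseModel, Field


class SoilReading(BaseModel):
    """One soil-moisture sample from a field sensor."""

    sensor_id: str = Field(description="Stable identifier of the field sensor")
    moisture_pct: float = Field(ge=0, le=100, description="Volumetric soil moisture, in percent")
    recorded_at: str = Field(description="UTC timestamp in ISO 8601 format")


if __name__ == "__main__":
    # The generated schema is the machine-readable contract agents consume.
    print(json.dumps(SoilReading.model_json_schema(), indent=2))
```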
Machine-readable APIs aren't the future; they're the next frontier. The real blocker is convincing teams that readability for machines is the new usability for humans.
Crispin Oigara | Data Scientist | EcoAgric Tech | Building sustainable AI systems for agriculture
I think our biggest challenge wasn't technical, it was ownership. Three teams all claimed to be the "source of truth," but no one wanted to be on the hook when things broke. When we built a new scheduling API, everything was ready (code, infrastructure, and documentation), but it sat unused for months because no one would take responsibility. The stalemate only ended when our CTO formally assigned a "Data Steward." Once ownership was clear, we shipped in three weeks.
The lesson? APIs don't fail because of REST vs. SOAP debates; they fail because no one owns them. Technology is easy. Accountability isn't.
While legacy code and the human-centric design mindset are significant hurdles, I believe the deepest blocker is more subtle: the "Who Pays for the Sidewalk?" problem.
Right now, machine-readable APIs are treated like a public sidewalk. Everyone agrees it would be great to have a clean, well-defined path for AIs to walk on, but no single team has a direct, immediate incentive to build and maintain it.
Think about the incentives in most organizations:
The Backend Developer: Their primary KPI is shipping a functional endpoint for a specific feature by the end of the sprint. Adding rich, semantic metadata and perfect OpenAPI specs is seen as "extra work" that slows down their core task.
The Product Manager: They are measured on user-facing feature adoption and business metrics. The "AI-readiness" of the underlying API is an abstract, long-term benefit that doesn't directly move their numbers this quarter.
The AI/ML Team: They are the consumers who desperately need these clean APIs, but they are often downstream and don't have the mandate or resources to force changes in other teams' roadmaps.
This creates a classic "Tragedy of the API Commons." The entire organization would benefit massively in the long run from a coherent, machine-readable API ecosystem. But in the short term, every individual team is incentivized to take shortcuts, to "cut across the grass," to meet their immediate deadlines.
Until we change how we measure success, by making API quality and machine-readability a first-class, shared KPI for product and engineering teams alike, we'll be stuck in a cycle of treating it as a "nice-to-have" rather than the critical infrastructure it's becoming. The problem isn't the technology; it's the organizational and economic framework we build around it.