🤖 What’s Holding Your Team Back from Machine-Readable APIs? – $150 | 24 Hours Only

The weekly community challenge is live! One winner gets $150 cash.

The State of the API Report found most teams aren’t designing with AI agents in mind. So, what’s the biggest thing holding your team back from getting started with machine-readable APIs?

Maybe it’s a legacy issue in your stack. Maybe it’s org buy-in. Or maybe the concept just feels early and undefined. Whatever the blocker, we want to hear how you’re thinking about this shift.

Prize: Winner gets $150 cash (via Visa gift card)
Deadline: Thursday, Oct 16 at 10 am BST / 2:30 pm IST

How to enter:
Reply to this thread within the 24-hour window with your thoughts.
Short, sharp takes are welcome but feel free to go deeper if you’ve got stories from the trenches.

This contest is open worldwide to participants 18+. By entering, you grant Postman the right to feature your submission on our website, blog, and social channels.

Winner announced Friday. Go! 🚀

7 Likes

Honestly, I think the biggest blocker for many teams is legacy architecture: most APIs are built for developers, not AI agents, and they lack clear machine-readable contracts like OpenAPI specs or consistent schema definitions.

As a backend developer, I’ve seen how teams focus on delivering features fast, while proper API design and documentation get treated as a “nice-to-have.” That mindset makes it hard to adopt machine-readable standards later.

6 Likes

In my country, the concept of machine-readable APIs isn’t well known yet. Teams are open to innovation, but this practice is still new and not widely discussed. Many companies already have established systems, so rewriting everything to make APIs machine-readable can seem like too much effort for too little short-term benefit. Some organizations are also heavily dependent on external services, which limits how much they can change their API design. It’s not that people aren’t ready for the challenge; it’s that awareness, flexibility, and practical incentives are still catching up.

6 Likes

From my perspective, the biggest thing holding us back from fully embracing machine-readable APIs is the transition from human-centric design to AI-centric design.

Most of our existing systems were built for developers: people who can read the docs, understand patterns, and handle a little ambiguity. But AI agents? They don’t guess or improvise; they need structured clarity. Every endpoint, every schema, and every response has to be predictable and machine-interpretable.

The real challenge isn’t just updating code; it’s changing how we think about APIs. We’re shifting from writing for humans to communicating with intelligent systems. That’s a huge mindset shift, especially when legacy stacks and inconsistent documentation slow things down.

Still, I’m genuinely excited about this change. It’s not just another tech trend; it’s the start of a new era where APIs become the language that connects humans and AI.

3 Likes

I think the biggest blocker isn’t legacy tech or org buy-in. It’s that we genuinely don’t know what “agent-ready” actually means yet.

When an AI agent hits your API at 3am and something breaks, what does it need? A retry strategy? Different error codes? More structured metadata? We’re making educated guesses at best.

And here’s the uncomfortable part: leadership keeps asking “why?” Agent traffic is basically zero compared to regular users. Hard to justify a rewrite for hypothetical robots that might use our API differently… someday.

So most teams (mine included) are in wait-and-see mode. We know agents are coming. We just don’t know how they’ll actually use our APIs once they’re here.

It’s not that the concept feels early. It’s that nobody wants to be the first one to get it wrong.

5 Likes

We built APIs for people, not for AI.
Agents can’t read between the lines, but our docs and code are full of unspoken rules only we understand.
Until we turn those hidden rules into clear, agent-friendly instructions, agents will keep hitting the same wall.
Older APIs weren’t built with agent readability in mind, so adding changes to them feels risky.
Without standardized schemas, error codes, or response formats, agents struggle to interact confidently.

6 Likes

So picture this: half the eng team thinks our APIs are fine as-is. “They’re RESTful, they’ve got OpenAPI specs, what more do you want?” The other half has watched Claude or GPT try to use our endpoints and seen it confidently make the same wrong assumptions three times in a row.

Here’s the thing that keeps me up at night: our API technically works for agents. It just requires the same tribal knowledge we’ve spent years drilling into new developers. Like how you need to poll /status after POST-ing to /process because we return 200 immediately but the job runs async. Or how pagination cursors expire after 5 minutes (it’s in the docs! page 47!). An AI reads our OpenAPI spec and thinks it understands everything. It does not.

But when I bring this up in planning, I get blank stares. “So… we need better docs?” No, we need APIs that don’t require docs at all for the happy path. We need endpoints that tell you what to do next, not just what they did.
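To make that concrete, here’s one possible shape for that async /process response. The field names and routes are invented for illustration, not how our API actually looks today:

```typescript
// Hypothetical response for POST /process: instead of a bare 200,
// the API hands the caller everything it needs to proceed.
interface JobResponse {
  jobId: string;
  status: "queued" | "running" | "done" | "failed";
  links: {
    poll: string;        // where to check progress (no tribal knowledge needed)
    cancel?: string;     // how to back out, if supported
  };
  retryAfterSeconds?: number;  // explicit polling guidance
  cursorExpiresAt?: string;    // make the 5-minute rule visible, not page-47 lore
}

const example: JobResponse = {
  jobId: "abc123",
  status: "queued",
  links: { poll: "/process/abc123/status", cancel: "/process/abc123" },
  retryAfterSeconds: 5,
};
```

An agent (or a new hire) can just follow `links.poll` without ever being told the job runs async.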

The real blocker isn’t technical. It’s that I can’t point to a competitor who’s eating our lunch because their APIs are “agent-native.” Until that happens, this stays in my ideas.txt file gathering dust.

4 Likes

For me, the biggest thing holding back machine-readable APIs is trust.
Too often, the API docs say one thing while the live API does another. After a few of those moments, it’s hard to believe in any spec again.
I once tried fixing it with OpenAPI. It looked clean at first, but the backend moved faster than the docs could keep up. I gave up and went back to manual testing: slower, but at least I knew what was real.
Then I wrote a small script that compared the spec with real responses and pinged me whenever something didn’t match. That tiny check did more than any meeting or new tool. It helped me trust the API again, not because it was perfect, but because it stayed honest.
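Something in this spirit, roughly (a simplified sketch, not the actual script; the endpoint URL, schema path, and the choice of ajv are assumptions):

```typescript
// Compare a live response against the schema the spec promises,
// and complain loudly when they drift apart.
import Ajv from "ajv";
import { readFileSync } from "fs";

const ajv = new Ajv({ allErrors: true });

async function checkEndpoint(url: string, schemaPath: string): Promise<boolean> {
  const schema = JSON.parse(readFileSync(schemaPath, "utf-8"));
  const validate = ajv.compile(schema);

  const res = await fetch(url);
  const body = await res.json();

  if (!validate(body)) {
    console.error(`Spec drift at ${url}:`, validate.errors); // the "ping"
    return false;
  }
  return true;
}

// e.g. run from CI or a cron job:
checkEndpoint("https://api.example.com/orders", "./schemas/orders.json");
```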
Machine-readable APIs don’t fail because they’re complicated. They fail when people stop trusting them. Once they prove they can stay truthful on their own, teams like mine won’t need convincing; we’ll be the first to adopt them.

1 Like

I’m new to Postman, API design, and MCP servers, so apologies if this is already possible or covered by the AI Agent Builder, but it seems like API testing for AI agent usage could be improved.

It would be helpful if you could use Postman to reliably test whether your API is machine-readable and AI-agent friendly, while also checking for vulnerabilities and new risks introduced by AI agents:

  • Check APIs for predictable schemas, typed errors, and clear behavioral rules
  • Test endpoints with different AI Agent roles to simulate bad actors and prevent misuse

While parts of this might be possible with existing templates and scripts, having a dedicated template or integration with the AI Agent builder could make it much easier for teams to validate their APIs are agent-ready.
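For instance, part of the first check could be expressed today as a Postman test script (JavaScript, since that’s the Postman sandbox language); the Content-Type expectation and the `code`/`detail` error shape below are my own assumptions about what “typed errors” might mean:

```javascript
// Rough sketch: agent-readiness checks in a Postman test script.
pm.test("response declares a machine-readable content type", function () {
  pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
});

pm.test("errors are typed, not free text", function () {
  if (pm.response.code >= 400) {
    const body = pm.response.json();
    pm.expect(body).to.have.property("code");    // stable, documented identifier
    pm.expect(body).to.have.property("detail");  // actionable hint for the caller
  }
});
```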

1 Like

Actually, the biggest thing that’s holding me back is the transition from building APIs for humans to building them for AI. We often build APIs for people who can read documentation, figure things out, and make some guesses. AI agents don’t do that. They need to be clear, structured, and predictable. Every endpoint, every response needs to make sense to a machine. The challenge isn’t just changing the code. It’s changing the way we think. We’re learning to write APIs that talk not just to humans, but to AI. With legacy systems and messy documentation, it’s a slow process. But honestly, it’s exciting. It feels like the beginning of a new era where APIs become a true bridge between humans and AI.

Our blocker isn’t docs, it’s undo.
If an agent makes a bad call, it can repeat it fast. We won’t give write access until we have real guardrails.

  • Preview first: every write supports a dry-run that shows what will change and the cost.

  • Easy to reverse: idempotency + cancel/rollback built into the API (not tribal knowledge).

  • Machine-friendly errors: structured errors with explicit hints to retry, back off, or stop (see the sketch after this list).

  • Hard limits: per-action budgets and scopes so “exploration” can’t harm prod.

  • Shadow mode: run with trace IDs and observe before enabling writes.
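A minimal sketch of what the “machine-friendly errors” bullet could look like on the wire; the envelope and field names are invented for illustration:

```typescript
// Hypothetical structured error: the agent is told what to do next
// instead of having to parse prose.
interface AgentError {
  code: "RATE_LIMITED" | "CONFLICT" | "VALIDATION_FAILED"; // stable identifiers
  detail: string;                                          // readable context
  action: "retry" | "back_off" | "stop";                   // explicit next step
  retryAfterSeconds?: number;
  rollback?: string; // link to the undo endpoint, when one exists
}

const example: AgentError = {
  code: "RATE_LIMITED",
  detail: "Write budget for this action is exhausted for the current window",
  action: "back_off",
  retryAfterSeconds: 30,
};
```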

Hidden pain: feature flags. Different users see slightly different shapes. Humans cope but agents don’t.
So we run spec-vs-live checks in CI to keep responses stable.

A small idea I tried was an Action Manifest next to OpenAPI that lists what each action does, when it’s allowed, and how to undo it.
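Speculatively, an entry might look like this; nothing here is an existing standard, just an illustration of the shape:

```typescript
// One hypothetical Action Manifest entry, kept next to the OpenAPI spec.
interface ActionManifestEntry {
  action: string;      // e.g. "orders.cancel"
  description: string; // what the action actually does
  allowedWhen: string; // a precondition an agent can check first
  undo: string | null; // how to reverse it, or null if irreversible
}

const manifest: ActionManifestEntry[] = [
  {
    action: "orders.cancel",
    description: "Cancels an order that has not shipped yet",
    allowedWhen: "order.status == 'pending'",
    undo: "POST /orders/{id}/restore",
  },
];
```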
Make APIs safe to try, and they’ll be ready for agents.

For me, the biggest thing holding teams back from machine-readable APIs isn’t documentation or tooling — it’s the lack of continuous synchronization between human intent and machine interpretation.

APIs often evolve faster than their specs. Developers update endpoints, rename fields, or tweak logic, but the machine-readable layer (like OpenAPI or JSON Schema) lags behind. That silent drift kills trust — both for humans and AI agents.

What I’d love to see is a “Spec-as-a-Service” layer:
a lightweight middleware that sits between the codebase and the docs, automatically validating that every live endpoint still matches its declared schema.
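As a thought experiment, that layer could start as something as small as this Express middleware; the route, the schema registry, and the use of ajv are all assumptions, not an existing product:

```typescript
// Sketch: validate every live JSON response against its declared schema
// and flag drift before consumers (human or agent) hit it.
import express from "express";
import Ajv from "ajv";

const ajv = new Ajv();
const schemas: Record<string, object> = {
  // hypothetical registry, e.g. generated from the OpenAPI file
  "GET /orders": { type: "object", required: ["items"] },
};

const app = express();

app.use((req, res, next) => {
  const schema = schemas[`${req.method} ${req.path}`];
  if (schema) {
    const validate = ajv.compile(schema);
    const original = res.json.bind(res);
    res.json = (body: any) => {
      if (!validate(body)) {
        console.warn(`Spec drift on ${req.method} ${req.path}:`, validate.errors);
      }
      return original(body);
    };
  }
  next();
});
```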

1 Like

The biggest blocker for our team is definitely legacy systems. Our existing APIs do their job just fine for current apps, so without a clear internal AI project driving change, it’s hard to justify the big engineering lift to modernize them for a future that still feels a bit early.

A full rewrite isn’t realistic, so we’re taking a phased approach: no new legacy, and we’ll slowly “strangle” high-value endpoints with modern, machine-readable versions. It’s an evolution, not a revolution, especially for teams carrying a lot of legacy baggage.

Designing APIs for humans, written by humans rather than AI, is one of the best ways to connect with your users. I believe one of the reasons AI agents won’t be sufficient for this task is the bias and hallucination that come with generating features like rate limiting and pagination without errors.

1 Like

A lot of teams have ended up with systems so fragmented that even the devs who built them can’t figure out how to expose clean, meaningful interfaces for AI agents. Instead of one solid endpoint that wraps the business logic, they’ve got a handful of microservices that all need to be orchestrated just right. Turning that mess into something agent-friendly takes way more architectural brainpower than most teams can handle.

The problem isn’t old tech, missing docs, or programming languages; it’s the slow loss of consistency in how systems behave and communicate. As schemas drift and APIs change without discipline, trust breaks down and automation stops working. Most companies don’t notice it until their integrations start failing.

In a Reddit thread with 60+ upvotes from data engineers, developers report that “frequent disruptions in pipelines” occur because “the schema we initially set up doesn’t align with the updated schema” in incoming API responses. One engineer noted: “Schema changes from external APIs is one of the leading problems.”
While teams focus on adding new features, they lack robust monitoring or enforcement for contract compliance over time. This silent degradation of API integrity undermines trust and breaks AI agent automation. Most companies don’t even realize this is happening until it’s too late.

Honestly, the biggest thing holding most teams back isn’t technology, it’s mindset.

For years, APIs were built for humans — for developers to read, test, and integrate manually. But machine-readable APIs require a shift in culture: designing not just for people, but for agents that consume, interpret, and act autonomously. That demands a new design philosophy — clarity over cleverness, context over control, and structure over shortcuts.

Our challenge isn’t a missing tool, it’s a missing “mental model.” Teams still see documentation as an afterthought, not as a living interface contract. Until we design APIs with intentional semantics, consistent metadata, and predictable behaviors, AI agents will struggle to reason about them — no matter how advanced the LLMs become.

At EcoAgric Tech, we’re tackling this by embedding schema generation and semantic tags early in our API lifecycle. The goal: make our endpoints “self-describing” enough for both humans and agents to trust them without manual handholding.

Machine-readable APIs aren’t the future — they’re the next frontier. The real blocker is convincing teams that readability for machines is the new usability for humans.

Crispin Oigara 🌍
Data Scientist | EcoAgric Tech | Building sustainable AI systems for agriculture

I think our biggest challenge wasn’t technical, it was ownership. Three teams all claimed to be the “source of truth,” but no one wanted to be on the hook when things broke. When we built a new scheduling API, everything was ready (code, infrastructure, and documentation) but it sat unused for months because no one would take responsibility. The stalemate only ended when our CTO formally assigned a “Data Steward.” Once ownership was clear, we shipped in three weeks.
The lesson? APIs don’t fail because of REST vs. SOAP debates; they fail because no one owns them. Technology is easy. Accountability isn’t.

While legacy code and the human-centric design mindset are significant hurdles, I believe the deepest blocker is more subtle: The “Who Pays for the Sidewalk?” Problem.

Right now, machine-readable APIs are treated like a public sidewalk. Everyone agrees it would be great to have a clean, well-defined path for AIs to walk on, but no single team has a direct, immediate incentive to build and maintain it.

Think about the incentives in most organizations:

  • The Backend Developer: Their primary KPI is shipping a functional endpoint for a specific feature by the end of the sprint. Adding rich, semantic metadata and perfect OpenAPI specs is seen as “extra work” that slows down their core task.

  • The Product Manager: They are measured on user-facing feature adoption and business metrics. The “AI-readiness” of the underlying API is an abstract, long-term benefit that doesn’t directly move their numbers this quarter.

  • The AI/ML Team: They are the consumers who desperately need these clean APIs, but they are often downstream and don’t have the mandate or resources to force changes in other teams’ roadmaps.

This creates a classic “Tragedy of the API Commons.” The entire organization would benefit massively in the long run from a coherent, machine-readable API ecosystem. But in the short term, every individual team is incentivized to take shortcuts, to “cut across the grass,” to meet their immediate deadlines.

Until we change how we measure success, by making API quality and machine-readability a first-class, shared KPI for product and engineering teams alike, we’ll be stuck in a cycle of treating it as a “nice-to-have” rather than the critical infrastructure it’s becoming. The problem isn’t the technology; it’s the organizational and economic framework we build around it.