Matt Asay
Contributing Writer

Building a golden path to AI

opinion
Oct 27, 2025 · 6 mins

Developers aren't waiting while leadership dithers over a standardized, official AI platform. Better to treat a platform as a set of services or composable APIs to guide developer innovation.


It's clear your company needs to accelerate its AI adoption. What's less clear is how to do that without it becoming a free-for-all. After all, your best employees aren't waiting on you to establish standards; they're already actively using AI. Yes, your developers are feeding code into ChatGPT regardless of any policy you may be planning. Recent surveys suggest developers are adopting AI faster than their leaders can standardize it; that gap, not developer speed, is the real risk.

This creates what Phil Fersht calls an "AI velocity gap": the chasm between teams frantically adopting AI to win and central leadership dithering over the risk of getting started. Sound familiar? It's "shadow IT" all over again, but this time it's powered by your data.

I've written about the hidden costs of tech sprawl, whether it was unfettered developer freedom leading to unmanageable infrastructure or the lure of multicloud turning into a morass of interoperability nightmares and cost overruns. When every developer and every team picks their own cloud, their own database, or their own SaaS tool, you don't get innovation; you get chaos.

This may be the status quo, but it's a recipe for failure. What's the alternative?

The problem with official platforms

The temptation for a platform team is to see this chaos and react by building a gate. "Stop! No one moves forward until we have built the official enterprise AI platform." They'll then spend 18 months evaluating vendors, standardizing on a single large language model (LLM), and building a monolithic, prescribed workflow.

Good luck with that.

By the time they launch that one true platform to rule them all, it will be hopelessly obsolete. Heck, at the current pace of AI, it risks obsolescence before adoption. The model they standardized on will have been surpassed five times over by newer, cheaper, and more powerful alternatives. Their developers, long since frustrated, will have routed around the platform entirely, using their personal credit cards to access the latest APIs, creating a massive, unsecured, unmonitored blind spot right in the heart of the business.

Trying to build a single, monolithic gate for AI won't work. The landscape is moving too fast. The needs are too diverse. The model that excels at summarizing legal documents is terrible at writing Python. The model that's great for marketing copy can't be trusted with financial projections. Even within engineering, the model that's brilliant at refactoring Java is useless for writing K8s manifests.

The problem, however, isn't the desire for a platform; it's the definition of one.

From prescribed platforms to composable products

Bryan Ross recently wrote a great post on "golden paths" that perfectly captures this dilemma. (It builds on other, earlier arguments for these so-called golden paths, like this one on the Platform Engineering blog.) He argues that we need to shift our thinking from "gates" to "guardrails." The problem, as he sees it, is that platform teams often miss the mark on what developers actually need.

As Ross writes: "Most platform teams think in terms of 'the platform'—a single, cohesive offering that teams either use or don't. Developers think in terms of capabilities they need right now for the problem they're solving." So how do you balance those competing interests? His suggestion: "Platform-as-product thinking means offering composable building blocks. The key to modular adoption is treating your platform like a product with APIs, not a prescribed workflow."

Ross nails the problem. Now what do we do about it?

Instead of asking a committee to pick the model, platform teams should build a set of services or composable APIs that channel developer velocity. In practice, this starts with a common interface contract. One de facto standard is the OpenAI-style API, now supported by multiple back ends (e.g., vLLM). This doesn't mean you bless a single provider; it means you give teams a common contract, probably fronted by an API gateway, so they can swap engines without rewriting their stack.
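To make that concrete, here's a minimal sketch of what the common contract buys you. The gateway URL and model alias are hypothetical placeholders; the pattern is simply the standard OpenAI-compatible client pointed at your own endpoint:

```python
# A minimal sketch, assuming an internal gateway that speaks the
# OpenAI-compatible chat completions API. The base URL and model
# alias are hypothetical placeholders.
from openai import OpenAI

# Point the standard client at the company gateway rather than any
# single vendor. Swapping the back end (a hosted model, vLLM, etc.)
# becomes a gateway configuration change, not an application rewrite.
client = OpenAI(
    base_url="https://ai-gateway.internal.example.com/v1",  # hypothetical
    api_key="fetched-at-runtime",  # never hardcoded; see governance below
)

response = client.chat.completions.create(
    model="general-purpose",  # an alias the platform team maps to a real engine
    messages=[{"role": "user", "content": "Summarize this incident report: ..."}],
)
print(response.choices[0].message.content)
```

The design point is the alias: teams code against "general-purpose," and the platform team decides, and re-decides, what actually serves it.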

That gateway is also the perfect place to enforce structured outputs as a rule. "Just give me some text" is fine for a demo but won't work in production. If you want durable integrations, standardize on JSON-constrained outputs enforced by schema. Most modern stacks support this, and it's the difference between a cute demo and a production-ready system.
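Here's a sketch of what schema enforcement can look like, reusing the hypothetical gateway client from above and assuming a back end that supports OpenAI-style JSON schema response formats (support and exact naming vary by engine):

```python
# A sketch of schema-enforced output, reusing the gateway client from
# the previous example. The json_schema response format follows the
# OpenAI-style API; back-end support and naming vary by engine.
import json

invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["vendor", "total", "currency"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="general-purpose",
    messages=[{"role": "user", "content": "Extract the invoice fields from: ..."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "invoice", "schema": invoice_schema, "strict": True},
    },
)

# The reply is now parseable JSON, not prose to be regex-mined.
invoice = json.loads(response.choices[0].message.content)
```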

This same gateway becomes your control plane for observability and cost. Don't invent a new "AI log"; instead, use something like OpenTelemetry's emerging genAI semantic conventions so prompts, model IDs, tokens, latency, and cost are traceable in the same tools site reliability engineers already run. This visibility is precisely what enables effective cost guardrails.
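A sketch of what that tracing might look like. The gen_ai.* attributes follow OpenTelemetry's generative-AI semantic conventions, which are still incubating, so treat the exact names as illustrative:

```python
# A sketch of tracing a model call with OpenTelemetry. The gen_ai.*
# attributes follow OTel's incubating generative-AI semantic
# conventions; exact attribute names may still shift.
from opentelemetry import trace

tracer = trace.get_tracer("ai-gateway")

def traced_completion(client, model: str, messages: list[dict]):
    # Spans land in the same tracing back end SREs already use.
    with tracer.start_as_current_span(f"chat {model}") as span:
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.request.model", model)
        response = client.chat.completions.create(model=model, messages=messages)
        # Token counts are the raw material for cost guardrails.
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
        return response
```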

The critical bedrock of all this is data access governance. This is an area where you need to be resolute: keep identity and secrets where they already live. Require runtime secret retrieval (no embedded keys) and unify authorization under your enterprise identity and access management. The goal is to minimize new attack surfaces by absorbing AI into existing, hardened patterns.
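As one illustration, assuming AWS Secrets Manager purely as an example vault, runtime retrieval can be as simple as this; the secret name is a hypothetical placeholder:

```python
# A sketch of runtime secret retrieval, using AWS Secrets Manager
# purely as an example; any enterprise vault follows the same shape.
# The secret name is a hypothetical placeholder.
import boto3

def get_gateway_api_key() -> str:
    # Fetch the credential at startup instead of baking it into code,
    # container images, or .env files that end up in repositories.
    secrets = boto3.client("secretsmanager")
    result = secrets.get_secret_value(SecretId="ai-gateway/api-key")
    return result["SecretString"]
```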

Finally, allow exits from the golden path, but with obligations: extra logging, a targeted security review, and tighter budgets. As Ross recommends, build the override into the platform, such as a "proceed with justification" flag. Log these exceptions, review them weekly, and use that data to evolve the guardrails.
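A sketch of what that escape hatch could look like at the gateway. The function and model names are hypothetical; the point is that off-path use is permitted, but never silent:

```python
# A sketch of a "proceed with justification" override at the gateway.
# Function and model names are hypothetical.
import logging
from datetime import datetime, timezone

exception_log = logging.getLogger("golden-path.exceptions")
APPROVED_MODELS = {"general-purpose", "code-assist"}

def forward_to_backend(model: str):
    ...  # proxy the request to the configured engine (omitted)

def route_request(model: str, team: str, justification: str | None = None):
    if model in APPROVED_MODELS:
        return forward_to_backend(model)
    if not justification:
        raise PermissionError(
            f"'{model}' is off the golden path; supply a justification to proceed."
        )
    # The exit comes with obligations: every exception is logged and
    # feeds the weekly review that evolves the guardrails.
    exception_log.warning(
        "off-path model=%s team=%s reason=%s at=%s",
        model, team, justification,
        datetime.now(timezone.utc).isoformat(),
    )
    return forward_to_backend(model)
```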

Platform as product, not police

Why does this "guardrails over gates" posture work so well for AI? Because AI's moving target makes centralized prediction a losing strategy. Committees can't approve what they don't yet understand, and vendors will change out from under your standards document anyway. Guardrails make room to safely learn by doing. This is what smart enterprises already learned from cloud adoption: Productive constraints beat imaginary control.

As I've argued, carefully limiting choices enables developers to focus on innovation instead of the glue code that becomes necessary when teams build in divergent directions. This is doubly true with AI. The cognitive load of model selection, prompt hygiene, retrieval patterns, and cost management is high; the platform team's job is to lower it.

Golden paths let you move at the speed of your best developers while protecting the enterprise from its worst surprises. Most importantly, this approach meets your organization where it is. The individuals already experimenting with AI get a safe, fast on-ramp that doesn't feel like a checkpoint. Platform teams get the compliance, visibility, and cost controls they need without burying developers in process. And leadership gets the one thing enterprises are starved for right now: a way to turn a thousand disconnected experiments into a coherent, measured, and governable program.

Matt Asay

Matt Asay runs developer marketing at Oracle. Previously Asay ran developer relations at MongoDB, and before that he was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 start-up Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a JD from Stanford, where he focused on open source and other IP licensing issues. The views expressed in Matt's posts are Matt's, and don't represent the views of his employer.
