Developers aren’t waiting while leadership dithers over a standardized, official AI platform. Better to treat a platform as a set of services or composable APIs to guide developer innovation.
It’s clear your company needs to accelerate its AI adoption. What’s less clear is how to do that without it being a free-for-all. After all, your best employees aren’t waiting on you to establish standards; they’re already actively using AI. Yes, your developers are feeding code into ChatGPT regardless of any policy you may be planning. Recent surveys suggest developers are adopting AI faster than their leaders can standardize it; that gap, not developer speed, is the real risk.
This creates what Phil Fersht calls an “AI velocity gap”: the chasm between teams frantically adopting AI to win and central leadership dithering over the risk of getting started. Sound familiar? It’s “shadow IT” all over again, but this time it’s powered by your data.
I’ve written about the hidden costs of tech sprawl, whether it was unfettered developer freedom leading to unmanageable infrastructure or the lure of multicloud turning into a morass of interoperability nightmares and cost overruns. When every developer and every team picks their own cloud, their own database, or their own SaaS tool, you don’t get innovation; you get chaos.
This may be the status quo, but it’s a recipe for failure. What’s the alternative?
The problem with official platforms
The temptation for a platform team is to see this chaos and react by building a gate. “Stop! No one moves forward until we have built the official enterprise AI platform.” They’ll then spend 18 months evaluating vendors, standardizing on a single large language model (LLM), and building a monolithic, prescribed workflow.
Good luck with that.
By the time they launch that one true platform to rule them all, it will be hopelessly obsolete. Heck, at the current pace of AI, it risks obsolescence before adoption. The model they standardized on will have been surpassed five times over by newer, cheaper, and more powerful alternatives. Their developers, long since frustrated, will have routed around the platform entirely, using their personal credit cards to access the latest APIs, creating a massive, unsecured, unmonitored blind spot right in the heart of the business.
Trying to build a single, monolithic gate for AI won’t work. The landscape is moving too fast. The needs are too diverse. The model that excels at summarizing legal documents is terrible at writing Python. The model that’s great for marketing copy can’t be trusted with financial projections. Even within engineering, the model that’s brilliant at refactoring Java is useless for writing K8s manifests.
The problem, however, isn’t the desire for a platform; it’s the definition of one.
From prescribed platforms to composable products
Bryan Ross recently wrote a great post on “golden paths” that perfectly captures this dilemma. (It builds on other, earlier arguments for these so-called golden paths, like this one on the Platform Engineering blog.) He argues that we need to shift our thinking from “gates” to “guardrails.” The problem, as he sees it, is that platform teams often miss the mark on what developers actually need.
As Ross writes: “Most platform teams think in terms of ‘the platform’—a single, cohesive offering that teams either use or don’t. Developers think in terms of capabilities they need right now for the problem they’re solving.” So how do you balance those competing interests? His suggestion: “Platform-as-product thinking means offering composable building blocks. The key to modular adoption is treating your platform like a product with APIs, not a prescribed workflow.”
Ross nails the problem. Now what do we do about it?
Instead of asking a committee to pick the model, platform teams should build a set of services or composable APIs that channel developer velocity. In practice, this starts with a de facto interface standard. One such standard is the OpenAI-style API, now supported by multiple back ends (e.g., vLLM). This doesn’t mean you bless a single provider; it means you give teams a common contract, probably fronted by an API gateway, so they can swap engines without rewriting their stack.
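Here’s a minimal sketch of what that common contract looks like from a developer’s seat. The gateway URL and the model alias are placeholders, and the sketch assumes an OpenAI-compatible endpoint of the kind vLLM (among others) can serve:

```python
# Minimal sketch: the standard OpenAI client pointed at an internal gateway.
# "ai-gateway.internal" and the model alias are hypothetical placeholders;
# any OpenAI-compatible back end (e.g., vLLM) can sit behind the same URL.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-gateway.internal/v1",  # the gateway, not a vendor
    api_key="retrieved-at-runtime",             # placeholder; see the secrets sketch below
)

response = client.chat.completions.create(
    model="team-default",  # an alias the gateway maps to a real model
    messages=[{"role": "user", "content": "Summarize this incident report."}],
)
print(response.choices[0].message.content)
```

The alias is the point: the gateway, not the application, decides which engine serves “team-default,” which makes swapping engines a platform decision rather than a rewrite.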
That gateway is also the perfect place to enforce structured outputs as a rule. “Just give me some text” is fine for a demo but won’t work in production. If you want durable integrations, standardize on JSON-constrained outputs enforced by schema. Most modern stacks support this, and it’s the difference between a cute demo and a production-ready system.
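As an illustration, here’s what a schema-constrained call might look like through that same contract, assuming an OpenAI-style response_format parameter; the invoice schema itself is a made-up example:

```python
# Sketch: a schema-constrained call through the gateway. The invoice
# schema is a hypothetical example; what matters is that the contract
# is JSON validated against a schema, not free-form text.
from openai import OpenAI

client = OpenAI(base_url="https://ai-gateway.internal/v1", api_key="...")

invoice_schema = {
    "name": "invoice_summary",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string"},
        },
        "required": ["vendor", "total", "currency"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="team-default",  # gateway alias, as before
    messages=[{"role": "user", "content": "Extract the invoice details."}],
    response_format={"type": "json_schema", "json_schema": invoice_schema},
)
print(response.choices[0].message.content)  # parses as JSON, every time
```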
This same gateway becomes your control plane for observability and cost. Don’t invent a new “AI log”; instead, use something like OpenTelemetry’s emerging genAI semantic conventions so prompts, model IDs, tokens, latency, and cost are traceable in the same tools site reliability engineers already run. This visibility is precisely what enables effective cost guardrails.
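To make that concrete, here’s a sketch of a traced model call using attribute names from OpenTelemetry’s gen_ai semantic conventions. Those conventions are still incubating, so treat the exact attribute names as provisional:

```python
# Sketch: wrapping a model call in an OpenTelemetry span, with attribute
# names taken from the emerging gen_ai semantic conventions (incubating,
# so names may shift between spec versions).
from opentelemetry import trace
from openai import OpenAI

client = OpenAI(base_url="https://ai-gateway.internal/v1", api_key="...")
tracer = trace.get_tracer("ai-gateway")

with tracer.start_as_current_span("chat team-default") as span:
    response = client.chat.completions.create(
        model="team-default",
        messages=[{"role": "user", "content": "Summarize this incident report."}],
    )
    # Token counts feed the same dashboards and budget alerts SREs already run.
    span.set_attribute("gen_ai.request.model", "team-default")
    span.set_attribute("gen_ai.response.model", response.model)
    span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
    span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
```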
The critical bedrock of all this is data access governance. This is an area where you need to be resolute: keep identity and secrets where they already live. Require runtime secret retrieval (no embedded keys) and unify authorization under your enterprise identity and access management. The goal is to minimize new attack surfaces by absorbing AI into existing, hardened patterns.
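A sketch of what runtime retrieval can look like, using AWS Secrets Manager purely as one example of an existing hardened store; the secret name is a placeholder, and Vault or any enterprise equivalent fits the same pattern:

```python
# Sketch: fetch the gateway credential at runtime instead of embedding it.
# AWS Secrets Manager is just one example of a hardened store you already
# run; the secret name is a placeholder.
import boto3
from openai import OpenAI

secrets = boto3.client("secretsmanager")
api_key = secrets.get_secret_value(SecretId="ai-gateway/team-api-key")["SecretString"]

client = OpenAI(
    base_url="https://ai-gateway.internal/v1",
    api_key=api_key,  # never written into code or config
)
```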
Finally, allow exits from the golden path, but with obligations: extra logging, a targeted security review, and tighter budgets. As Ross recommends, build the override into the platform, such as a “proceed with justification” flag. Log these exceptions, review them weekly, and use that data to evolve the guardrails.
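Here’s one hypothetical shape for that override, sketched as a wrapper that refuses off-path calls without a justification. The header name and logger are assumptions, not a prescribed design:

```python
# Sketch: an off-path request that must carry a justification. The header
# name and logging destination are hypothetical; the pattern is what
# matters: the override is allowed, recorded, and reviewed later.
import logging

audit_log = logging.getLogger("ai-platform.exceptions")

def call_off_path_model(client, model, messages, justification):
    if not justification:
        raise ValueError("Off-path model calls require a justification.")
    # Structured log entry feeds the weekly exception review.
    audit_log.warning(
        "golden-path override",
        extra={"model": model, "justification": justification},
    )
    return client.chat.completions.create(
        model=model,
        messages=messages,
        extra_headers={"X-Platform-Justification": justification},
    )
```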
Platform as product, not police
Why does this “guardrails over gates” posture work so well for AI? Because AI’s moving target makes centralized prediction a losing strategy. Committees can’t approve what they don’t yet understand, and vendors will change out from under your standards document anyway. Guardrails make room to safely learn by doing. This is what smart enterprises already learned from cloud adoption: Productive constraints beat imaginary control.
As I’ve argued, carefully limiting choices enables developers to focus on innovation instead of the glue code that becomes necessary after development teams build in diverse directions. This is doubly true with AI. The cognitive load of model selection, prompt hygiene, retrieval patterns, and cost management is high; the platform team’s job is to lower it.
Golden paths let you move at the speed of your best developers while protecting the enterprise from its worst surprises. Most importantly, this approach meets your organization where it is. The individuals already experimenting with AI get a safe, fast on-ramp that doesn’t feel like a checkpoint. Platform teams get the compliance, visibility, and cost controls they need without making developers feel stymied by process. And leadership gets the one thing enterprises are starved for right now: a way to turn a thousand disconnected experiments into a coherent, measured, and governable program.


