The Great Agentic AI Swindle: Why Most "Agents" Being Built Today Are a Mistake

The tech world is experiencing yet another gold rush, and the current vein of gold is "agentic AI." Every vendor, every consulting firm, and nearly every enterprise IT department is chasing the dream of autonomous systems that can manage complex tasks, make decisions, and drive efficiency without human intervention. The marketing hype is deafening, painting a picture of a revolutionary new paradigm that will render existing workflows and infrastructure obsolete overnight.

But let's pull back the curtain. From my vantage point as someone who has spent decades navigating the realities of enterprise technology adoption, the current frenzy around agentic AI is built on a shaky foundation of hype, misrepresentation, and a profound lack of architectural discipline. The truth is, most of the agentic AI systems being built today should not be built using an agentic approach at all. We are collectively sleepwalking into an "AI failure trap," driven by fear of missing out (FOMO) rather than sound business logic and a clear understanding of the technology's actual, rather than promised, capabilities.

The problem is not that agentic AI lacks potential; it has niche, valuable applications. The problem is its current, widespread misapplication. Enterprises are making fundamental mistakes that lead to bloated costs, unmanageable complexity, and, ultimately, projects that fail to deliver measurable business value. We need an immediate, industry-wide reevaluation of what we are building, why we are building it, and, most importantly, whether an agentic approach is the right way to build it at all.

The Problem with Hype: A Lack of Foundational Understanding

The relentless hype has created an environment where critical thinking is often suspended. I've observed a dozen-plus different, often contradictory, definitions of "agentic AI" in the industry. If we, as technologists, cannot even agree on what we are building, how can we possibly expect to succeed at scale?

This lack of a common definition allows for the current wave of misapplications. The market pressure to incorporate "agentic" capabilities into product roadmaps is immense, leading to solutions that are more about buzzwords than substance. The data bears this out: a recent report indicates that only 5% of enterprises have AI agents operating in real production environments, and most of those are "immature, fragile, and failing to deliver the stability required for meaningful impact at scale".

The focus has shifted from solving specific business problems to deploying the latest technology for its own sake. This mirrors past hype cycles—think of the early, overpromised days of ERP and CRM systems. The pattern is always the same: Hype, Chaos, Integration (where consultants profit), and finally, quiet utility. We are currently deep in the "Chaos" phase for agentic AI, and something is going to break, leading to a wave of disillusionment unless we correct course now.

Mistake 1: Agentic in Name Only (The Deception)

The most glaring and, frankly, dishonest trend I'm seeing is the simple rebranding of existing, conventional software as "agentic." These systems use traditional, monolithic, and tightly coupled mechanisms behind the scenes, yet they are marketed with all the flair of cutting-edge autonomy.

This is a lie.

These "agents" are essentially old wine in new bottles. They operate on a command-and-control structure, lacking any genuine ability to set goals, interact with dynamic environments autonomously, or course-correct without explicit instruction. The architecture has not changed; only the terminology has.

Why do companies do this? To ride the hype train, attract investment, and pressure competitors. This practice erodes trust in the industry and confuses enterprise leaders who are trying to make sound technology investments. When you peel back the layers of these systems, you often find nothing more than a complex web of prompts and conditional logic, a "mechanical turk pretending to be autonomous intelligence". An Excel macro is not a junior analyst, and a complex prompt chain is not a true AI agent. Honesty in engineering and marketing is desperately needed.
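
To make the distinction concrete, here is a minimal, hypothetical sketch of what many of these "agents" look like under the hood: a fixed prompt chain wrapped in conditional logic. The call_llm function is a placeholder for whatever completion API is in use, not a real SDK call, and the prompts are purely illustrative.

```python
# Hypothetical sketch of an "agent in name only": a fixed prompt chain
# plus conditional logic. `call_llm` is a stand-in for whatever completion
# API you use; it is not a real library call.

def call_llm(prompt: str) -> str:
    """Placeholder for a completion API call (assumption, not a real SDK)."""
    raise NotImplementedError

def handle_ticket(ticket_text: str) -> str:
    # Step 1: a prompt classifies the ticket.
    category = call_llm(
        f"Classify this support ticket as 'billing' or 'technical': {ticket_text}"
    )

    # Step 2: hard-coded branching decides what happens next.
    if "billing" in category.lower():
        return call_llm(f"Draft a billing response for: {ticket_text}")
    return call_llm(f"Draft a technical response for: {ticket_text}")

# Every path is predetermined by the developer. There is no goal-setting,
# no autonomous interaction with the environment, and no course correction.
# Rebranding this pipeline as an "agent" changes the label, not the architecture.
```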

Mistake 2: Loosely Defined "Agents" and Centralized Control

The second mistake is more subtle but equally detrimental. In this scenario, developers build systems where components are called "agents," but they are still fundamentally bound to a central process and lack true autonomy.

The ideal of agentic architecture is decentralized decision-making, where independent software entities work toward a common goal but manage their own immediate tasks and interactions. What is often built, however, is a slightly more modular version of a microservices architecture, where "agents" are just blobs of software that execute predefined functions under strict, central orchestration.

These components do not have the autonomy that defines a true agentic system. They are not self-directed; they are directed. Building a system this way, when a simpler and more robust microservices or event-driven architecture would suffice, introduces unnecessary complexity and potential failure points. This approach adds overhead in management and orchestration without delivering the core benefits of true agent autonomy—namely, resilience and adaptive behavior in dynamic environments. It is a technical compromise made to satisfy a marketing requirement, not a genuine architectural need.
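
A rough, hypothetical sketch of this pattern, with illustrative function names: a central orchestrator that invokes its so-called agents as plain functions in a fixed order.

```python
# Hypothetical sketch of Mistake 2: components labelled "agents" that are
# really just functions invoked by a central orchestrator in a fixed order.
# All names are illustrative.

def extract_agent(document: str) -> dict:
    # A predefined extraction routine; nothing is decided here.
    return {"fields": document.split()}

def validate_agent(record: dict) -> bool:
    # Predefined validation rules.
    return len(record["fields"]) > 0

def load_agent(record: dict) -> None:
    # Predefined load step.
    print(f"Loaded {len(record['fields'])} fields")

def orchestrator(document: str) -> None:
    # The central process owns the control flow from start to finish.
    record = extract_agent(document)
    if validate_agent(record):
        load_agent(record)

# This is a pipeline (or a thin microservices facade), not decentralized
# decision-making. None of the "agents" sets its own goals or chooses its
# next action, so the extra orchestration buys no resilience or adaptivity.
```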

Mistake 3: The Prohibitive Cost of Unnecessary Autonomy

The final, and perhaps most financially damaging, mistake involves those rare, genuine agentic systems being built for use cases that simply do not demand them. Here, the technology itself is sound, but the business application is misguided.

True agentic systems are expensive. They cost significantly more to build, require extensive training data, substantial compute resources (GPUs are often a prohibitive expense), and are far more complex to operate, manage, and govern. The overhead in ensuring reliability, observability, and human oversight in a decentralized system is immense, especially in regulated industries where AI stacks are changing every few months, making long-term stability a fantasy.

Enterprises are deploying these high-cost, complex systems for problems that could be solved with a conventional, ten-times-cheaper automation script or a simple set of APIs. Why? Purely to chase the hype. We are wasting vast amounts of capital just to say we have "agentic AI" running in our environment.
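
For contrast, here is a hedged sketch of the kind of plain, scheduled automation script that covers many of these use cases. The endpoint and field names are illustrative only, not a real API.

```python
# Hypothetical sketch: the kind of conventional, scheduled automation that
# often gets over-engineered into an "agentic" system. The endpoint and
# field names are illustrative, not a real API.

import json
import urllib.request

REPORT_URL = "https://example.internal/api/overdue-invoices"  # illustrative

def fetch_overdue_invoices() -> list:
    with urllib.request.urlopen(REPORT_URL) as response:
        return json.loads(response.read())

def queue_reminders(invoices: list) -> None:
    for invoice in invoices:
        # A deterministic business rule; no model and no autonomy required.
        if invoice.get("days_overdue", 0) > 30:
            print(f"Reminder queued for invoice {invoice.get('id')}")

if __name__ == "__main__":
    queue_reminders(fetch_overdue_invoices())
```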

The cost versus value proposition is wildly out of alignment in most of these cases. While agentic AI has merit in thin, well-bounded use cases—perhaps for specific, high-variability logistics problems or complex financial modeling—stretching it into a universal solution for every business task makes the entire industry look foolish when reality eventually catches up. A successful AI strategy focuses on delivering measurable business value, not forcing trendy tech for its own sake.

A Pragmatic Path Forward: Time for Discipline and Reevaluation

We are at a critical juncture. The current path, driven by hype and misapplication, is unsustainable and risks a significant "AI winter" if disillusionment sets in across the C-suite. We need a more pragmatic, disciplined approach.

  1. Prioritize Business Problems First: Stop with the "solutionism." Start by identifying specific, measurable business problems that need solving. Then determine the best architectural approach.
  2. Choose the Right Tool for the Job: Do not force agentic AI where a simpler, cheaper, and more established solution (like microservices, conventional automation, or even human oversight) is sufficient. Understand the limitations and costs.
  3. Demand Honesty from Vendors and Consultants: Push back against the hype. Demand clear definitions and proof of production-level stability and ROI. If they cannot provide it, walk away.
  4. Focus on Value, Not Buzzwords: Success in AI is defined by how effectively the technology delivers insights, generates efficiencies, and improves decision-making—not by the hardware or the buzzword that fuels it.

The agentic AI architecture has a place, but that place is far smaller than the current hype suggests. We need to step back from the hype, apply rigorous technical guidance and architectural logic to our decisions, and stop building systems that serve no purpose other than chasing a trend. It's time to get real about AI.

Alexander Golev

Microsoft is auditing you? Call us ASAP. | We are Independent - we don’t sell Microsoft stuff, on purpose | Partner @ SAMexpert

I mean, tech corporations need a way to sustain their revenue growth, so if a rebranding of "automation" to "agents" helps that, even without any substance, they will 100% rebrand.

Scott Hebner

Principal Analyst, Agentic AI & Decision Intelligence | CMO Advisor | Host, Next Frontiers of AI Podcast

David Linthicum Kudos to you for writing this article. It’s right on. Our research and that of Boston Consulting Group (survey of 1,265) shows only about 15-20% have truly enabled AI agents, and even fewer agentic workflows. The vast majority of “agentic AI” claims are just good old generative AI (LLM) with maybe some RAG. Agentic AI is about AI that can help people plan, problem-solve, make better decisions … and act autonomously when warranted. The harsh reality is that LLMs, GenAI, and RAG alone cannot enable these capabilities. It requires new AI reasoning and decision intelligence capabilities, such as knowledge graphs, semantic layers, causal AI, decision chains, etc. A prediction is not a judgment, data correlation is not knowledge, and correlation doesn’t imply causation. Without these, AI agents (or workflows) are unable to plan, problem-solve, or help make decisions (weighing consequences). We will get there … but few are being true to the value of this next frontier of AI. Good job!

Alex Christ

Technical Director / Lead Infrastructure Consultant at Elysium DC Solutions

David Linthicum Completely agree. AI is a term being thrown around so much now, used to re-brand products and solutions with features that existed, for the most part, before AI was all over the global IT media. This AI bubble is surely going to pop at some point.

Anders Mikkelsen

Helping CIOs Save Up to 50% on Data & Voice Costs, Fueling Innovation While Enhancing Budget Visibility and Control

I did talk to someone who said they needed help, and then learned they only wanted a solution if it was agentic AI.

Jagdish Karira

Board Member | Gen AI | DevOps | DevSecOps | DevEx | Cloud | Digital Transformation Leader

Thank you for sharing your insights, David Linthicum!
