Agent startups are still solving the wrong problem. They’re building agents. They should be fixing workflows.

Most enterprise processes were never designed for autonomy. They were designed for humans: approvals, emails, handoffs, multi-layer signoffs. Bolt LLM agents onto these legacy flows, and you get chaos, not acceleration. If I were starting an agent company today, I would not start with the agent. I would start with the system design.

1. Map the real workflow, not the imagined one
Find the high-frequency processes that drain hours daily: invoice matching, vendor onboarding, document QA. Map every step. Most are artifacts of old tools or compliance folklore, not true necessities.

2. Redesign for agent-native execution
Autonomy requires new architectures. Agents don’t wait for emails or chase approvals. They act. So the workflow must shift:
• Replace approvals with policy-based validation.
• Convert serial handoffs into parallel, traceable states.
• Use state machines, not inboxes, as the backbone.

3. Build observability before autonomy
Logging, rollback, human escalation paths, and clear state tracking must be there from day one. You are not deploying a chatbot. You are deploying a system that must earn trust in production environments.

4. Deploy agents like interns, not replacements
Start narrow. Let the agent handle three steps in a ten-step process. Let humans intervene when judgment or context is required. Expand scope only after reliability is proven.

5. Integrate where work actually happens
Agents should operate inside ServiceNow, Jira, shared drives, and compliance tools, not in separate demo sandboxes. You drive adoption by being in the operational loop, not beside it.

6. Optimize for predictability, not flash
An agent that completes 25 percent of tasks with high explainability and zero surprises will beat one that is 95 percent capable but erratic.

The real game is not building smarter agents for broken processes.
It is building smarter processes where agents can thrive. This is how you get durable ROI from agentic AI. Not in hackathons. Not in pitch decks. In production.
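The "state machines, not inboxes" idea in point 2 can be sketched in a few lines. This is a minimal illustration, not a production system; the invoice fields (`amount`, `po_limit`, `vendor_verified`) and the policy rule are assumptions invented for the example:

```python
from enum import Enum, auto

class InvoiceState(Enum):
    RECEIVED = auto()
    VALIDATED = auto()
    ESCALATED = auto()
    PAID = auto()

# Policy-based validation replaces a manual approval step: the rule is
# explicit, auditable, and evaluated instantly, with escalation as a
# traceable state rather than a forgotten email thread.
def validate(invoice: dict) -> InvoiceState:
    if invoice["amount"] <= invoice["po_limit"] and invoice["vendor_verified"]:
        return InvoiceState.VALIDATED
    return InvoiceState.ESCALATED  # human judgment reserved for edge cases
```

The point of the design is that every invoice is always in exactly one named state, so rollback, logging, and escalation (point 3) attach to state transitions rather than to people's inboxes.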
Key Elements of Agentic Workflows
Summary
Key elements of agentic workflows focus on designing AI systems that integrate autonomy and adaptability into structured processes, enabling them to perform tasks dynamically and efficiently. Unlike static systems, agentic workflows emphasize the importance of intelligent decision-making, collaboration between agents and tools, and process redesign to support autonomous execution.
- Redesign workflows for autonomy: Shift legacy processes toward agent-friendly systems by reducing manual approvals, enabling parallel tasks, and using policy-based validations to support autonomous decision-making.
- Incorporate safety mechanisms: Build trust in intelligent systems by implementing human-in-the-loop processes, clear state tracking, and robust error management strategies.
- Start simple and scale gradually: Deploy agents with limited tasks initially, expand their responsibilities only after proving reliability, and tailor their role to suit specific workflows or needs.
-
As we transition from traditional task-based automation to 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀, understanding 𝘩𝘰𝘸 an agent cognitively processes its environment is no longer optional — it's strategic. This diagram distills the mental model that underpins every intelligent agent architecture — from LangGraph and CrewAI to RAG-based systems and autonomous multi-agent orchestration.

The Workflow at a Glance
1. 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗶𝗼𝗻 – The agent observes its environment using sensors or inputs (text, APIs, context, tools).
2. 𝗕𝗿𝗮𝗶𝗻 (𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗘𝗻𝗴𝗶𝗻𝗲) – It processes observations via a core LLM, enhanced with memory, planning, and retrieval components.
3. 𝗔𝗰𝘁𝗶𝗼𝗻 – It executes a task, invokes a tool, or responds — influencing the environment.
4. 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (Implicit or Explicit) – Feedback is integrated to improve future decisions.

This feedback loop mirrors principles from:
• The 𝗢𝗢𝗗𝗔 𝗹𝗼𝗼𝗽 (Observe–Orient–Decide–Act)
• 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 used in robotics and AI
• 𝗚𝗼𝗮𝗹-𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝗲𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 in agent frameworks

Most AI applications today are still “reactive.” But agentic AI — autonomous systems that operate continuously and adaptively — requires:
• A 𝗰𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗹𝗼𝗼𝗽 for decision-making
• Persistent 𝗺𝗲𝗺𝗼𝗿𝘆 and contextual awareness
• Tool use and reasoning across multiple steps
• 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 for dynamic goal completion
• The ability to 𝗹𝗲𝗮𝗿𝗻 from experience and feedback

This model helps developers, researchers, and architects 𝗿𝗲𝗮𝘀𝗼𝗻 𝗰𝗹𝗲𝗮𝗿𝗹𝘆 𝗮𝗯𝗼𝘂𝘁 𝘄𝗵𝗲𝗿𝗲 𝘁𝗼 𝗲𝗺𝗯𝗲𝗱 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 — and where things tend to break. Whether you’re building agentic workflows, orchestrating LLM-powered systems, or designing AI-native applications, I hope this framework adds value to your thinking. Let’s elevate the conversation around how AI systems 𝘳𝘦𝘢𝘴𝘰𝘯. Curious to hear how you're modeling cognition in your systems.
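The perception → brain → action loop above can be sketched as a plain Python skeleton. All three stages below are hypothetical stubs (a real "brain" would be an LLM call backed by memory, planning, and retrieval); only the loop structure is the point:

```python
# Hypothetical stubs: `reason` stands in for the LLM reasoning engine,
# `act` for tool invocation or response generation.
def perceive(env: dict) -> str:
    return env["input"]                      # observe the environment

def reason(obs: str, memory: list) -> str:
    memory.append(obs)                       # orient: update working memory
    return f"handle:{obs}"                   # decide on the next action

def act(decision: str, env: dict) -> str:
    env["last_action"] = decision            # act: influence the environment
    return decision

def run_agent(env: dict, steps: int = 3) -> list:
    memory: list = []
    trace = []
    for _ in range(steps):                   # the cognitive loop
        trace.append(act(reason(perceive(env), memory), env))
    return trace
```

Note that the loop writes back into `env`, so each cycle can perceive the consequences of the last action, which is exactly the OODA-style feedback the diagram describes.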
-
Most people think of chatbots as glorified question-and-answer systems. AI agents go much further — they’re autonomous workflows that plan, act, and self-verify across multiple tools. Here’s a deeper dive into their anatomy:

1. 𝗧𝗵𝗲 𝗖𝗼𝗿𝗲 𝗟𝗟𝗠 “𝗕𝗿𝗮𝗶𝗻.” At the heart is a large language model fine-tuned for planning and decision-making rather than just completion. This model maintains an internal state — tracking subgoals, partial outputs, and confidence scores — to decide the next action. It uses techniques like retrieval-augmented generation (RAG) to pull in fresh data at each step.

2. 𝗧𝗼𝗼𝗹 𝗜𝗻𝘃𝗼𝗰𝗮𝘁𝗶𝗼𝗻 𝗟𝗮𝘆𝗲𝗿. Agents don’t hallucinate API calls. They generate structured “action intents” (JSON payloads) that map directly to external tools — CRMs, databases, web scrapers, or even robotic controls. A runtime router then executes these calls, captures the outputs, and feeds results back into the agent’s context window.

3. 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹 & 𝗩𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗦𝘁𝗮𝗰𝗸. Each action passes through safety filters:
• 𝗜𝗻𝗽𝘂𝘁 𝘀𝗮𝗻𝗶𝘁𝗶𝘇𝗲𝗿𝘀 remove PII or malicious payloads.
• 𝗢𝘂𝘁𝗽𝘂𝘁 𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗼𝗿𝘀 assert type, range, and schema (e.g., “quantity must be an integer > 0”).
• 𝗛𝘂𝗺𝗮𝗻-𝗶𝗻-𝘁𝗵𝗲-𝗹𝗼𝗼𝗽 𝗴𝗮𝘁𝗲𝘀 kick in for high-risk operations: refund approvals, contract signatures, or critical infrastructure commands.

4. 𝗧𝗵𝗼𝘂𝗴𝗵𝘁–𝗔𝗰𝘁𝗶𝗼𝗻–𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗟𝗼𝗼𝗽. The agent repeats: “Think” (plan next steps), “Act” (invoke tool), “Verify” (check output), then “Reflect” (adjust plan). This mirrors classic AI planning algorithms — STRIPS-style planners or hierarchical task networks — embedded within a neural substrate.

5. 𝗦𝘁𝗼𝗽 𝗖𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝘀 𝗮𝗻𝗱 𝗠𝗲𝗺𝗼𝗿𝘆. Agents use dynamic termination logic: they monitor goal-fulfillment metrics or timeout thresholds to decide when to halt. Persistent memory modules archive outcomes, letting future sessions build on past successes and avoid redundant work.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
• 𝗥𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Formal tool contracts and validators slash error rates compared to naive LLM prompts.
• 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Modular design lets you plug in new services — whether a robotics API or a financial ledger — without rewiring your agent logic.
• 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Structured reasoning traces can be audited step by step, enabling compliance in regulated industries.

If you’re evaluating “agent platforms,” ask for these components: model orchestration, secure toolchains, and human-override paths. Without them, you’re back to trophy chatbots, not true autonomous agents. Curious how to architect an agent for your own workflows? Always happy to chat.
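The tool-invocation and validation layers (points 2 and 3 above) can be sketched together. The schema, field names, and tool registry below are invented for illustration; a production stack would validate against formal JSON Schemas rather than this hand-rolled check:

```python
import json

# Hypothetical action-intent schema; field names are illustrative only.
SCHEMA = {"tool": str, "quantity": int}

def validate_intent(raw: str) -> dict:
    """Output validator: assert type and range before any tool runs."""
    intent = json.loads(raw)
    for field, typ in SCHEMA.items():
        if not isinstance(intent.get(field), typ):
            raise ValueError(f"{field} must be a {typ.__name__}")
    if intent["quantity"] <= 0:
        raise ValueError("quantity must be an integer > 0")
    return intent

def execute(raw: str, registry: dict):
    """Runtime router: map a validated intent to a registered tool."""
    intent = validate_intent(raw)
    return registry[intent["tool"]](intent)
```

Because the agent can only emit intents that pass the validator and only reach tools listed in the registry, a hallucinated API call fails closed instead of executing.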
-
Anthropic brings clarity to the fuzzy definition of AI Agents by introducing a critical architectural distinction: workflows are systems with predefined code paths, while agents dynamically direct their own processes. After building agents for a year, they identified five fundamental patterns that drive successful agentic implementations:
(1) Prompt chaining - breaking tasks into sequential steps, useful for complex operations like content generation and translation
(2) Routing - directing inputs to specialized handlers, perfect for customer service and model optimization
(3) Parallelization - running subtasks simultaneously through sectioning or voting, ideal for code review and content moderation
(4) Orchestrator-workers - using a central LLM to coordinate task delegation, essential for complex coding projects
(5) Evaluator-optimizer - implementing feedback loops for iterative refinement, perfect for improving search results
Success isn't about building the most sophisticated system - it's about choosing the right pattern for your specific needs. Start simple, measure performance, and only add complexity when simpler solutions fall short.
More on AI Agents in my blog: https://lnkd.in/gpeDupnj
Anthropic’s post (highly recommend reading it): https://lnkd.in/g-dsQhdZ
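Of the five patterns, prompt chaining is the simplest to show in code. `call_llm` below is a placeholder that just uppercases its input, standing in for a real model call; the chain structure, with a spot for a validation gate between steps, is what the pattern prescribes:

```python
# Hypothetical stand-in for a real LLM call.
def call_llm(prompt: str) -> str:
    return prompt.upper()  # placeholder transformation

def chain(task: str, steps: list) -> str:
    """Prompt chaining: each step's output feeds the next prompt."""
    result = task
    for template in steps:
        result = call_llm(template.format(prev=result))
        # A gate could validate `result` here before continuing,
        # trading latency for accuracy as the pattern intends.
    return result
```

A two-step chain such as `chain("draft", ["outline: {prev}", "expand: {prev}"])` makes the sequencing visible: the second prompt is built from the first step's output.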
-
Working on AI Agents as well? Key learnings?

Anthropic has collaborated with numerous teams across industries to develop LLM-based agents. Per their latest white paper (link in first comment), success often comes not from complex frameworks but from adopting simple, composable patterns. AI Agents in fact range from fully autonomous systems to those following predefined workflows.

▶️ Workflows: Predefined paths where LLMs and tools are orchestrated programmatically. Best for well-defined, predictable tasks with clear requirements.
🔄 Autonomous Agents: Dynamic systems where LLMs independently decide on processes and tools to accomplish tasks. Ideal for dynamic, model-driven decision-making that benefits from autonomy but trades off latency and cost.

With this, what are the Building Blocks of Agentic Systems?
1️⃣ The Augmented LLM: LLMs enhanced with retrieval, tools, and memory. Key considerations include tailoring capabilities to specific use cases and ensuring seamless tool integration.
2️⃣ Workflows have Common Patterns: (a) Prompt Chaining (tasks are broken into sequential steps, improving accuracy at the cost of latency); (b) Routing (input is classified and directed to specialized tasks or prompts); (c) Parallelization (tasks are subdivided and handled simultaneously, either by sectioning or voting); (d) Orchestrator-Workers (a central LLM orchestrates subtasks dynamically, delegating work to worker LLMs); (e) Evaluator-Optimizer (a feedback loop where one LLM generates output and another evaluates and improves it).
3️⃣ Agents: Agents operate autonomously, planning and executing tasks independently, gaining ground truth from their environment, and interacting with humans when necessary. Typical use cases are open-ended tasks with uncertain steps.

Curious to hear about specific applications in the advertising and media space, as well as companies that have expertise in these domains. #advertising #media #tech #AI
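Pattern (b), routing, reduces to a classifier plus a handler table. The keyword classifier and handler names below are toy assumptions made up for this sketch; in practice the classifier is itself an LLM call that labels the input:

```python
# Hypothetical classifier; a real router would ask an LLM to label the query.
def classify(query: str) -> str:
    return "billing" if "refund" in query.lower() else "general"

# Hypothetical specialized handlers, e.g. a fine-tuned model per category.
HANDLERS = {
    "billing": lambda q: f"[billing-specialist] {q}",
    "general": lambda q: f"[general-model] {q}",
}

def route(query: str) -> str:
    """Routing: classify the input, then hand it to the matching specialist."""
    return HANDLERS[classify(query)](query)
```

The value of the table is separation of concerns: adding a new category means registering one more handler, without touching the prompts of the existing ones.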
-
You don’t need to be an AI agent to be agentic. No, that’s not an inspirational poster. It’s my research takeaway for how companies should build AI into their business.

Agents are the equivalent of a self-driving Ferrari that keeps driving itself into the wall. It looks and sounds cool, but there is a better use for your money. AI workflows offer a more predictable and reliable way to sound super cool while also yielding practical results.

Anthropic defines both agents and workflows as agentic systems, specifically in this way:
𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄𝘀: systems where predefined code paths orchestrate the use of LLMs and tools
𝗔𝗴𝗲𝗻𝘁𝘀: systems where LLMs dynamically decide their own path and tool uses

For any organization leaning into Agentic AI, don’t start with agents. You will just overcomplicate the solution. Instead, try these workflows from Anthropic’s guide to effectively building AI agents:

𝟭. 𝗣𝗿𝗼𝗺𝗽𝘁-𝗰𝗵𝗮𝗶𝗻𝗶𝗻𝗴: The type A of workflows, this breaks a task down into a sequence of organized, logical steps, with each step building on the last. It can include gates where you verify the information before going through the entire process.
𝟮. 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: The multi-tasker of workflows, this separates tasks across multiple LLMs and then combines the outputs. This is great for speed, but it also collects multiple perspectives from different LLMs to increase confidence in the results.
𝟯. 𝗥𝗼𝘂𝘁𝗶𝗻𝗴: The task master of workflows, this breaks down complex tasks into different categories and assigns those to the specialized LLMs best suited for them. Just as you don’t want to give an advanced task to an intern or a basic task to a senior employee, this finds the right LLM for the right job.
𝟰. 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗼𝗿-𝘄𝗼𝗿𝗸𝗲𝗿𝘀: The middle manager of workflows, this has an LLM break down tasks and delegate them to other LLMs, then synthesize their results. This is best suited for complex tasks where you don’t quite know what subtasks are going to be needed.
𝟱. 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗼𝗿-𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗲𝗿: The peer review of workflows, this uses an LLM to generate a response while another LLM evaluates and provides feedback in a loop until it passes muster.

View my full write-up here: https://lnkd.in/eZXdRrxz
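The evaluator-optimizer ("peer review") loop is compact enough to sketch directly. The generator and evaluator below are trivial stand-ins (a length check instead of an LLM judge, string concatenation instead of generation), but the generate → evaluate → feed back structure is the pattern itself:

```python
# Hypothetical generator/evaluator pair; real systems make two LLM calls here.
def generate(prompt: str, feedback: str = "") -> str:
    return (prompt + feedback).strip()  # placeholder "generation"

def evaluate(draft: str) -> tuple:
    ok = len(draft) >= 20               # placeholder quality bar
    return ok, "" if ok else " expand with more detail"

def refine(prompt: str, max_rounds: int = 5) -> str:
    """Loop until the evaluator passes the draft or the round budget runs out."""
    feedback = ""
    draft = prompt
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            break
    return draft
```

The `max_rounds` budget matters in practice: without it, a generator and evaluator that disagree can loop forever, which is exactly the "erratic agent" failure mode workflows are meant to avoid.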
-
𝐌𝐚𝐬𝐭𝐞𝐫𝐢𝐧𝐠 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬: 𝐓𝐡𝐞 𝐅𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐀𝐮𝐭𝐨𝐧𝐨𝐦𝐨𝐮𝐬 𝐀𝐈 𝗪𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬

Agentic patterns are transforming how we design, build, and scale intelligent systems. Whether you're building an AI agent to summarize articles, generate code, or coordinate complex tasks — understanding how agents interact is key to efficiency and innovation.

The Top 6 Agentic Patterns you need to know:
1. Deterministic Flows – Perfect for structured workflows where each agent handles a step in a fixed order.
2. Handoffs and Routing – Dynamically assign tasks based on expertise using leader-follower, hierarchical, or peer-to-peer communication.
3. Agents as Tools – Equip agents with APIs, databases, or scrapers for real-time capabilities.
4. Guardrails – Add validation layers to ensure secure and reliable agent behavior.
5. LLMs-as-a-Judge – Create feedback loops using one LLM to generate and another to critique, improving output quality.
6. Parallelization – Speed up processing by running independent tasks simultaneously.

These patterns form the foundation of scalable, agentic systems - especially in fields like autonomous research, data processing, and task automation. If you're an AI builder, engineer, or just curious about the future of intelligent systems, this is your blueprint. Swipe through to learn how each pattern works - and follow for more deep dives into AI architectures and workflows.

#AI #AgenticPatterns #LLM #OpenAI #AIWorkflows #AutonomousAgents #AIArchitecture #TechInnovation #MachineLearning #NLP #FutureOfWork #AIEngineering
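Pattern 6, parallelization, can be shown with nothing but the standard library. The voting variant is sketched below; the three "model" functions are hypothetical stubs returning fixed answers, where a real system would fan out to independent LLM calls:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model stubs; real code would call three independent LLMs.
def model_a(q: str) -> str: return "approve"
def model_b(q: str) -> str: return "approve"
def model_c(q: str) -> str: return "reject"

def vote(question: str) -> str:
    """Parallelization by voting: run models concurrently, take the majority."""
    models = [model_a, model_b, model_c]
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda m: m(question), models))
    return Counter(answers).most_common(1)[0][0]
```

The sectioning variant uses the same executor but maps one function over disjoint chunks of the input instead of mapping many models over the same question.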
-
Anthropic just dropped an incredible guide on "How To Build Effective Agents." 2025 will be the year of AGENTS 🤖 Here's everything you need to know: 🧵

Simple > Complex
When building LLM agents, the most successful implementations use basic composable patterns. My take: agentic frameworks are great for not needing to reinvent the wheel while building agent patterns.

🔄 Two main types of agentic systems:
• Workflows: Predefined paths
• Agents: Dynamic, self-directed systems

🛠️ Start simple! Only add complexity when needed. Many applications work fine with single LLM calls + retrieval.

🔍 Key Workflow Patterns:
• Prompt chaining
• Routing
• Parallelization
• Orchestrator-workers
• Evaluator-optimizer
Explained below 👇

💡 Prompt chaining
Sequential LLM calls where the output of one feeds into another - like writing content then translating it. Best for tasks with clear subtasks, like:
• Writing + translating content
• Creating outlines then full documents

🔀 Routing
An initial LLM determines which specialized model handles the task, perfect for sorting queries by complexity. Shines when handling different types of inputs:
• Customer service queries
• Difficulty-based task distribution

⚡️ Parallelization
Breaking tasks into parallel subtasks or using multiple LLMs to vote on answers. This has two key forms:
• Sectioning: Breaking into subtasks
• Voting: Multiple attempts for confidence

🎯 Orchestrator-Workers
Think of this as a central conductor leading an orchestra of specialized AI workers. The orchestrator:
• Dynamically breaks down complex tasks
• Delegates to worker LLMs
• Synthesizes their results into cohesive output
Perfect for: Complex coding projects needing changes across multiple files

🎭 Evaluator-Optimizer
This pattern creates a feedback loop where:
• One LLM generates responses
• Another LLM evaluates and provides feedback
• The process repeats until quality targets are met
Ideal for: Literary translations and complex search tasks needing multiple refinement rounds

🎯 Agents are best for:
• Open-ended problems
• Tasks needing flexibility
• Situations requiring autonomous decision-making

⚠️ Remember: Agents trade higher costs and potential errors for autonomy. Always test extensively in sandboxed environments.

🎮 Tool design is crucial! Treat your agent-computer interface (ACI) with the same care as human interfaces. ✅ Three core principles:
• Keep it simple
• Stay transparent
• Design clear tool documentation

🎯 Final takeaway: Success isn't about sophistication — it's about building the right system for your needs. Start simple, measure, then scale up only when needed.

Check out the full blog post here: https://lnkd.in/gdyyqXan
And check out my full video breakdown here: https://lnkd.in/ggeTdfut
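The orchestrator-workers shape reduces to decompose → delegate → synthesize. Everything in this sketch is a stub invented for illustration (a fixed two-way split, string-returning workers); in a real system each of the three roles would be an LLM call:

```python
# Hypothetical orchestrator stubs; each role is an LLM call in practice.
def decompose(task: str) -> list:
    """Orchestrator: dynamically break the task into subtasks."""
    return [f"{task}:part{i}" for i in range(1, 3)]

def worker(subtask: str) -> str:
    """Worker LLM: handle one delegated subtask."""
    return f"done({subtask})"

def synthesize(results: list) -> str:
    """Orchestrator: merge worker outputs into one cohesive result."""
    return "; ".join(results)

def orchestrate(task: str) -> str:
    return synthesize([worker(s) for s in decompose(task)])
```

The difference from plain parallelization is that `decompose` runs at inference time, so the subtask list is not known when the workflow is written.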
-
Are you struggling to build AI agents that work beyond the demo? I’ve spent the past year building and stress-testing agentic systems, and what I’ve found is that most of the pain can be solved with 7 principles:

1️⃣ Structured Workflows > Clever Prompts
Agents need a structured loop: reason → act → reflect → retry → escalate. Loose, one-off prompts won’t sustain multi-step tasks.

2️⃣ Context Handling is Core Architecture
What the agent remembers — and how it recalls it — defines its range. Summaries, scoped retrieval, and structured files work. Dumping full context doesn’t.

3️⃣ Planning is a Must
Agents need a built-in planning process to break down tasks and recover from failure. Plan → execute → review is the backbone of reliable behavior.

4️⃣ Real-world Agents Use Real Tools
Terminal access, Git, APIs — without system interaction, it’s all talk. Execution turns intent into impact.

5️⃣ Reasoning Patterns Must be Enforced in the System
Chain-of-Thought, ReAct — they only work when embedded in the system's logic. Prompting for “step-by-step” isn’t enough on its own.

6️⃣ Autonomy Needs Boundaries
Without guardrails, agents can break things quickly. Scoped actions, fallback logic, and safety checks are essential.

7️⃣ The Magic is in Orchestration
Great agents aren’t just smart — they manage memory, tools, decisions, and recovery. Orchestration is what makes scaling multi-agent systems possible.

If you’re serious about building functional agents, these principles are non-negotiable. Building better agents shouldn’t be gatekept. If this helped you, pass it on 💾♻️
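The loop in principle 1 (reason → act → reflect → retry → escalate) can be sketched as a small retry wrapper. `attempt` is any hypothetical tool action that may raise; the wrapper is an illustration of the control flow, not a production error handler:

```python
def run_step(attempt, max_retries: int = 2) -> tuple:
    """Act, reflect on failure, retry within budget, then escalate to a human."""
    last = None
    for _ in range(max_retries + 1):
        try:
            result = attempt()               # act
            return ("ok", result)            # reflect: success, stop here
        except Exception as exc:             # reflect: failure, note the cause
            last = exc                       # loop continues: retry
    return ("escalate", str(last))           # budget exhausted: escalate
```

Returning an explicit `("escalate", reason)` status, instead of raising, is what gives the orchestration layer (principle 7) a clean hook for human-in-the-loop handoff.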