From Scripted Flows to Autonomous Agents: The Future of Conversational AI Design
Over the past decade, Conversational AI has evolved from rigid, intent-based scripts to far more dynamic systems.
With the arrival of LLM-powered agents, we are entering a new design paradigm that challenges the foundational principles that guided voice and chatbot UX until now.
In this article, I explore what this shift means for designers, and how Conversation Analysis, ethnographic research, and sociocultural design can help us keep these increasingly autonomous systems predictable, safe, and genuinely helpful.
From Matching Intents to Pursuing Goals
Traditional conversational systems worked like structured obstacle courses: detect the user’s intent, follow the matching script, and deliver an answer or complete a transaction.
Agentic AI changes everything. Instead of scripting every path, we now design agents that pursue clearly defined goals, orchestrate tools under strict permissions, and manage memory and context as persistent UX components.
The designer’s question has changed. It is no longer “What will the bot say next?” but “How will the agent act, given its goals, tools, and guardrails?”
This is a shift from dialogue as output to action as process, and it requires us to think in terms of strategic decision-making rather than predefined turns.
Rethinking Metrics for Autonomous Agents
CSAT and containment still matter, but they are not enough.
We now need to measure:
- Task success rate
- Accuracy of plans and tool calls
- Safety compliance and robustness against failure
- Autonomy quality (predictable, aligned decision-making)
- Cost–latency–quality trade-offs per solved task
These are measurable against established benchmarks such as SWE-bench, WebArena, and HAL, bringing transparency and accountability to agent performance.
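To make these metrics concrete, here is a minimal sketch of how per-task evaluation records might be aggregated. The record fields and function names are my own illustration, not a standard harness; real benchmark suites such as SWE-bench or WebArena define their own scoring.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One evaluated agent task: the outcome plus the cost of getting there."""
    succeeded: bool
    tool_calls_correct: int
    tool_calls_total: int
    safety_violations: int
    cost_usd: float
    latency_s: float

def summarize(records: list[TaskRecord]) -> dict[str, float]:
    """Aggregate per-task records into the metrics listed above."""
    n = len(records)
    solved = [r for r in records if r.succeeded]
    total_calls = sum(r.tool_calls_total for r in records)
    return {
        "task_success_rate": len(solved) / n,
        "tool_call_accuracy": sum(r.tool_calls_correct for r in records) / total_calls,
        "safety_violation_rate": sum(r.safety_violations > 0 for r in records) / n,
        # Cost and latency are averaged over *solved* tasks only:
        # an agent that fails cheaply should not look efficient.
        "cost_per_solved_task": sum(r.cost_usd for r in solved) / max(len(solved), 1),
        "latency_per_solved_task": sum(r.latency_s for r in solved) / max(len(solved), 1),
    }
```

Keeping the raw per-task records, rather than only the aggregates, is what makes the cost–latency–quality trade-off inspectable after the fact.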
Governance as a Design Layer
In earlier IVR and chatbot work, I learned that compliance cannot be an afterthought.
With self-directed agents, governance must be designed in from the start, aligning with EU AI Act requirements, applying TRiSM principles, and safeguarding human agency with consent, explainability, and rollback mechanisms.
The goal is not to restrict autonomy, but to ensure it unfolds within predictable and socially acceptable boundaries.
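One way to make such boundaries concrete is a permission layer that sits between the agent and its tools. The sketch below is purely illustrative (the `ToolGuard` class and the `TOOLS` registry are hypothetical names): every call is checked against an allow-list, consent-gated actions require explicit user approval, and an audit log provides the raw material for explainability and rollback.

```python
# Hypothetical tool registry for the example; real tools would call backend APIs.
TOOLS = {
    "search_faq": lambda query: f"results for {query!r}",
    "issue_refund": lambda amount: f"refunded {amount}",
}

class ToolGuard:
    """Governance layer sketch: tool calls are checked against an allow-list,
    consent-gated tools need explicit approval, and every action is logged."""

    def __init__(self, allowed_tools: set[str], requires_consent: set[str]):
        self.allowed_tools = allowed_tools
        self.requires_consent = requires_consent
        self.audit_log: list[tuple[str, dict]] = []  # basis for rollback and review

    def call(self, tool: str, args: dict, user_consented: bool = False):
        if tool not in self.allowed_tools:
            raise PermissionError(f"Tool {tool!r} is outside the agent's mandate")
        if tool in self.requires_consent and not user_consented:
            raise PermissionError(f"Tool {tool!r} requires explicit user consent")
        self.audit_log.append((tool, args))
        return TOOLS[tool](**args)
```

The design point is that the agent never touches a tool directly: autonomy unfolds inside the guard, not around it.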
Conversation Analysis as an Anchor
In my book The Future of Talk, I show how Conversation Analysis (CA) offers a framework for moving beyond scripted flows.
CA treats conversation as a structured co-construction of meaning. Concepts like adjacency pairs, repair sequences, and topic management give us ways to constrain the freedom of LLMs without sacrificing adaptability.
Agents designed this way can detect hesitation, navigate topic shifts, and recover from misunderstandings. These capabilities are not “nice-to-haves” but preconditions for trust.
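As a toy illustration of one CA concept, repair initiation, consider a check that flags when the user signals the agent's previous turn was unclear, so the agent reformulates instead of pushing ahead with its plan. The marker list is my own and deliberately crude; a production system would use a trained classifier rather than regexes.

```python
import re

# Illustrative other-initiated repair markers; far from exhaustive.
REPAIR_INITIATORS = re.compile(
    r"\b(what do you mean|come again|i didn'?t (get|catch|understand) that)\b",
    re.IGNORECASE,
)

def needs_repair(user_turn: str) -> bool:
    """True if the user is initiating repair on the agent's prior turn,
    i.e. the agent should clarify or reformulate before acting."""
    return bool(REPAIR_INITIATORS.search(user_turn))
```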
Multilingual and Cross-Channel Realities
In global deployments, language is never just translation.
We must preserve semantic intent, brand voice, and interactional appropriateness across languages, dialects, and channels. A message that works in a chat can feel abrupt in voice, where timing and prosody matter.
Best practice combines localized training data, cultural QA by native speakers, retrieval-augmented responses, and orchestration layers that keep context consistent across channels.
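The orchestration idea can be sketched in a few lines: one shared context per user regardless of entry channel, plus channel-aware rendering of a single semantic message. Class and function names here are assumptions for illustration, not a reference to any particular platform.

```python
class SessionStore:
    """One context object per user, shared across chat and voice turns."""

    def __init__(self):
        self._contexts: dict[str, dict] = {}

    def context(self, user_id: str) -> dict:
        # The same dict is returned whether the next turn arrives via
        # chat or voice, so slots filled in one channel persist in the other.
        return self._contexts.setdefault(user_id, {"history": [], "slots": {}})

def render(message: str, channel: str) -> str:
    """Adapt one semantic message to channel norms: chat tolerates
    terseness, voice benefits from softeners and a closing check."""
    if channel == "voice":
        return f"Sure. {message.rstrip('.')}, okay?"
    return message
```

Separating what the agent means from how each channel says it is what keeps the interaction consistent without duplicating logic per channel.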
The Evolving Role of the Designer
In the next decade, AI agent designers will go far beyond flow creation.
We will orchestrate goal-driven, autonomous systems, integrating reasoning, planning, and multi-modal capabilities. Our work will span reasoning-pattern design, autonomy calibration, continuous evaluation pipelines, and governance integration from day one.
We will bridge technical architecture, UX, and ethics, acting as strategic orchestrators of both capability and responsibility.
Final Thought
When I transitioned from Linguistics to the design of conversational agents, I discovered that structure, cultural grounding, and context-awareness are fundamental to creating usable and engaging interactions.
As we move into the era of autonomous AI, these principles remain essential, but they must now be applied to systems capable of thinking, planning, and acting with significantly less human oversight.
The challenge is to keep these agents helpful, predictable, and aligned, not by limiting what they can do, but by shaping the spaces in which they can act.
I invite fellow designers, researchers, and AI practitioners to share how they are addressing these challenges in their own work. What approaches have you found most effective in ensuring both autonomy and alignment in AI systems?