Conversational AI is transforming customer support, but making it reliable and scalable is a complex challenge. In a recent tech blog, Airbnb’s engineering team shares how they upgraded their Automation Platform to enhance the effectiveness of virtual agents while ensuring easier maintenance. The new Automation Platform V2 leverages the power of large language models (LLMs). However, recognizing the unpredictability of LLM outputs, the team designed the platform to harness LLMs in a more controlled manner. They focused on three key areas to achieve this: LLM workflows, context management, and guardrails.

The first area, LLM workflows, ensures that AI-powered agents follow structured reasoning processes. Airbnb incorporates Chain of Thought, an AI agent framework that enables LLMs to reason through problems step by step. By embedding this structured approach into workflows, the system determines which tools to use and in what order, allowing the LLM to function as a reasoning engine within a managed execution environment.

The second area, context management, ensures that the LLM has access to all relevant information needed to make informed decisions. To generate accurate and helpful responses, the system supplies the LLM with critical contextual details, such as past interactions, the customer’s inquiry intent, current trip information, and more.

Finally, the guardrails framework acts as a safeguard, monitoring LLM interactions to ensure responses are helpful, relevant, and ethical. This framework is designed to prevent hallucinations, mitigate security risks like jailbreaks, and maintain response quality, ultimately improving trust and reliability in AI-driven support. By rethinking how automation is built and managed, Airbnb has created a more scalable and predictable Conversational AI system.
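To make the three areas concrete, here is a minimal sketch of how a guarded LLM workflow can be wired together. All names (`build_context`, `guardrail`, `run_workflow`) and the banned-phrase check are illustrative assumptions, not Airbnb's actual implementation:

```python
# Hypothetical sketch of the three areas the post describes:
# structured workflow, context management, and guardrails.
# Names and checks are illustrative, not Airbnb's real API.

def build_context(customer):
    """Context management: gather the details the LLM needs."""
    return {
        "intent": customer.get("intent", "unknown"),
        "history": customer.get("past_interactions", []),
        "trip": customer.get("current_trip"),
    }

def guardrail(response):
    """Guardrails: reject responses that look unsafe or off-policy."""
    banned = ["ignore previous instructions"]  # toy jailbreak check
    ok = bool(response) and not any(b in response.lower() for b in banned)
    return response if ok else "Escalating to a human agent."

def run_workflow(customer, llm):
    """Structured workflow: context in, LLM reasons, guardrail checks out."""
    context = build_context(customer)
    draft = llm(context)     # the LLM acts as the reasoning engine
    return guardrail(draft)  # every output passes through the guardrail
```

The point of the sketch is the shape, not the checks: the LLM never talks to the customer directly; it runs inside a managed pipeline that controls its inputs and validates its outputs.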
Their approach highlights an important takeaway for companies integrating AI into customer support: AI performs best in a hybrid model, where structured frameworks guide and complement its capabilities.

#MachineLearning #DataScience #LLM #Chatbots #AI #Automation #SnacksWeeklyonDataScience

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Youtube: https://lnkd.in/gcwPeBmR
https://lnkd.in/gFjXBrPe
AI in Customer Support: Overcoming Common Challenges
Summary
AI in customer support is transforming the way businesses interact with customers by automating responses, improving resolution times, and ensuring efficiency. However, overcoming challenges like lack of contextual understanding, implementing safeguards, and identifying where AI adds value are key to ensuring its success.
- Build a robust knowledge base: Create and maintain detailed, AI-friendly resources, such as FAQs, troubleshooting guides, and change logs, to ensure your AI can access comprehensive information when assisting customers.
- Prioritize context management: Equip AI systems with relevant customer data, situational awareness, and product/service information, enabling them to deliver accurate and personalized responses.
- Implement structured workflows: Develop clear frameworks and guardrails for your AI, ensuring it is guided through reasoning processes and monitored for quality, ethical responses, and error minimization.
I have been working with AI in customer support for a while now, and lately one thing is becoming clear: this space is getting crowded. Every vendor claims their AI is the magic wand. Just plug it in, and your support problems disappear. But the reality is different. AI isn’t magic. It’s a strategy. It has to be planned, adapted, and rolled out based on:
🔹 Your goals
🔹 Your current challenges
🔹 And your team’s capacity

Most support leaders we speak with aren’t confused about the tech. They are confused about where to use it. That’s the real challenge. So we created a simple matrix to help teams make better AI decisions. It’s built on just two questions:
1. What’s the risk if AI gets this wrong?
2. How complex is the task?

When you map support work using this lens, things get clearer:
- Use AI fully for low-risk, repetitive tasks like tagging, triaging, or summarising.
- Use AI as a helper for pattern-based tasks like routing, recommending actions, or drafting replies.
- Keep humans in control for high-risk, complex issues like escalations, complaints, or anything tied to revenue.

And here’s the other mindset shift: don’t think of support AI as one giant bot. Think of it as a system of specialised agents:
🔹 Analyzers – understand queries, profiles, logs
🔹 Orchestrators – manage workflows, routing
🔹 Reasoners – diagnose problems
🔹 Recommenders – suggest next steps
🔹 Responders – write or send replies

Each agent plays a specific role, just like your support team does. Done right, AI doesn’t replace humans. It supports them, speeds them up, and helps them focus where it matters most.

This approach is also being recognised by the front-runners in the space. At a recent ServiceNow event I attended, many speakers echoed the same thought: AI is not one size fits all. It must be tailored to each organisation’s structure, systems, and bandwidth.

Let’s stop using AI for the sake of it. Let’s start using it where it actually makes a difference.
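The two-question matrix can be sketched as a tiny decision function. The thresholds and labels below are assumptions for illustration; the author's actual matrix may differ:

```python
# Illustrative sketch of the risk/complexity matrix described in the post.
# Labels and the mapping are assumptions, not the author's actual tool.

def automation_level(risk, complexity):
    """Map (risk, complexity), each 'low' or 'high', to an automation level."""
    if risk == "low" and complexity == "low":
        return "full AI"           # e.g. tagging, triaging, summarising
    if risk == "low" or complexity == "low":
        return "AI-assisted"       # e.g. routing, recommendations, drafts
    return "human in control"      # e.g. escalations, complaints, revenue
```

For example, `automation_level("low", "low")` returns `"full AI"`, while a high-risk, high-complexity escalation maps to `"human in control"`.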
If you are building or evaluating AI for support and want to walk through the matrix, feel free to drop me a message. Always happy to exchange notes.
70% of customers assume support teams already have their full context. But only 22% of companies actually do. Here’s why AI agents are fumbling even the “simple” issues:

AI agents don’t fail because they’re “bad”. AI fails because it doesn’t know enough. Even basic support issues turn complex when your AI agent can’t see the full picture. And 90% of vendors out there are only feeding it surface-level stuff:
- Product info
- Help articles
- Basic intent

Every support resolution requires at least three components to see the full picture:

CUSTOMER DATA
- CRM data
- Financial data
- Warranty information
- EMR data
- ERP data

SERVICES & PRODUCT INFORMATION
- Knowledge articles
- Product availability
- Company processes
- Pricing

SITUATIONAL AWARENESS
- Customer intent
- Patient symptoms
- Customer sentiment
- Task urgency

If your AI agent doesn’t have that, it’s not resolving. It’s guessing.

EXAMPLE
- Patient chats: “My stomach hurts.”
- A basic AI agent says: “Here are 5 causes of stomach pain.”
- A context-aware AI agent says: “You’ve had digestive issues recently. Dr. Patel is free at 3:30PM. Insurance is approved. Want to book?”

One leads to churn. The other builds trust.

Context isn’t a nice-to-have. It’s the foundation of resolution. And if your AI doesn’t have it, don’t expect it to work. Your customers deserve more than guesswork.

#CustomerExperience #AIagents #SupportAutomation
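The three-component idea can be sketched as a context assembler that refuses to proceed when any component is missing. The function and field names are hypothetical, for illustration only:

```python
# Hedged sketch of the post's three context components: customer data,
# services/product information, and situational awareness. Field names
# are illustrative assumptions.

def assemble_context(customer_data, product_info, situation):
    """Merge the three components; fail loudly if any is missing."""
    parts = [
        ("customer data", customer_data),
        ("product info", product_info),
        ("situational awareness", situation),
    ]
    missing = [name for name, part in parts if not part]
    if missing:
        # Without full context, the agent is guessing, not resolving.
        raise ValueError("Incomplete context: missing " + ", ".join(missing))
    return {**customer_data, **product_info, **situation}
```

The design choice the post argues for is the hard failure: an agent that escalates when context is incomplete beats one that answers from surface-level data alone.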
Two weeks ago I said AI Agents are handling 95% of our sales and support and I replaced $300k of salaries with a $99/mo Delphi clone. 25+ founders DM’d me: “HOW?”

Here are the 6 things you MUST do if you want to run your entire customer-facing business with AI:

1. Create a truly excellent knowledge base.
Your AI is only as good as the content you feed it. If you’re starting from zero, aim for one post per day: answer a support question by writing a post, then reply with the post. After 6 months you’ll have 180 posts.

2. Have Robb’s CustomGPT edit the posts to be consumed by AI.
Robb created a GPT (link below) that tweaks posts according to Intercom’s guidance for creating content for Fin. The content is still legible to humans, but optimized for AI.

3. Eliminate recursive loops, because pissed-off customers won’t buy.
If your AI can’t answer a question but sends the customer to an email address which is answered by the same AI, you are in trouble. Fin’s guidance feature can set up rules to escalate appropriately, eliminate loops, and keep customers happy.

4. Look at every single question every single day (yes, EVERY DAY).
Every morning Robb looks at every Fin response and I look at every Delphi response. If they aren’t as good as they could possibly be, we either revise the response or Robb creates a support doc to properly handle the question.

5. Make sure you have FAQs, troubleshooting guides, and changelogs.
FAQs are an AI’s dream. Bonus points if you write FAQs exactly how your customers ask the question. We have a main FAQ, and FAQs for each subsection of our support docs. Detailed troubleshooting gives the AI the ability to handle technical questions; Fin can solve 95% of script-install issues because of our Troubleshooting section. Changelogs let the AI stay on top of what’s changed in the app, giving it context for questions about features and UI as they change.

6. Measure your AI’s performance and keep it improving.
When we started using Fin over a year ago, we were at 25% positive resolutions. Now we’re above 70%. You can actively monitor positive resolutions, sentiment, and CSAT to make sure your AI keeps improving and delivering your customers an increasingly positive experience.

TAKEAWAY: Every founder wants to replace entire teams with AI. But nobody wants to do the actual work to make it happen. Everybody expects to flip a switch and have perfect customer service. The reality? You need to treat your AI like your best employee: train it daily, give it the resources it needs, and hold it accountable for results.

Here’s the truth that the LinkedIn clickbait won’t tell you: the KEY to successfully running entire business units with AI is that your AI is only as good as the content you feed it.

P.S. Want Robb’s CustomGPT? We just launched a 6-part video series on how RB2B trained its agents well enough to disappear for a week and let AI run the entire business. Access it and get all our AI tools: https://www.rb2b.com/ai
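Step 6, measuring performance, can be sketched with a couple of small helpers for tracking positive-resolution rate over time. The function names and the 25% → 70% figures used in the example are taken from the post; the improvement threshold is an assumption:

```python
# Minimal sketch of tracking positive-resolution rate, the metric the post
# reports improving from 25% to 70%. Helper names are illustrative.

def resolution_rate(outcomes):
    """outcomes: list of booleans, True = positive resolution."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def improved(baseline, current, min_gain=0.05):
    """True if the rate rose by at least min_gain over the baseline."""
    return current - baseline >= min_gain
```

In practice you would compute `resolution_rate` per day or per week from your helpdesk's conversation outcomes, alongside sentiment and CSAT, and only ship knowledge-base changes that move the number.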