Utilizing AI in Customer Support

Explore top LinkedIn content from expert professionals.

  • View profile for Beth Kanter

    Trainer, Consultant & Nonprofit Innovator in digital transformation & workplace wellbeing, recognized by Fast Company & NTEN Lifetime Achievement Award.

    521,190 followers

    Article from the NY Times: More than two years after ChatGPT's introduction, organizations and individuals are using AI systems for an increasingly wide range of tasks. However, ensuring these systems provide accurate information remains an unsolved challenge. Surprisingly, the newest and most powerful "reasoning systems" from companies like OpenAI, Google, and Chinese startup DeepSeek are generating more errors, not fewer. While their mathematical abilities have improved, their factual reliability has declined, with hallucination rates higher in certain tests.

    The root of this problem lies in how modern AI systems function. They learn by analyzing enormous amounts of digital data and use mathematical probabilities to predict the best response, rather than following strict human-defined rules about truth. As Amr Awadallah, CEO of Vectara and former Google executive, explained: "Despite our best efforts, they will always hallucinate. That will never go away." This persistent limitation raises concerns about reliability as these systems become increasingly integrated into business operations and everyday tasks.

    6 Practical Tips for Ensuring AI Accuracy
    1) Always cross-check every key fact, name, number, quote, and date in AI-generated content against multiple reliable sources before accepting it as true.
    2) Be skeptical of implausible claims, and consider switching tools if an AI consistently produces outlandish or suspicious information.
    3) Use specialized fact-checking tools to verify claims efficiently without having to conduct extensive research yourself.
    4) Consult subject matter experts for specialized topics where AI may lack nuanced understanding, especially in fields like medicine, law, or engineering.
    5) Remember that AI tools cannot truly distinguish truth from fiction and rely on training data that may be outdated or contain inaccuracies.
    6) Always perform a final human review of AI-generated content to catch spelling errors, confusing wording, and any remaining factual inaccuracies.

    https://lnkd.in/gqrXWtQZ

  • View profile for Yamini Rangan
    153,382 followers

    Came back from vacation Monday. Inbox? On fire. šŸ”„ Buried in the chaos: a customer story that stopped me in my tracks (and made me so happy).

    A Customer Support leader at a fast-growing financial services company used AI to transform his team in just a few weeks. His company is in high-growth mode. Great news, right? Yes! For everyone except his Customer Support team… As the business grew faster, they were bombarded with repetitive questions about simple things like loan statuses and document requirements. Reps were overwhelmed. Customers faced longer response times.

    The company has been a HubSpot customer for nearly 10 years. They turned to Customer Agent, HubSpot's AI Agent, and got to work:
    - Connected it to their knowledge base → accurate, fast answers
    - Set smart handoff rules → AI handles the simple, reps handle the complex
    - Customized the tone → sounds like them, not a generic bot (you know the type)

    In a short space of time, things changed dramatically:
    - Customer Agent now resolves more tickets than any rep
    - 94.9% of customers report being happy with the experience
    - For the first time, the team can prioritize complex issues and provide proactive support to high-value customers

    It's exciting to see leaders using Customer Agent not just to respond to more tickets, but to increase CSAT and empower their teams to drive more impact. 2025 is the year of AI-transformed Customer Support. I am stunned by how quickly that transformation is playing out!

  • View profile for Lindsay Rosenthal šŸ’«

    Founder | Creator | Strategist | Building AI, Leaders, & Ideas That Move Markets

    40,079 followers

    How I saved 14+ hours a week using AI. (The sales efficiency playbook I wish I had years ago.)

    Here's the truth: you need a really good AI sales assistant / notetaker. But most AI meeting summaries are 50% fluff. They tell you what you already know and miss what you actually need. So instead of saving time, you still end up copy-pasting notes, searching for deal context, and scrambling before calls. Not helpful one bit!

    That was me… until I rebuilt my workflow using Sybill (especially their new summaries + briefs features). Here's the 5-step playbook I now follow:

    1. Build your own summaries. Drag + drop sections, add custom AI prompts, hide the noise. No more generic "transcript dumps."
    2. Match the meeting. Demo ≠ Discovery ≠ Proposal. Let your briefs auto-adjust so prep always fits the call type.
    3. Prep in minutes, not hours. Pull prospect research, past calls, CRM notes, and calendar context into one brief. You walk in fully prepped after a 2-minute skim.
    4. Highlight what matters most. Want risks at the top? Competitor mentions? Action items? Make the critical info impossible to miss.
    5. Close the loop fast. Turn summaries into instant follow-ups and clean CRM updates. No more hopping between ChatGPT, email, and CRM to get deals moving.

    The result? 14 hours a week back. Less scrambling. More selling. Make your AI tools give you YOUR time back!

  • View profile for Ion Moșnoi

    8+y in AI / ML | increase accuracy for genAI apps | fix AI agents | RAG retrieval | continuous chatbot learning | enterprise LLM | Python | Langchain | GPT4 | AI ChatBot | B2B Contractor | Freelancer | Consultant

    8,313 followers

    Recently, a client reached out to us frustrated with the RAG (Retrieval-Augmented Generation) application a different AI agency had implemented for their customer support emails. Despite high hopes of increased efficiency, they were facing significant problems.

    The RAG model frequently gave wrong answers by pulling information from the wrong types of emails. For example, it would respond to a refund request with details about changing an order, simply because those emails contained similar wording. Instead of classifying the emails by type and intent, it simply performed a broad embedding search across all emails. This created a confusing mess where customers received completely irrelevant and nonsensical responses to their inquiries. Rather than streamlining operations, the RAG implementation was making customer service worse and more time-consuming for agents. The client's team had tried tuning the model parameters and changing the training data, but couldn't get the application to distinguish between different contexts and email types. They asked us to take a look and help get their system operating reliably.

    After analyzing their setup, we identified a few key issues derailing the RAG performance:

    1. Lack of dedicated email type classification. The pipeline needed an initial step to explicitly classify each email into categories like refund, order change, or technical support. This intent signal could then focus the retrieval and generation steps.
    2. Noisy, inconsistent training data. The original training set mixed incomplete email threads, mislabeled samples, and inconsistent formats, making it very difficult for the model to learn canonical patterns.
    3. Retrieval without context filtering. The retrieval stage ignored the classified email type when filtering and ranking information sources; it simply did a broad embedding search.

    To address these problems, we took the following steps with the client (a minimal sketch of the classify-then-filter pattern follows below):

    - Implemented a new hierarchical classification model to categorize emails before passing them to the RAG pipeline
    - Cleaned and expanded the training data based on properly labeled, coherent email conversations
    - Added filtered retrieval based on the email type classification signal
    - Performed further fine-tuning rounds with the augmented training set

    After deploying the updated system, we saw an immediate improvement in the RAG application's response quality and relevance. Customers finally started getting on-point information addressing their specific requests and issues. The client's support team also reported a significant boost in productivity: with accurate, contextual draft responses from the RAG model, they could focus on personalizing and clarifying the text instead of starting responses completely from scratch.
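    To make the classify-then-filter pattern concrete, here is a minimal, self-contained sketch; it is not the client's actual pipeline. The `embed` and `classify_intent` functions are toy stand-ins (a real system would call an embedding model and the trained hierarchical classifier); the point is that retrieval only ranks documents whose email type matches the classified intent, rather than embedding-searching everything.

    ```python
    from dataclasses import dataclass
    import math

    @dataclass
    class Doc:
        text: str
        email_type: str   # e.g. "refund", "order_change", "tech_support"
        vector: list[float]

    def embed(text: str) -> list[float]:
        # Toy stand-in embedder: letter frequencies, L2-normalized.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def classify_intent(email: str) -> str:
        # Toy stand-in classifier: production would use a trained model.
        if "refund" in email.lower():
            return "refund"
        if "change my order" in email.lower():
            return "order_change"
        return "tech_support"

    def retrieve(email: str, docs: list[Doc], k: int = 2) -> list[Doc]:
        intent = classify_intent(email)                           # 1: classify first
        candidates = [d for d in docs if d.email_type == intent]  # 2: filter by type
        qv = embed(email)
        def score(d: Doc) -> float:                               # 3: rank survivors
            return sum(a * b for a, b in zip(qv, d.vector))
        return sorted(candidates, key=score, reverse=True)[:k]

    docs = [
        Doc("Refunds are issued within 5 business days.", "refund", embed("refund policy")),
        Doc("Orders can be changed before they ship.", "order_change", embed("change order")),
    ]
    print([d.text for d in retrieve("I would like a refund for order 1234", docs)])
    ```

    With the type filter in place, a refund email can no longer be answered from order-change documents, no matter how similar the wording is.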

  • View profile for Sri Elaprolu

    Director, AWS Generative AI Innovation Center

    11,265 followers

    🧵 Real Stories of Generative AI in Action (Feature 6 of a multi-part series; you can access the full series at #AWSGenAIinAction)

    šŸ—£ļø For all organizations with a customer service function, efficient classification and routing of queries is critical to ensure customers receive a speedy and accurate service experience. Recently, the AWS Generative AI Innovation Center (#GenAIIC) collaborated with leading property and casualty insurance carrier Travelers to address this challenge.

    šŸ·ļø Classify to accelerate: Travelers receives millions of emails a year with agent or customer requests to service policies, and 25% of those emails contain attachments (e.g. ACORD insurance forms as PDFs). Requests involve areas like address changes, coverage adjustments, payroll updates, or exposure changes. The main challenge was classifying the emails Travelers receives into service request categories.

    šŸ‘©ā€šŸ’» Designing an efficient system: To achieve the optimal balance of cost and accuracy, the solution employed prompt engineering on a pre-trained Foundation Model (FM) with few-shot prompting to predict the class of an email, all built on Amazon #Bedrock using Anthropic #Claude models. The teams manually analyzed 4K+ email texts and consulted business experts to understand the differences between categories, giving the FM sufficient explanations and explicit instructions on how to classify an email. Additional instructions showed the model how to identify key phrases that distinguish one email class from the others. The workflow starts with an email; given the email's text and any PDF attachments, the model classifies it into one of 13 defined classes. (A minimal sketch of this kind of few-shot classifier follows below.)

    āœ… Getting it right, and saving time: The Travelers solution (with prompt engineering, category condensing, document processing adjustments, and improved instructions) raised classification accuracy to 91%, a 23-point improvement over the original solution with just a pre-trained FM. By using the predictive capabilities of FMs to classify complex, and sometimes ambiguous, service request emails, the system will save tens of thousands of hours of manual processing and allow Travelers to redirect that time toward more complex tasks.

    🌟 What I find most inspiring about this project is how it demonstrates the practical application of #GenAI to augment human capabilities. It's a great example of how organizations can use technology to optimize operations while maintaining focus on customer experience. Read more here: https://lnkd.in/et3P5XYp

    #AWS #CustomerService #Innovation #DigitalTransformation #Insurance #TechInnovation #AmazonBedrock
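    As an illustration of the few-shot prompting approach described above, here is a minimal sketch using the boto3 Bedrock runtime Converse API. The model ID, the category names, the key-phrase hint, and the example email are placeholders for illustration, not details of the Travelers solution.

    ```python
    import boto3

    # Placeholder categories: the real solution used 13 classes defined with
    # business experts; these are illustrative only.
    CATEGORIES = ["address_change", "coverage_adjustment", "payroll_update", "other"]

    SYSTEM_PROMPT = (
        "You classify insurance service request emails into exactly one category: "
        + ", ".join(CATEGORIES) + ".\n"
        "Key phrases like 'moved to a new address' indicate address_change.\n"
        "Respond with the category name only.\n\n"
        "Example:\nEmail: Please update the insured's mailing address.\n"
        "Category: address_change"
    )

    def classify_email(email_text: str,
                       model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
        """Few-shot classification of one email via the Bedrock Converse API."""
        client = boto3.client("bedrock-runtime")
        response = client.converse(
            modelId=model_id,
            system=[{"text": SYSTEM_PROMPT}],
            messages=[{"role": "user",
                       "content": [{"text": f"Email: {email_text}\nCategory:"}]}],
            inferenceConfig={"maxTokens": 20, "temperature": 0},
        )
        return response["output"]["message"]["content"][0]["text"].strip()

    print(classify_email("Our payroll figures changed this quarter; please adjust the policy."))
    ```

    The same structure scales to more classes: the system prompt carries the category definitions, distinguishing key phrases, and a handful of worked examples, which is where the manual analysis of the 4K+ emails pays off.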

  • View profile for Martin Crowley

    You don't need to be technical. Just informed.

    51,113 followers

    Manual sales follow-ups are officially dead. AI eats admin work. Reps should sell, not write notes. Grant Hushek shows you how to automate it all:

    1. Capture calls with Fathom. Automatically record and transcribe every call. Nothing slips through the cracks.
    2. Trigger Zapier with "New Transcript". Launch the workflow the moment the call ends.
    3. Analyze tone using OpenAI. Run a sentiment check with ChatGPT. Positive? Negative? Neutral? Logged. (A minimal sketch of this step follows below.)
    4. Extract insights via Claude. Use AI to pull: action items, objections, questions, goals, dates.
    5. Format it for HubSpot. Claude replies in rich text. Bolded. Bullet-pointed. CRM-ready.
    6. Auto-update HubSpot. Find the contact by email. Create one if it doesn't exist.
    7. Save everything in Google Drive. Transcript goes in a shared folder. Google Doc includes summary + links.
    8. Notify the team in Slack. Slack pings with the full debrief. CRM link. Summary. Transcript. Done.

    AI handles the busywork. Reps stay focused on closing. Follow-ups go from messy to automatic.

    P.S. Want to learn more about AI? 1. Scroll to the top 2. Click "View my newsletter" 3. Sign up for our free newsletter.
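    Step 3's sentiment check boils down to a single LLM call. Here is a minimal sketch using the OpenAI Python SDK; in Grant's workflow this step runs as a no-code Zapier action, and the model name and prompt wording here are illustrative assumptions.

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def call_sentiment(transcript: str) -> str:
        """Classify a call transcript's overall tone as Positive, Negative, or Neutral."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Classify the sales call transcript's overall tone. "
                            "Answer with one word: Positive, Negative, or Neutral."},
                {"role": "user", "content": transcript},
            ],
            temperature=0,
        )
        return response.choices[0].message.content.strip()

    print(call_sentiment("Customer said the demo looked great and asked for pricing."))
    ```

    The one-word constraint matters: downstream steps (HubSpot fields, Slack digests) want a clean enum value to log, not a paragraph of analysis.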

  • View profile for Robb Clarke

    Head of AI @ RB2B | $0 to $6M+ ARR in 18 months | AI, CX, UX, UI, product, support, design

    3,936 followers

    Using an AI support agent at RB2B has saved us $30k since April. "Is an AI agent worth it?" You bet your ass it is. Here's the breakdown:

    BACKGROUND
    I started using Intercom's Fin agent with RB2B back in April. The rollout was gradual over April and May. In June I let it off the leash: it started handling email tickets and became one of our two primary support avenues (alongside self-serve workflows). At the end of September, I mothballed self-serve workflows as an initial means of support and went AI-first.

    COST
    Fin costs us $0.99 per resolution. Between April 1st and today, Fin was involved in 4,998 conversations and resolved 2,773 of them. We paid Intercom $2,745.49 for that. Money well spent.

    HUMAN EQUIVALENT
    Let's assume the average time our human team spends on a ticket is 20 minutes. Fin has saved our human team 924 hours of work since April. Those 924 hours would have cost us $37,776.29.

    SAVINGS
    Easy math: $35,030.80 saved between April and today. (A sketch reproducing this arithmetic follows below.)

    OUTLOOK
    The figures below are based on the improved AI resolution rate (~60%) between July and today. Estimated over the next 12 months:
    - AI tickets resolved: 6,405
    - AI cost: $6,341.33
    - Human hours saved: 2,135.13

    The equivalent human cost would be $87,252.83 over the next year. That's a SAVINGS OF $80,911.50 by this time in 2025. And that's at our current resolution rate with our current volume, both of which are consistently increasing.

    "Is an AI agent worth it?" You tell me.
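    The arithmetic is easy to reproduce. Here is a minimal sketch under the post's own stated assumptions: $0.99 per AI resolution, 20 minutes of human time per ticket, and the hourly rate implied by the $37,776.29 figure (roughly $40.87/hour). Small differences from the post's exact dollar amounts come down to rounding.

    ```python
    # Reproducing the post's savings math under its stated assumptions.
    RESOLUTION_COST = 0.99       # dollars Intercom Fin charges per resolution
    MINUTES_PER_TICKET = 20      # assumed human handling time per ticket
    HOURLY_RATE = 37_776.29 / (2773 * MINUTES_PER_TICKET / 60)  # implied ~ $40.87/h

    def savings(tickets_resolved: int) -> dict:
        hours_saved = tickets_resolved * MINUTES_PER_TICKET / 60
        human_cost = hours_saved * HOURLY_RATE
        ai_cost = tickets_resolved * RESOLUTION_COST
        return {"hours_saved": round(hours_saved, 1),
                "ai_cost": round(ai_cost, 2),
                "human_cost": round(human_cost, 2),
                "savings": round(human_cost - ai_cost, 2)}

    print(savings(2773))   # April-to-date figures: ~ $35,030 saved
    print(savings(6405))   # 12-month outlook: ~ $80,900 saved
    ```

    The takeaway from the structure of the math: savings scale linearly with resolved tickets, so the gap widens as volume and resolution rate climb.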

  • View profile for Anthony Natoli

    Senior Account Executive @ LinkedIn | Helping sellers build systems to perform their best in sales & life | Creator & Speaker

    55,495 followers

    I completely overhauled my discovery call follow-up email and it changed everything for me. Here's the template, an example, and how you can use AI to help:

    Less time spent on follow-up emails? Check.
    Better experience for my prospects & customers? Check.
    Better outcomes? Check.

    Here it is and why: I never recap the call or its takeaways anymore. It's a waste of space. Instead, I am incredibly intentional with my email. I realized prospects and customers don't need a recap; they need to take action. So, I clearly outline:
    - Problem(s) we agreed we are looking to solve & timeline
    - Prospect's action items
    - My action items
    - Agreed-upon next steps & timeline
    - Any follow-up resources they requested & why

    Usually less than 200 words. That's it.

    šŸ¤– Using AI workflow: Copy the transcript from your call recorder and paste it into your favorite (compliance-approved) LLM, e.g. Copilot. The LLM then creates a follow-up using the above template. (A minimal prompt sketch follows below.)

    EXAMPLE:

    Hey John, thanks for the call today. Below is a recap of the problem we agreed we are solving, both of our action items, and resources discussed.

    Problem: Despite X, ABC Corp still doesn't have Y, which means Z, as measured by (metric & $ value). Need to tackle this by start of Q3.

    Your next steps:
    - Figure out when the competitor contract ends by Tuesday, July 23rd
    - Share back XYZ on the pre-demo call

    My next steps/resources to share:
    - Loop in SC for the next call
    - (Summary of relevant resource here)

    Our immediate next step:
    - Pre-demo prep call on Thursday, July 25th

    Thanks, and looking forward to our call on Thursday where we'll prep for Friday's group demo.
    -Anthony

    Feel free to steal!
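    If you script this step rather than pasting into a chat UI, the template becomes a reusable prompt. A minimal sketch: the template mirrors the post's outline, while the function name and the idea of building the prompt programmatically are illustrative assumptions (check your compliance policy before sending transcripts to any API).

    ```python
    FOLLOW_UP_TEMPLATE = """You write post-discovery-call follow-up emails for a sales rep.
    From the transcript below, draft an email of under 200 words that skips any recap
    and instead lists, in this order:
    - Problem(s) we agreed to solve, with timeline
    - Prospect's action items
    - My action items
    - Agreed-upon next steps and timeline
    - Any follow-up resources they requested, and why

    Transcript:
    {transcript}
    """

    def build_follow_up_prompt(transcript: str) -> str:
        # Paste the result into your compliance-approved LLM (e.g. Copilot),
        # or send it through that tool's API if your org allows it.
        return FOLLOW_UP_TEMPLATE.format(transcript=transcript)

    print(build_follow_up_prompt("…call transcript from your recorder goes here…"))
    ```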

  • View profile for Karn Singh

    Founder DroneX AI | Building AI Solutions That Maximize B2B Growth | AI Educator Empowering Innovation & Sharing Insights

    18,197 followers

    Guardrails are the backbone of production-ready, safe AI applications. They are programmable rules that monitor and control LLM behavior. Guardrails ensure outputs are:
    - Ethical (no harmful or biased language)
    - Factually accurate
    - Secure and compliant with laws like GDPR
    - Structured for user needs

    Without them, AI can generate offensive, inaccurate, or unsafe responses. With them, you build trust, avoid risks, and improve results.

    10 Types of Guardrails Every AI Team Should Know

    1. Ethical Guardrails: Prevent biased or harmful content. Example: ensuring gender-neutral language in job descriptions.
    2. Compliance Guardrails: Enforce legal and regulatory standards. Example: blocking content that violates data privacy laws.
    3. Content Validation: Fact-checks and ensures reliable outputs. Example: correcting logical errors in coding suggestions.
    4. Response Format: Enforces consistent structure and tone. Example: standardized email templates that reflect brand voice.
    5. Contextual Relevance: Keeps conversations on-topic and meaningful. Example: redirecting vague questions to focused responses.
    6. Logic Validation: Assesses accuracy in technical domains. Example: identifying flaws in mathematical solutions.
    7. Security Guardrails: Prevent data leaks and misuse. Example: masking sensitive data like credit card numbers.
    8. Adaptive Guardrails: Evolve based on user feedback and changing norms. Example: updating filters for emerging ethical concerns.
    9. Agent-Based Monitoring: Automates interaction oversight. Example: using a secondary agent to validate all LLM outputs.
    10. LLM-in-the-Loop: A dual-model setup for double-checking responses. Example: cross-verifying content accuracy before publishing.

    With guardrails, you get:
    - Safer AI systems: minimized harmful outputs
    - Better compliance: adherence to laws and ethics
    - Improved quality: accurate, reliable, and structured outputs

    How to Implement LLM Guardrails
    1. Use open-source tools like Guardrails AI and NVIDIA NeMo Guardrails.
    2. Apply techniques like prompt engineering to guide LLM outputs.
    3. Leverage agent-based systems for automated governance.
    (A minimal hand-rolled sketch of two of these guardrail types follows below.)

    Pro tip: Guardrails aren't static. They evolve with your needs and feedback, ensuring your AI adapts to new challenges. Guardrails = safer, smarter AI.

    ------
    I share my learning journey here. Join me and let's grow together. Enjoy this? Repost it to your network and follow Karn Singh for more.
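    As a concrete illustration of two of the types above (security and response format), here is a minimal hand-rolled sketch. It is not the Guardrails AI or NeMo Guardrails API; production systems would typically use one of those frameworks rather than bare regexes and checks like these.

    ```python
    import re

    # Matches 13-16 digits optionally separated by spaces or hyphens (card-like).
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def mask_sensitive(text: str) -> str:
        """Security guardrail: mask credit-card-like numbers before they reach the LLM."""
        return CARD_PATTERN.sub("[REDACTED CARD]", text)

    def validate_format(reply: str, required_prefix: str = "Answer:") -> str:
        """Response-format guardrail: enforce a consistent output structure."""
        if not reply.startswith(required_prefix):
            raise ValueError("LLM reply violated the required format; retry or escalate.")
        return reply

    user_msg = mask_sensitive("My card 4242 4242 4242 4242 was double charged.")
    print(user_msg)  # -> "My card [REDACTED CARD] was double charged."
    print(validate_format("Answer: We have opened refund case #123 for you."))
    ```

    The same pattern generalizes: input guardrails run before the model call, output guardrails after it, and either can block, rewrite, or escalate.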

  • View profile for Rishab Rege, Executive MBA, PMP

    šŸš€ Driving Digital Innovation and Business Transformation through AI, Strategic Leadership, and Scalable Solutions

    6,496 followers

    šŸš€ Unlocking Multi-Level Email Attachments in Pega: A Practical Solution

    Dealing with nested or multi-level email content in Pega can pose a unique challenge, especially when your process requires extracting all underlying attachments from an email and attaching them to a case. Many in our Pega community, experts and enthusiasts alike, have encountered this scenario: attachments within an attached email (.EML file) need to be extracted and appropriately attached to the case for comprehensive processing.

    Currently, while Pega's Email Channel and NLP capabilities proficiently handle direct attachments to an email, they may not automatically delve into attachments nested within other attachments, such as PDFs inside an .EML file attached to the original email. This limitation necessitates a more customized approach to ensure no data is missed during case processing.

    Here's How to Tackle This:
    1. Utilize Custom Activities: Create a custom activity that triggers when an email is received. This activity should check for .EML attachments and, upon finding any, parse these files to identify further attachments within.
    2. Implement Java Code: Within the custom activity, leverage Java code designed to read .EML files. This code should extract any attachments found within these files (e.g., PDF2 and PDF3 from the scenario) and prepare them for attachment to the case. (A sketch of the parsing logic follows below.)
    3. Attachment Handling: Once extracted, use Pega APIs to attach these newly extracted files to the relevant case, mimicking the behavior seen with directly attached files.
    4. Testing and Iteration: Rigorously test this custom solution across various email scenarios to ensure reliability, and be prepared to iterate based on findings.

    Expert Insight: Addressing this requirement may involve diving deeper into Pega's capabilities and potentially extending them with custom code. It's a testament to the flexibility and extensibility of Pega's platform, allowing solutions to be tailored to specific business needs. While Pega offers robust tools for email processing, certain complex scenarios like nested attachments require a blend of out-of-the-box features and custom development. By adopting a creative approach, leveraging Pega's extensible architecture, and engaging with the vibrant Pega community for insights and support, organizations can effectively meet and exceed their case management requirements.

    Note: This solution highlights a pathway to extend Pega's native email handling capabilities to meet specialized requirements, showcasing the adaptability of Pega's platform and the value of community collaboration.

    #PegaCommunity #casemanagement #EmailProcessing #PegaSolutions #digitaltransformation
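    The parsing logic in step 2 is straightforward in any language with a MIME library. Here is a minimal sketch of the recursive walk using Python's standard `email` package; the actual Pega activity would implement the same idea in Java (e.g., with JavaMail's MimeMessage) and then hand the results to Pega's attachment APIs. The input filename is hypothetical.

    ```python
    from email import policy
    from email.message import EmailMessage
    from email.parser import BytesParser

    def extract_attachments(msg: EmailMessage, collected=None) -> list[tuple[str, bytes]]:
        """Recursively collect (filename, bytes) pairs, descending into nested emails."""
        if collected is None:
            collected = []
        for part in msg.iter_attachments():
            name = (part.get_filename() or "unnamed")
            if part.get_content_type() == "message/rfc822":
                # Attached email kept as a parsed message: recurse into it.
                extract_attachments(part.get_payload(0), collected)
            elif name.lower().endswith(".eml"):
                # Attached email shipped as a plain file: parse it, then recurse.
                inner = BytesParser(policy=policy.default).parsebytes(
                    part.get_payload(decode=True))
                extract_attachments(inner, collected)
            else:
                # A leaf attachment, e.g. a PDF nested inside the attached email.
                collected.append((name, part.get_payload(decode=True) or b""))
        return collected

    with open("inbound_message.eml", "rb") as f:  # hypothetical input file
        top = BytesParser(policy=policy.default).parsebytes(f.read())
    for filename, data in extract_attachments(top):
        print(filename, len(data), "bytes")
    ```

    The recursion is the whole trick: each attached .EML is itself a full MIME message, so the same walk applies at every level of nesting.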
