Caution when using Gen AI to create content (e.g., blogs, whitepapers, podcast scripts). I recently ran a fun experiment. I gave GPT-3.5 the following prompt: "Write me a 500 word article on the business value of AI in customer service written in an authoritative tone and providing statistics on ROI."

The resulting article was impressive at first glance. It gave compelling information backed with data, including data attributed to Aberdeen Strategy & Research, Gartner, McKinsey & Company, and IDC. One of the data points just didn't make sense, though. It came from this paragraph:

"Furthermore, AI-driven chatbots have emerged as a game-changer in customer service, providing real-time assistance to customers round the clock. Research by Gartner predicts that by 2025, over 80% of customer service interactions will be handled by AI chatbots. These intelligent virtual agents can efficiently handle routine inquiries, offer personalized recommendations, and even facilitate transactions, all while providing a seamless conversational experience."

As an industry analyst who spent over a decade covering the #contactcenter & #cx space, I know 80% of customer interactions will not be handled by AI chatbots in a mere eight months. #AI is just not ready for that. It's well suited for simple interactions, but it can't yet match the human critical thinking & empathy required to handle more complex interactions effectively. In fact, Aberdeen's latest research shows that as of February 2024, 49% of firms are using AI in their contact center.

So, I did more (traditional online) research on the 80% figure and found that GPT's reference to the Gartner statistic was misrepresented. An August 2023 press release by the firm reports that Gartner predicts 80% of service organizations will use #GenAI by 2025. (Side note: as of February 2024, Aberdeen's research shows Gen AI adoption in the contact center standing at half that predicted rate: 40%...)

This should be a good reminder that AI "hallucinations" are real. In other words, AI can make things up: in this case, misrepresenting data while even referencing the sources of that data. In fact, when I asked GPT-3.5 to provide links to the sources of the data in the article it wrote, it responded that it can't provide real-time links, but that I could trace the sources by following the titles of the articles it reported using to generate the content. A quick Google search using the source name provided by GPT was how I discovered the actual context of the Gartner prediction that was misrepresented in the GPT-created article.

#Contentmarketing is changing rapidly. Gen AI is undoubtedly a very powerful tool that will significantly boost #productivity in the workplace. However, it's not an alternative that can replace humans. Firms aiming to create accurate & engaging content should instead focus on empowering employees with AI capabilities to pair human ingenuity with computer efficiency.
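For content teams adopting a workflow like the one above, a minimal sketch of the "verify before you publish" step might look like the following. It assumes the official `openai` Python client (v1+) with an OPENAI_API_KEY in the environment; the regex-based flagging of statistics and named research firms is purely illustrative and is no substitute for reading the cited reports yourself.

```python
# Generate a draft with GPT-3.5, then flag every statistic or named source
# for manual fact-checking before publication.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Write me a 500 word article on the business value of AI in "
          "customer service written in an authoritative tone and providing "
          "statistics on ROI")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": PROMPT}],
)
draft = response.choices[0].message.content

# Flag sentences containing percentages, years, or well-known research firms
# so a human editor can trace each claim back to the original publication.
CLAIM_PATTERN = re.compile(
    r"\d+%|\b(19|20)\d{2}\b|Gartner|Aberdeen|McKinsey|IDC", re.IGNORECASE
)
for sentence in re.split(r"(?<=[.!?])\s+", draft):
    if CLAIM_PATTERN.search(sentence):
        print("VERIFY BEFORE PUBLISHING:", sentence)
```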
Limitations of AI in Customer Service
Explore top LinkedIn content from expert professionals.
Summary
AI has transformed customer service by offering speed and efficiency, but its limitations—such as lack of empathy, susceptibility to biases, and potential for spreading misinformation—pose significant challenges that require careful consideration and human oversight.
- Address emotional nuances: AI chatbots lack the ability to fully understand and respond to human emotions. Incorporate human agents for complex or empathy-requiring interactions to build trust and connection with customers.
- Ensure accurate data: AI systems can misinterpret or generate incorrect information, so always verify the training data and outputs to avoid spreading misinformation.
- Prioritize inclusivity: Reduce bias in AI by training systems on diverse and high-quality data sets, ensuring accessibility and avoiding discriminatory practices.
We've all experienced them: chatbots, those virtual assistants promising a seamless customer experience. But when these AI-powered interactions go wrong, the consequences can be far graver than a frustrating conversation. Air Canada recently learned this the hard way, facing a PR nightmare after its chatbot quoted incorrect prices that the airline was then held accountable for. That raises a crucial question: are chatbots worth the risk, especially without Diversity, Equity, and Inclusion (DEI) expertise at the helm?

While chatbots hold immense potential, poorly configured algorithms can perpetuate harmful biases, alienate customers, and ultimately cost your company dearly. Here's why:

1. Algorithmic Bias: Chatbots learn from data that often reflects societal biases, which can lead to discriminatory language, unfair treatment of specific demographics, and a breakdown in trust.
2. Lack of Empathy: Despite advancements, chatbots still struggle to understand the nuances of human emotion. Culturally insensitive responses or an inability to adapt to diverse communication styles can leave customers feeling unheard and frustrated, damaging brand loyalty.
3. Accessibility Gaps: Not everyone interacts with technology in the same way. Chatbots that lack accessibility features for individuals with disabilities can create barriers to customer service and violate legal requirements.

#techconsultancy #DEI #AI #genai #biasfree #chatbot #accessibility #economicinclusion
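One lightweight way to start probing point 1 (algorithmic bias) is a counterfactual test: send the chatbot the same request with only a demographic cue changed and compare the replies. The sketch below is illustrative only; `ask_chatbot` is a hypothetical stand-in for whatever interface your chatbot exposes, and a real fairness audit needs far larger, more carefully designed test suites and human review.

```python
# Counterfactual bias probe: vary only the customer's name and compare replies.
TEMPLATE = "My name is {name} and my flight was cancelled. Can I get a refund?"
NAMES = ["Emily Walsh", "Lakisha Washington", "Mohammed Al-Farsi", "Mei Chen"]

def ask_chatbot(message: str) -> str:
    # Hypothetical stand-in: replace this with a call to your chatbot's API.
    return "Thanks for reaching out. We will look into your cancelled flight."

def probe_for_bias() -> None:
    replies = {name: ask_chatbot(TEMPLATE.format(name=name)) for name in NAMES}
    baseline_len = len(replies[NAMES[0]])
    for name, reply in replies.items():
        # Reply length is only a crude first signal; flagged pairs still need
        # human review for differences in tone, options offered, or outcome.
        flag = "REVIEW" if abs(len(reply) - baseline_len) > 0.3 * baseline_len else "ok"
        print(f"{flag:6} {name}: {len(reply)} chars")

if __name__ == "__main__":
    probe_for_bias()
```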
-
GenAI chatbots, despite their advancements, are prone to making mistakes in various ways stemming from their inherent limitations. Chatting with LLMs like ChatGPT offers significant potential for speeding up delivery and enabling easy-to-use experiences, yet many people use these tools without understanding that misinformation and disinformation can arise from flawed training data or inadequate grounding. The LLMs, or foundation models, behind these chat interfaces are extremely useful, but they lack emotional intelligence and morality. Recognizing these limitations is essential for designing effective and responsible AI and GenAI chatbot interactions. Let's explore how these limitations manifest in three key areas:

Misinformation and Disinformation: An LLM chat interface (what many simply call an AI chatbot) can inadvertently propagate misinformation or disinformation because of its reliance on the data it was trained on. If the training data contains biased or incorrect information, the chatbot may unknowingly provide inaccurate responses to users. Additionally, without proper grounding, where prompts are anchored in high-quality data sets, AI chatbots may struggle to discern reliable from unreliable sources, leading to further dissemination of false information. For instance, if a chatbot is asked about a controversial topic and lacks access to accurate data to form its response, it might inadvertently spread misinformation.

Lack of Emotional Intelligence and Morality: AI chatbots lack emotional intelligence and morality, which can result in insensitive or inappropriate responses. Even with extensive training, they may struggle to understand the nuances of human emotions or ethical considerations. In scenarios involving moral dilemmas, AI chatbots may provide responses that overlook ethical considerations, because they cannot perceive right from wrong in a human sense.

Limited Understanding and Creativity: Despite advances in natural language processing, AI chatbots still have a limited understanding of context and may struggle with abstract or complex concepts, especially when pushed to go beyond their training or asked to be creative. This limitation hampers their ability to engage in creative problem-solving or generate innovative responses. Without grounding in diverse and high-quality data sets, they may lack the breadth of knowledge necessary to provide nuanced or contextually relevant answers, and instead offer generic or irrelevant responses in situations that require creativity or critical thinking.

#genai #AI #chatbots

Notice: The views expressed in this post are my own. The views within any of my posts or articles are not those of my employer or the employers of any contributing experts.

Like 👍 this post? Click the bell icon 🔔 for more!
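To make the grounding idea above concrete, here is a minimal sketch of one way to anchor a chatbot's answers in a curated, high-quality data set rather than in its training data alone. The knowledge base and the keyword-overlap retrieval are hypothetical stand-ins (a real system would use vetted documents and proper retrieval), and the sketch assumes the official `openai` Python client (v1+) with an OPENAI_API_KEY configured.

```python
# Grounded chatbot sketch: retrieve curated passages, then instruct the model
# to answer only from them and to defer to a human when the context is silent.
from openai import OpenAI

client = OpenAI()

# Hypothetical curated knowledge base (in practice: vetted help-center docs).
KNOWLEDGE_BASE = [
    "Refunds are issued to the original payment method within 5 business days.",
    "Bereavement fares must be requested before travel, not retroactively.",
    "Live agents are available 24/7 via the 'Talk to a person' option.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    messages = [
        {"role": "system",
         "content": "Answer ONLY from the provided context. If the answer is "
                    "not in the context, say you don't know and offer to "
                    "connect the customer with a human agent."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

print(grounded_answer("Can I get a bereavement refund after my trip?"))
```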
-
"The Federal Trade Commission is keeping a close watch on the marketplace and company conduct as more AI products emerge. We are ultimately invested in understanding and preventing harms as this new technology reaches consumers and applying the law. In doing so, we aim to prevent harms consumers and markets may face as AI becomes more ubiquitous" - says FTC in a new blog post Per the FTC, consumers are voicing concerns about harms related to AI—and their concerns span the technology’s lifecycle, from how it’s built to how its applied in the real world. Concerns are around: How AI is Built - Copyright and IP - key concern here is copyright infringement from the scraping of data from across the web, and concern that content posted to the web may be used to train models that could later supplant their ability to make a living by creating content - Biometric and personal data: use of biometric data, particularly voice recordings, being used to train models or generate “voice prints” (discussed by FTC itself: https://lnkd.in/eZJif-xp) How AI Works and Interacts with Users: - Bias and inaccuracies: concerns are re: biases of facial recognition software, including customers being unable to verify their identity because of a lack of demographic representation in the mode and inaccuracies that can lead to /support scams (FTC has discuss in report to Congress: https://lnkd.in/gNje-XTp) - Limited pathways for appeal and bad customer service: complaints include not being able to reach a human and reports by regular users of products who believe they were mistakenly suspended or banned by an AI without the ability to appeal to a human. How AI is Applied in the Real World: - Scams, fraud, and malicious use: concerns re: phishing emails becoming hard to spot as scammers start to write them with generative AI products and previously tell-tale spelling and grammar mistakes disappear; concerns about how generative AI can be used to conduct sophisticated voice cloning scams, in which family members’ or loved ones’ voices are used for financial extortion; romance scams and financial fraud could be turbo-charged by generative Image by jcomp on Freepik #dataprivacy #dataprotection #AIgovernance #AIprivacy #privacyFOMO https://lnkd.in/d_SUWF3N
-
Empathy only works when it's sincere. And sincerity may just be one of those human traits that can't yet be copied by AI.

I was recently engaged in a customer service chat with a company. As soon as I typed my question, the agent responded with: "I can understand how you are feeling right now. If I were in your position, I would feel just as you do." I didn't believe her. It quickly became apparent that she was just copying and pasting a script that someone (or something) else wrote for her.

This is what happens when we "commit" to being empathetic but don't really understand what it means. Empathy, according to Oxford Languages, is "the ability to understand and share the feelings of another." But we can't just say it; we have to actually mean it. The agent's response telling me that she understood how I was feeling, and that if she were in the same position she would feel the same way, sounds exactly like a computer interpreting "the ability to understand and share the feelings of another."

I'm a big fan of AI, and platforms like ChatGPT are just getting started in terms of their potential. But as CX Network clearly states: "Empathy is not about humans versus AI; it's about using the best of what both have to offer. The future of AI-based decisioning is a combination of AI insights with human supplied ethical considerations."

Remember: AI isn't a replacement for humans; it's merely a supplement.

#AI #customerexperience #theexperiencemaker (Image created with AI from Adobe)