Over the last year, I've seen many people fall into the same trap: they launch an AI-powered agent (chatbot, assistant, support tool, etc.) but only track surface-level KPIs, like response time or number of users. That's not enough.

To create AI systems that actually deliver value, we need holistic, human-centric metrics that reflect:
• User trust
• Task success
• Business impact
• Experience quality

Here are 15 essential dimensions to consider:
↳ Response Accuracy: Are your AI answers actually useful and correct?
↳ Task Completion Rate: Can the agent complete full workflows, not just answer trivia?
↳ Latency: Response speed still matters, especially in production.
↳ User Engagement: How often are users returning or interacting meaningfully?
↳ Success Rate: Did the user achieve their goal? This is your north star.
↳ Error Rate: Irrelevant or wrong responses create friction.
↳ Session Duration: Longer isn't always better; it depends on the goal.
↳ User Retention: Are users coming back after the first experience?
↳ Cost per Interaction: Especially critical at scale. Budget-wise agents win.
↳ Conversation Depth: Can the agent handle follow-ups and multi-turn dialogue?
↳ User Satisfaction Score: Feedback from actual users is gold.
↳ Contextual Understanding: Can your AI remember and refer to earlier inputs?
↳ Scalability: Can it handle volume without degrading performance?
↳ Knowledge Retrieval Efficiency: This is key for RAG-based agents.
↳ Adaptability Score: Is your AI learning and improving over time?

If you're building or managing AI agents, bookmark this. Whether it's a support bot, GenAI assistant, or a multi-agent system, these are the metrics that will shape real-world success.

Did I miss any critical ones you use in your projects? Let's make this list even stronger - drop your thoughts 👇
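To make a few of these concrete, here is a minimal Python sketch of how some of the listed metrics could be computed from an interaction log. The field names (goal_achieved, latency_ms, cost_usd, error) are hypothetical and purely illustrative, not taken from any specific framework or the post above.

```python
# Hypothetical sketch: computing a few of the listed metrics from an
# interaction log. Field names are illustrative, not from any framework.
from statistics import mean

interactions = [
    {"goal_achieved": True,  "latency_ms": 820,  "cost_usd": 0.004, "error": False},
    {"goal_achieved": False, "latency_ms": 1310, "cost_usd": 0.006, "error": True},
    {"goal_achieved": True,  "latency_ms": 640,  "cost_usd": 0.003, "error": False},
]

n = len(interactions)
success_rate = sum(i["goal_achieved"] for i in interactions) / n      # Success Rate
error_rate = sum(i["error"] for i in interactions) / n                # Error Rate
avg_latency_ms = mean(i["latency_ms"] for i in interactions)          # Latency
cost_per_interaction = mean(i["cost_usd"] for i in interactions)      # Cost per Interaction

print(f"success={success_rate:.0%} error={error_rate:.0%} "
      f"latency={avg_latency_ms:.0f}ms cost=${cost_per_interaction:.4f}")
```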
Adapting to Customer Feedback
Explore top LinkedIn content from expert professionals.
-
Your feedback process should act as a funnel, catching data from all the various sources and bringing it into a centralized location. As you get feedback from various sources, it's helpful to be consistent in what you collect. Capturing data in a handful of key areas is particularly useful, including:

> Touchpoint. What was the touchpoint, or where was the customer in their journey? For example, this could be after a repair, or an interaction with customer service.
> Objective. What was the customer's objective? For example, they wanted to get their cable working again.
> Experience. What was the actual experience? The cable got repaired, but it happened outside the promised window of time.
> Emotional impact. What was the emotional impact of this experience? The range you establish could run from very satisfied to very unsatisfied, on a scale. I've seen alternatives such as very happy to very frustrated. What words best capture emotion in your setting?

These factors give you a solid foundation for comparing both structured and unstructured feedback.

UL, a global company that provides product testing and certification, made a push to more completely capture the on-the-fly feedback their employees were hearing. They created a simple feedback form inside their CRM system. The link can be accessed quickly by any employee, anytime. For example, they can easily pull up the form from their phone and enter the customer's feedback. Nate Brown, who spearheaded the effort, said at the time, "This is a complete game-changer in how UL understands customers."

Find more examples here: https://lnkd.in/e-t5Zs2b

#customerfeedback #customerexperience #customerservice
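As an illustration of keeping that capture consistent, here is a minimal sketch of a feedback record with the four areas above as fields. The field names and the 1-to-5 emotion scale are assumptions for illustration only, not UL's actual form or CRM schema.

```python
# Hypothetical sketch of a consistent feedback record covering the four
# areas described above. Field names and the emotion scale are illustrative.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    touchpoint: str        # where the customer was in their journey
    objective: str         # what the customer was trying to accomplish
    experience: str        # what actually happened
    emotional_impact: int  # e.g. 1 = very unsatisfied ... 5 = very satisfied
    source: str            # survey, support ticket, employee-entered form, etc.

record = FeedbackRecord(
    touchpoint="post-repair follow-up",
    objective="get cable service working again",
    experience="repaired, but outside the promised window",
    emotional_impact=2,
    source="employee CRM form",
)
```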
-
I'd like to discuss using customer feedback for more focused product iteration. One of the most direct ways to understand customers' needs and desires is through feedback. Leveraging tools like surveys, user testing, and even social media can offer invaluable insights. But don't underestimate the power of simple direct communication, be it through emails, chats, or interviews.

However, while gathering feedback is essential, ensuring its quality is even more crucial. Start by setting clear feedback objectives and favor open-ended questions that allow for comprehensive answers. It's also pivotal to ensure diversity in your feedback sources to avoid inherent biases.

But here's a caveat: not all feedback will be representative of your wider customer base. That's why it's essential to segment the feedback, identify common themes, and use statistical methods to validate its wider applicability.

Once you've sorted and prioritised the feedback, the next step is actioning it. This involves cross-functional collaboration, translating feedback into product requirements, and setting milestones for implementation.

Lastly, once changes are implemented, the cycle doesn't end. Use methods like A/B testing to gauge the direct impact of the changes. And always, always return to your customers for follow-up feedback to ensure you're on the right track.

In the bustling world of tech, startups that listen, iterate, and refine based on customer feedback truly thrive.

#startups #entrepreneurship #customer #pmf #product
-
Generative AI surveys: where your feedback is interactive, valued, and promptly discarded. But hey, at least it's efficient! Sorry, I know it's a bit early to be snarky.

Seriously though, closing the loop with your customers on their feedback - solicited or unsolicited - is a game changer.

Start by integrating customer signals/data into a real-time analytics platform that not only surfaces key themes but also flags specific issues requiring follow-up. This is no longer advanced tech. From there, create a workflow that assigns ownership for addressing the feedback, tracks resolution progress, and measures outcomes over time. With most tech offering APIs for your CRM, that's also not a huge lift to set up.

By linking feedback directly to improvement efforts, which still requires a human in the loop, and closing the loop by notifying customers when changes are made, you transform a simple data collection tool into a continuous improvement engine. Most companies are not taking these critical few steps, though.

Does it take time, effort, and money? Yes it does. Can it help you drive down costs and drive up revenue? Also a hard yes. The beauty of actually closing the loop is that the outcomes can be quantified.

How have you seen closing the loop - outer, inner, or both - impact your business?

#cx #surveys #ceo
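For what that workflow might look like in data terms, here is a hypothetical sketch of a loop-closing work item with an owner, a status, and a resolution history. All names, statuses, and fields are made up for illustration; nothing here reflects a specific CRM's API.

```python
# Hypothetical sketch of a loop-closing workflow item. Statuses, fields,
# and the history log are illustrative; no specific CRM API is implied.
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timezone

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    RESOLVED = "resolved"
    CUSTOMER_NOTIFIED = "customer_notified"

@dataclass
class FeedbackWorkItem:
    customer_id: str
    theme: str                 # key theme surfaced by the analytics platform
    owner: str                 # who is accountable for the follow-up
    status: Status = Status.OPEN
    history: list = field(default_factory=list)

    def advance(self, new_status: Status, note: str = "") -> None:
        # Track resolution progress over time so outcomes can be measured
        self.history.append((datetime.now(timezone.utc), new_status, note))
        self.status = new_status

item = FeedbackWorkItem(customer_id="C-1042", theme="billing confusion", owner="support-lead")
item.advance(Status.IN_PROGRESS, "root cause identified")
item.advance(Status.RESOLVED, "invoice layout updated")
item.advance(Status.CUSTOMER_NOTIFIED, "emailed the customer about the change")
```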
-
I've got a real-world example to share. I noticed that our CSMs and Insight Managers spent the majority of some client calls explaining the same confusing product feature over and over. These were hard-won meetings with key stakeholders, but we wasted precious time on repetitive, low-value conversations. I decided to share recordings of these calls with our executive team, and the response was eye-opening. DISQO rallied to invest in better training, tools, and product enhancements, making that feature more intuitive for our customers. Without AI surfacing these insights from hundreds of hours of calls, we might never have connected the dots. It wasn't a skill issue on the CSM side but a systemic opportunity. That's why we use AI to listen to every single customer conversation. That's how AI elevates customer experiences. #AI #CustomerExperience #CX #Listening #Learning #Value #Opportunity
-
Some of the best decisions I've made as a CEO didn't come from board meetings, strategy offsites, or reviewing KPI dashboards. They came from watching how customers work, direct conversations, and raw feedback.

Watching customers work: One of the most pivotal moments in my career happened when I spent an afternoon shadowing an IT director. I watched as he juggled a ridiculous number of Chrome tabs, constantly copying, pasting, and refreshing just to offboard a single employee from their SaaS applications. That experience sparked the biggest pivot in our company's history.

Direct conversations: Another major turning point came when I set a goal to meet with 100 customers in 100 days. We were growing, but we had quality issues, and our customers were frustrated. Those conversations gave me unfiltered insights into what was working, what wasn't, and what needed to be fixed.

Raw feedback: I made it a habit to personally read every piece of customer feedback that came in from support tickets. This wasn't about tracking an NPS score. I wanted to see firsthand what customers were saying. I'd regularly follow up with those who left detailed responses, and those conversations often shaped our product strategy in ways data alone never could.

Through these experiences and conversations with more than 40 CEOs on Not Another CEO Podcast, I have seen what the best leaders do to stay connected to their customers. Here are three key takeaways.

Your personal attention matters. When a CEO speaks directly with customers, it signals that their voice truly matters. No one else at your company carries the same weight in these conversations. Customers will share things with you that they wouldn't say to an account manager or support rep. These conversations build deep trust and create lifelong evangelists who will champion your company wherever they go.

1:1 conversations beat dashboards. Data and NPS scores are useful, but nothing replaces direct customer conversations. Customers will tell you things in a one-on-one setting that they wouldn't put in a survey response.

Build systems to stay engaged. Staying in front of customers is hard when the process falls entirely on you. The best CEOs work with their teams to ensure they are consistently engaging with customers. Your sales and success teams, EA, and other leaders can help facilitate staying in front of customers through recurring dinners, meetings at events, or dedicated check-ins. Creating a culture where your team actively brings you into customer conversations ensures these interactions happen regularly and at scale.

If you want more detail behind each of these, check out the deep dive on the Not Another CEO Substack here: https://lnkd.in/eJNC4tRB
-
User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise: a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it's a structured method for learning what users think, feel, and need at scale, and a design artifact in its own right.

Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you're measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

When we ask users, "How satisfied are you with this feature?" we might assume we're getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions, if properly timed and personalized.

Sampling and segmentation are not just statistical details; they're strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don't distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding. One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time.

The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

Done well, UX surveys are not a support function; they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that's scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
-
1. B2B deal sizes are north of $10,000 to $100,000+
2. We agree that building the top of the funnel is expensive
3. We agree that 90% of the buying process happens when sellers are NOT in the room (shoutout Nate Nasralla)

But we rely on CRM inputs from sellers on why they lost deals 🤔

In any industry, your buyers and customers are the foundation of your success, but B2B is awful at getting feedback.

B2C:
🥗 Uber Eats asks us to review our website experience after ordering
🛋 Wayfair asks us to review their furniture delivery
📞 Facebook asked me to review my phone call quality in Messenger
📜 Quora asked me if the content they showed me was relevant
🖇 LinkedIn asked me if the post I was seeing is valuable

B2B:
- Buying process: nothing
- Post-sales: product usage + NPS survey

So how do we fix this?

1/ Incentives --> First, response rates. People have "survey fatigue" but they don't have "charity fatigue". I love animals, especially dogs (I have a black English lab named Nash), so I've partnered with a local animal shelter to give the option of a gift card or a donation to the rescue (win/win) for filling out a survey (drastically increased responses and helps the pups).

2/ Structure --> Strategic questions for each scenario and A LOT of answers to the SAME questions.
1️⃣ Lost deals: Send structured and automated surveys (w/ incentives) asking why they didn't move forward.
2️⃣ Won deals: Send a short post-purchase survey asking them to rate their experience and why they chose you (make it part of your process).
3️⃣ Won deals (6 months in): Send a survey specifically measuring product-market fit (w/ incentive).
4️⃣ Centralize feedback: Put all this feedback in one place, identify sales, marketing, and product gaps each quarter, and adjust accordingly.

3/ Consistency --> Consistent feedback gives you more data to identify team trends, individual trends, department trends, etc. (sales reps, competitors, product gaps).

--------

In a world flooded with AI, automation, and shortcuts, the companies that stay closest to their buyers/customers, have the shortest feedback loops, and take *action* to improve have a better shot at winning.

P.S. By far the biggest surprise in pilots has been the deals that can be won back, which we call Rebound Deals (most buyers are no decision and have a specific ask and timing).
-
When I was interviewing users during a study on a new product design focused on comfort, I started to notice some variation in the feedback. Some users seemed quite satisfied, describing it as comfortable and easy to use. Others were more reserved, mentioning small discomforts or saying it didn't quite feel right. Nothing extreme, but clearly not a uniform experience either.

Curious to see how this played out in the larger dataset, I checked the comfort ratings. At first, the average looked perfectly middle-of-the-road. If I had stopped there, I might have just concluded the product was fine for most people. But when I plotted the distribution, the pattern became clearer. Instead of a single, neat peak around the average, the scores were split. There were clusters at both the high and low ends. A good number of people liked it, and another group didn't, but the average made it all look neutral.

That distribution plot gave me a much clearer picture of what was happening. It wasn't that people felt lukewarm about the design. It was that we had two sets of reactions balancing each other out statistically. And that distinction mattered a lot when it came to next steps. We realized we needed to understand who those two groups were, what expectations or preferences might be influencing their experience, and how we could make the product more inclusive of both.

To dig deeper, I ended up using a mixture model to formally identify the subgroups in the data. It confirmed what we were seeing visually: the responses were likely coming from two different user populations. This kind of modeling is incredibly useful in UX, especially when your data suggests multiple experiences hidden within a single metric. It also matters because the statistical tests you choose depend heavily on your assumptions about the data. If you assume one unified population when there are actually two, your test results can be misleading, and you might miss important differences altogether.

This is why checking the distribution is one of the most practical things you can do in UX research. Averages are helpful, but they can also hide important variability. When you visualize the data using a histogram or density plot, you start to see whether people are generally aligned in their experience or whether different patterns are emerging. You might find a long tail, a skew, or multiple peaks, all of which tell you something about how users are interacting with what you've designed.

Most software can give you a basic histogram. If you're using R or Python, you can generate one with just a line or two of code. The point is, before you report the average or jump into comparisons, take a moment to see the shape of your data. It helps you tell a more honest, more detailed story about what users are experiencing and why. And if the shape points to something more complex, like distinct user subgroups, methods like mixture modeling can give you a much more accurate and actionable analysis.
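To show what that looks like in practice, here is an illustrative Python sketch: synthetic comfort ratings whose average looks neutral, a quick histogram to reveal the two clusters, and a two-component Gaussian mixture to formalize the subgroups. The data, library choices (matplotlib, scikit-learn), and parameters are my assumptions, not the author's actual study or stack.

```python
# Illustrative sketch only: synthetic comfort ratings, a look at the
# distribution, and a two-component Gaussian mixture to check for subgroups.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two hidden populations that average out to a "neutral" score
ratings = np.concatenate([
    rng.normal(loc=2.0, scale=0.5, size=80),   # less comfortable group
    rng.normal(loc=4.2, scale=0.4, size=90),   # more comfortable group
]).clip(1, 5)

print("mean rating:", ratings.mean().round(2))  # looks middle-of-the-road

# Check the shape of the data before trusting the average
plt.hist(ratings, bins=20)
plt.xlabel("Comfort rating")
plt.ylabel("Count")
plt.show()

# Fit a two-component mixture to formalize the visual impression
gmm = GaussianMixture(n_components=2, random_state=0).fit(ratings.reshape(-1, 1))
print("component means:", gmm.means_.ravel().round(2))
print("component weights:", gmm.weights_.round(2))
```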
-
My role's core focus is to bridge the gap between data and meaningful insights and understand what decision-makers need and how end users prefer to see and digest information. So, I've been experimenting with different ways to visualize sentiment data in Astrato:

Diverging Stacked Bar Chart: Displays customer sentiments from negative to positive. It's an effective tool for quickly seeing how customers rate our products. It clearly shows the balance of opinions, helping to identify which products customers favor or do not favor.

Separated Bar Charts: Presents each sentiment category (strongly dislike to strongly like) as separate columns for each product. It enables us to compare the sentiment levels across our range of products, making it clear which aspects receive more positive or negative feedback.

The choice of visualization depends on business goals and how the audience can best understand it. Using the right visualizations for your data lets the end user see a clear picture and make informed business decisions. Which do you prefer?

#DataVisualization #DataAnalysis #BusinessIntelligence
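If you want to prototype the diverging layout outside a BI tool, here is a rough matplotlib sketch of the idea with made-up sentiment counts. The products and numbers are invented for illustration; the post above uses Astrato, so this only demonstrates the chart type, not that tool.

```python
# Rough sketch of a diverging stacked bar chart with made-up sentiment counts.
# Negative categories stack to the left of zero, positive ones to the right.
import matplotlib.pyplot as plt
import numpy as np

products = ["Product A", "Product B", "Product C"]
data = {
    "Strongly dislike": np.array([10, 25, 5]),
    "Dislike":          np.array([20, 30, 10]),
    "Like":             np.array([40, 25, 45]),
    "Strongly like":    np.array([30, 20, 40]),
}

fig, ax = plt.subplots()
y = np.arange(len(products))

# Stack the negative categories leftward from zero
neg_offset = np.zeros(len(products))
for label in ["Dislike", "Strongly dislike"]:
    neg_offset -= data[label]
    ax.barh(y, data[label], left=neg_offset, label=label)

# Stack the positive categories rightward from zero
pos_offset = np.zeros(len(products))
for label in ["Like", "Strongly like"]:
    ax.barh(y, data[label], left=pos_offset, label=label)
    pos_offset += data[label]

ax.axvline(0, color="black", linewidth=0.8)
ax.set_yticks(y)
ax.set_yticklabels(products)
ax.set_xlabel("Responses (negative to the left, positive to the right)")
ax.legend()
plt.show()
```

The shared zero baseline is what makes the balance of opinion easy to scan at a glance, which is the same reason the diverging form works well in a BI dashboard.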