Getting the right feedback will transform your job as a PM: more scalability, better user engagement, and growth. But most PMs don’t know how to do it right.

Here’s the Feedback Engine I’ve used to ship highly engaging products at unicorns and large organizations:

The right feedback can transform your product and company. At Apollo, we launched a contact enrichment feature. Feedback showed users loved its accuracy, but they needed bulk processing. We shipped it and saw a 40% increase in user engagement. Here’s how to get it right:

𝗦𝘁𝗮𝗴𝗲 𝟭: 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸

Most PMs get this wrong. They collect feedback randomly, with no system or strategy. But remember: your output is only as good as your input, and messy input will only lead you astray. Here’s how to collect feedback strategically:

→ Diversify your sources: customer interviews, support tickets, sales calls, social media, community forums, etc.
→ Be systematic: track feedback across channels consistently.
→ Close the loop: confirm your understanding with users to avoid misinterpretation.

𝗦𝘁𝗮𝗴𝗲 𝟮: 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀

Analyzing feedback is like building the foundation of a skyscraper: if it’s shaky, your decisions will crumble. So don’t rush through it. Dive deep to identify patterns that will guide your actions in the right direction. Here’s how:

Aggregate feedback → pull data from all sources into one place.
Spot themes → look for recurring pain points, feature requests, or frustrations.
Quantify impact → how often does an issue occur, and for how many users?
Map risks → classify issues by severity and potential business impact.

𝗦𝘁𝗮𝗴𝗲 𝟯: 𝗔𝗰𝘁 𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀

Now comes the exciting part: turning insights into action. Execution here can make or break everything. Do it right, and you’ll ship features users love. Mess it up, and you’ll waste time, effort, and resources. Here’s how to execute effectively:

Prioritize ruthlessly → focus on high-impact, low-effort changes first.
Assign ownership → make sure every action has a responsible owner.
Set validation loops → build mechanisms to test and validate changes.
Stay agile → be ready to pivot if feedback reveals new priorities.

𝗦𝘁𝗮𝗴𝗲 𝟰: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗜𝗺𝗽𝗮𝗰𝘁

What can’t be measured can’t be improved. If your metrics don’t move, something went wrong: either the feedback was flawed, or your solution didn’t land. Here’s how to measure:

→ Set KPIs for success, like user engagement, adoption rates, or risk reduction.
→ Track metrics post-launch to catch issues early.
→ Iterate quickly and keep improving based on new feedback.

In a nutshell, the Feedback Engine creates a cycle that drives growth and reduces risk:

→ Collect feedback strategically.
→ Analyze it deeply for actionable insights.
→ Act on it with precision.
→ Measure its impact and iterate.

A minimal code sketch of the analyze-and-prioritize steps follows this post.

P.S. How do you collect and implement feedback?
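To make Stages 2 and 3 concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `FeedbackItem` shape, the theme names, and the effort scores are invented, and a real pipeline would pull from your actual support, sales, and interview tooling.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    source: str    # hypothetical channel tag, e.g. "interview", "support_ticket"
    theme: str     # the recurring pain point this item maps to
    severity: int  # 1 (minor annoyance) .. 5 (blocker)

def spot_themes(items: list[FeedbackItem]) -> Counter:
    """Stage 2: aggregate items from all sources and quantify theme frequency."""
    return Counter(item.theme for item in items)

def prioritize(themes: Counter, effort: dict[str, int]) -> list[str]:
    """Stage 3: rank by impact (frequency) over estimated effort,
    so high-impact, low-effort changes come first."""
    return sorted(themes, key=lambda t: themes[t] / effort.get(t, 1), reverse=True)

feedback = [
    FeedbackItem("support_ticket", "bulk_processing", 4),
    FeedbackItem("interview", "bulk_processing", 3),
    FeedbackItem("sales_call", "accuracy", 2),
]
themes = spot_themes(feedback)
print(themes)  # Counter({'bulk_processing': 2, 'accuracy': 1})
print(prioritize(themes, effort={"bulk_processing": 1, "accuracy": 3}))
# ['bulk_processing', 'accuracy'] -> ship bulk processing first
```

The design choice worth copying is the ranking key: impact (how often a theme recurs) divided by effort, so a request like bulk processing that shows up everywhere and is cheap to build floats to the top of the backlog.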
Collecting Qualitative Feedback From Users
Explore top LinkedIn content from expert professionals.
Summary
Collecting qualitative feedback from users involves gathering in-depth, non-numerical insights to better understand user behaviors, emotions, and needs. This process helps product and UX teams make more informed decisions by uncovering the "why" behind user actions.
- Diversify feedback channels: Use multiple sources like user interviews, surveys, social media, and support tickets to capture a holistic view of user experiences.
- Analyze for patterns: Consolidate data to identify recurring themes, emotional signals, and common challenges that can inform product improvements.
- Be strategic with follow-up: Close the loop with users to confirm findings, test solutions, and ensure the changes address their actual needs.
-
If you're a UX researcher working with open-ended surveys, interviews, or usability session notes, you probably know the challenge: qualitative data is rich but messy. Traditional coding is time-consuming, sentiment tools feel shallow, and it's easy to miss the deeper patterns hiding in user feedback. These days, we're seeing new ways to scale thematic analysis without losing nuance. These aren’t just tweaks to old methods; they offer genuinely better ways to understand what users are saying and feeling.

Emotion-based sentiment analysis moves past generic “positive” or “negative” tags. It surfaces real emotional signals (like frustration, confusion, delight, or relief) that help explain user behaviors such as feature abandonment or repeated errors.

Theme co-occurrence heatmaps go beyond listing top issues and show how problems cluster together, helping you trace root causes and map out entire UX pain chains.

Topic modeling, especially using LDA, automatically identifies recurring themes without needing predefined categories, which makes it perfect for processing hundreds of open-ended survey responses fast (see the sketch after this post).

And MDS (multidimensional scaling) lets you visualize how similar or different users are in how they think or speak, making it easy to spot shared mindsets, outliers, or cohort patterns.

These methods are game-changers. They don’t replace deep research; they make it faster, clearer, and more actionable. I’ve been building these into my own workflow using R, and they’ve made a big difference in how I approach qualitative data. If you're working in UX research or service design and want to level up your analysis, these are worth trying.
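The author's workflow is in R; here is the LDA topic-modeling step sketched in Python with scikit-learn instead. The `responses` list is made-up sample data, and `n_components=2` is a guess you would tune against real survey volume.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented open-ended survey responses; in practice, load hundreds of rows
# from your survey export.
responses = [
    "The export button is confusing and I keep losing my work",
    "Exporting to CSV fails on large lists, very frustrating",
    "Love the enrichment accuracy, saves me hours every week",
    "Accurate data, but bulk processing would save even more time",
]

# Turn free text into a document-term matrix, dropping common stopwords.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(responses)

# Fit LDA with a guessed number of themes; no predefined categories needed.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words per discovered theme for manual labeling.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Theme {i}: {', '.join(top)}")
```

Co-occurrence heatmaps and MDS follow the same pattern: build the document-term (or user-similarity) matrix once, then feed it to the visualization of your choice.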
-
Too many product teams believe meaningful user research has to involve long interviews, Zoom calls, and endless scheduling and note-taking. But honestly? You can get most of what you need without all that hassle. 🙅‍♂️

I’ve conducted hundreds of live user research conversations in early-stage startups to inform product decisions, and over the years my thinking has evolved on the role of synchronous time. While there’s a place for real-time convos, I’ve found async tools like Loom often uncover sharper insights, faster, when used intentionally. 🚀

Let’s break down the ROI of shifting to async. If you want to interview 5 people for 30 minutes each, that’s 150 minutes of calls. But because two people are on each call (you and the participant), you’re really spending 300 minutes of combined time. Now, say you record a 3-minute Loom with a few focused questions, send it to those same 5 people, and they each take 5 minutes to write their feedback. That’s 8 minutes per person and just 5 minutes once for you: 45 total minutes versus 300, an order-of-magnitude reduction in time to get hyper-focused feedback (the quick calculation after this post spells it out). 🕒🔍

Just record a quick Loom, pair it with 1-3 specific questions designed to mitigate key risks, and send it to the right people. This async, scrappy approach gathers real feedback throughout the entire product lifecycle (problem validation, solution exploration, or post-launch feedback) without wasting your users' time or yours.

Quick example: imagine your team is torn between an opinionated implementation of a feature and a flexible, customizable one. If you walk through both in a quick Loom and ask five target users which they prefer and why, you’ll get a solid read on your overall user base’s mental model. No need for endless scheduling or drawn-out Zoom calls: just actionable feedback in minutes. 🎯

As an added benefit, this approach also lets you go back to users for more frequent feedback, because you're asking for less of their time with each interaction. 🍪

Note that if you haven’t yet established rapport with the users you’re sending the Looms to, it’s a good idea to introduce yourself at the start in a friendly, personal way. And always express genuine appreciation and gratitude in the video; it goes a long way in building a connection and getting thoughtful responses. 🙏

Now, don’t get me wrong: there’s still a place for synchronous research, especially in early discovery calls when it’s unclear exactly which problem or solution to focus on. Those calls are critical for diving deeper. But once you have a clear hypothesis and need targeted feedback, async tools can drastically reduce the time burden while keeping the signal strong. 💡

Whether it’s problem validation, solution validation, or post-launch feedback, async research tools can get you actionable insights at every stage for a fraction of the time investment.
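For the skeptics, here is the post's arithmetic as a short calculation. The 5-minute recording cost and 3-minute watch time are the post's own assumptions, not measured values.

```python
participants = 5

# Synchronous: a 30-minute call occupies two people (you + the participant).
sync_minutes = participants * 30 * 2        # 300 combined minutes

# Async: ~5 minutes of your time to record one 3-minute Loom; each participant
# spends 3 minutes watching plus 5 minutes writing feedback.
async_minutes = 5 + participants * (3 + 5)  # 45 combined minutes

print(sync_minutes, async_minutes, round(sync_minutes / async_minutes, 1))
# 300 45 6.7
```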
-
Your UX research is lying to you. And no, I'm not talking about small data inconsistencies. I've seen founders blow $100K+ on product features their users "desperately wanted" only to face 0% adoption. Most research methods are fundamentally flawed because humans are terrible at predicting their own behavior.

Here's the TRUTH framework I've used to get accurate user insights:

T - Test with money, not words
• Never ask "would you use this?"
• Instead: "Here's a pre-order link for $50"
• Watch what they do, not what they say

R - Real environment observations
• Stop doing sterile lab tests
• Start shadowing users in their natural habitat
• Record their frustrations, not their feedback

U - Unscripted conversations
• Ditch your rigid question list
• Let users go off on tangents
• Their random rants reveal gold

T - Track behavior logs (a minimal instrumentation sketch follows this post)
• Implement analytics BEFORE research
• Compare what users say vs. what they do
• Look for patterns, not preferences

H - Hidden pain mining
• Users can't tell you their problems
• But they'll show you through workarounds
• Document their "hacks" - that's where innovation lives

STOP:
• Running bias-filled focus groups
• Asking leading questions
• Taking feedback at face value
• Rushing to build based on opinions

START:
• Following the TRUTH framework
• Measuring actions over words
• Building only what users prove they need

PS: Remember, Henry Ford said if he asked people what they wanted, they would have said "faster horses." Don't ask what they want. Watch what they do.

Follow me, John Balboa. I swear I'm friendly and I won't detach your components.
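A minimal sketch of the "Track behavior logs" step: instrument events first, then compare what users said against what they actually did. The event names, user IDs, and survey data here are all hypothetical.

```python
from collections import Counter
from datetime import datetime, timezone

event_log: list[dict] = []

def track(user_id: str, event: str) -> None:
    """Record a behavioral event; in production this feeds your analytics pipeline."""
    event_log.append({
        "user": user_id,
        "event": event,
        "ts": datetime.now(timezone.utc).isoformat(),
    })

# What users *said* in a survey: all three "desperately want" the bulk editor.
stated_interest = {"u1": True, "u2": True, "u3": True}

# What the behavior log shows they actually did.
track("u1", "bulk_editor_opened")
track("u1", "csv_export")
track("u2", "csv_export")
track("u3", "csv_export")

usage = Counter(e["event"] for e in event_log)
adopters = {e["user"] for e in event_log if e["event"] == "bulk_editor_opened"}
print(f"Said they wanted it: {len(stated_interest)}, actually used it: {len(adopters)}")
print(usage)  # csv_export dominates: the workaround is the real signal
```

The point is the comparison, not the plumbing: logging has to exist before the research so the say-versus-do gap is measurable rather than anecdotal.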