Incorporating User Feedback From Experience Interviews


Summary

Incorporating user feedback from experience interviews involves gathering and analyzing insights directly from users to understand their experiences, preferences, and pain points. This approach helps create user-centered products by turning raw feedback into actionable improvements.

  • Understand your audience: Organize feedback into themes like usability issues or feature gaps to identify key areas for improvement.
  • Analyze data thoroughly: Go beyond averages by visualizing data distributions to uncover hidden patterns or diverse user groups.
  • Close the feedback loop: Use feedback insights to refine designs, validate changes with users, and track improvements over time through measurable outcomes.
Summarized by AI based on LinkedIn member posts
  • Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science


    When I was interviewing users during a study on a new product design focused on comfort, I started to notice some variation in the feedback. Some users seemed quite satisfied, describing it as comfortable and easy to use. Others were more reserved, mentioning small discomforts or saying it didn’t quite feel right. Nothing extreme, but clearly not a uniform experience either.

    Curious to see how this played out in the larger dataset, I checked the comfort ratings. At first, the average looked perfectly middle-of-the-road. If I had stopped there, I might have just concluded the product was fine for most people. But when I plotted the distribution, the pattern became clearer. Instead of a single, neat peak around the average, the scores were split. There were clusters at both the high and low ends. A good number of people liked it, and another group didn’t, but the average made it all look neutral.

    That distribution plot gave me a much clearer picture of what was happening. It wasn’t that people felt lukewarm about the design. It was that we had two sets of reactions balancing each other out statistically. And that distinction mattered a lot when it came to next steps. We realized we needed to understand who those two groups were, what expectations or preferences might be influencing their experience, and how we could make the product more inclusive of both.

    To dig deeper, I ended up using a mixture model to formally identify the subgroups in the data. It confirmed what we were seeing visually: the responses were likely coming from two different user populations. This kind of modeling is incredibly useful in UX, especially when your data suggests multiple experiences hidden within a single metric. It also matters because the statistical tests you choose depend heavily on your assumptions about the data. If you assume one unified population when there are actually two, your test results can be misleading, and you might miss important differences altogether.
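The mixture-model step described here can be sketched in a few lines of Python. This is a minimal illustration with simulated ratings, not the study's actual data; it uses scikit-learn's `GaussianMixture` and compares one- and two-component fits by BIC to check whether two populations explain the scores better than one.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated comfort ratings (1-7 scale): two latent groups whose
# combined average looks neutral. Illustrative data, not the study's.
rng = np.random.default_rng(42)
ratings = np.concatenate([
    rng.normal(2.0, 0.5, 60),  # users who found the design uncomfortable
    rng.normal(6.0, 0.5, 60),  # users who found it comfortable
]).reshape(-1, 1)

# Compare one- vs two-component fits by BIC; a lower BIC for k=2
# supports the "two user populations" reading of the data.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(ratings).bic(ratings)
        for k in (1, 2)}

# Recover the subgroup means and each response's most likely group.
gm = GaussianMixture(n_components=2, random_state=0).fit(ratings)
labels = gm.predict(ratings)
print(sorted(gm.means_.ravel()))  # two means, near the low and high clusters
```

A lower BIC for the two-component model is the formal counterpart of the split the histogram shows visually.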
This is why checking the distribution is one of the most practical things you can do in UX research. Averages are helpful, but they can also hide important variability. When you visualize the data using a histogram or density plot, you start to see whether people are generally aligned in their experience or whether different patterns are emerging. You might find a long tail, a skew, or multiple peaks, all of which tell you something about how users are interacting with what you’ve designed. Most software can give you a basic histogram. If you’re using R or Python, you can generate one with just a line or two of code. The point is, before you report the average or jump into comparisons, take a moment to see the shape of your data. It helps you tell a more honest, more detailed story about what users are experiencing and why. And if the shape points to something more complex, like distinct user subgroups, methods like mixture modeling can give you a much more accurate and actionable analysis.
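As the paragraph above says, a histogram really does take only a line or two. The sketch below uses simulated 1-7 ratings (again, not real study data) and `numpy.histogram` to print a text histogram; `plt.hist(ratings)` in matplotlib would draw the same shape.

```python
import numpy as np

# Simulated 1-7 comfort ratings whose average looks middle-of-the-road.
rng = np.random.default_rng(7)
ratings = np.concatenate([rng.normal(2, 0.5, 60), rng.normal(6, 0.5, 60)])
print(round(float(ratings.mean()), 1))  # near 4.0 -- the misleading average

# Bin edges at half-integers so each bin is centered on a whole rating.
counts, edges = np.histogram(ratings, bins=np.arange(-0.5, 9.0, 1.0))
for lo, n in zip(edges, counts):
    print(f"{lo + 0.5:>3.0f}: {'#' * n}")  # text histogram: two peaks, empty middle
```

The bar chart that comes out makes the bimodal shape obvious in a way the single average never could.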

  • Thibaut Nyssens 🐣

    Sr. Solutions Engineer @ Atlassian | founding GTM @ Cycle (acq. by Atlassian) | Early-stage GTM Advisor


    I talked with 100+ product leaders over the last months. They all had the same set of problems. Here's the solution (5 steps).

    Every product leader told me at least one of the following:
    - "Our feedback is all over the place"
    - "PMs have no single source of truth for feedback"
    - "We'd like to back our prioritization with customer feedback"

    Here's a step-by-step guide to fix this.

    1/ Where is your most qualitative feedback coming from? What sources do you need to consolidate?
    - Make an exhaustive list of your feedback sources
    - Rank them by quality & importance
    - Find a way to access that data (API, Zapier, Make, scraping, CSV exports, ...)

    2/ Route all that feedback to a "database-like" tool, a table of records. Multiple options here: Airtable, Notion, Google Sheets, and of course Cycle App.
    - Tag feedback with its related properties: source, product area, customer ID or email, etc.
    - Match customer properties to the feedback based on a unique customer ID or email

    3/ Calibrate an AI model. Teach the AI the following:
    - What do you want to extract from your raw feedback?
    - What type of feedback is the AI looking at, and how should it process it? (An NPS survey should be treated differently than a user interview.)
    - Which features can be mapped to the relevant quotes inside the raw feedback?
    Typically, this won't work out of the box. You need to give your model enough human-verified examples (calibrate it) so it can actually become accurate at finding the right features/discoveries to map. This part is tricky, but without it you'll never be able to process large volumes of feedback and unstructured data.

    4/ Plug a BI tool like Google Data Studio onto your feedback database.
    - Start by listing your business questions and build charts answering them
    - Include customer attributes as filters in the dashboard so you can filter on specific customer segments. Not all feedback is equal.
    - Make sure these dashboards are shared with, and accessible to, the entire product team

    5/ Plug your product delivery on top of this. At this point, you have a big database full of customer insights and a customer-voice dashboard. But it's not actionable yet.
    - You want to convert discoveries into actual Jira epics or Linear projects & issues.
    - You need some notion of "status" sync; otherwise your feedback database won't clean itself, and you won't be able to close feedback loops.

    The diagram below gives you a clear overview of how to build your own system. Build or buy? Your choice.
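Steps 2 and 4 of the guide above can be sketched as a tiny in-memory version of the feedback table. The field names and records here are purely illustrative; in practice a tool like Airtable, Notion, or Cycle holds the records and a BI tool does the filtering.

```python
from dataclasses import dataclass, field

# Step 2 in miniature: a "database-like" table of tagged feedback records.
# Field names (source, product_area, customer_id) are illustrative.
@dataclass
class Feedback:
    text: str
    source: str          # e.g. "nps_survey", "user_interview", "support"
    product_area: str
    customer_id: str
    tags: list = field(default_factory=list)

table = [
    Feedback("Checkout is slow", "support", "billing", "c-101"),
    Feedback("Love the new editor", "user_interview", "editor", "c-202"),
    Feedback("Export to CSV missing", "nps_survey", "reports", "c-101"),
]

# Step 4 in miniature: answer a business question by filtering on a
# customer attribute -- here, all feedback from one customer.
def by_customer(records, customer_id):
    return [r for r in records if r.customer_id == customer_id]

print([r.text for r in by_customer(table, "c-101")])
# ['Checkout is slow', 'Export to CSV missing']
```

Because every record carries its source and customer attributes, segment-level filters like this are what make "not all feedback is equal" actionable in a dashboard.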

  • Subash Chandra

    Founder, CEO @Seative Digital ⸺ Research-Driven UI/UX Design Agency ⭐ Maintains a 96% satisfaction rate across 70+ partnerships ⟶ 💸 2.85B revenue impacted ⎯ 👨🏻💻 Designing every detail with the user in mind.


    We don’t guess what users want, we ask. That’s how we build digital products users rely on. Here’s how we make feedback the superpower behind great UX 👇

    Step 1: Listen Deeply. We run:
    ‣ 1:1 user interviews
    ‣ In-app surveys & session recordings
    ‣ Live usability testing

    Step 2: Turn Chaos into Clarity. We map raw feedback into themes:
    ‣ Usability issues (e.g. confusing navigation)
    ‣ Feature gaps (e.g. missing integrations)
    ‣ Friction points (e.g. slow checkout)

    Step 3: Design, Test, Validate. We co-create with your team:
    ‣ Interactive prototypes (Figma)
    ‣ Real user validation before dev
    ‣ Accessibility & performance checks

    Step 4: Ship Fast, Measure Faster. Every improvement is:
    ✔️ A/B tested
    ✔️ Backed by analytics
    ✔️ Tied to measurable ROI

    Who This Helps
    ‣ SaaS & Tech → Reduce churn, improve onboarding
    ‣ Fintech → Simplify UX, boost adoption
    ‣ Healthcare → Design for clarity & trust
    ‣ Enterprise tools → Optimize internal workflows

    What You Get
    ✅ UX audit + feedback dashboard
    ✅ High-fidelity mockups & tested flows
    ✅ Real user insights + recordings
    ✅ Optional: Monthly UX performance reports

    💡 User feedback is the fastest way to build what people love. Let’s make it part of your product growth strategy.
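The theme mapping in Step 2 can be illustrated with a deliberately simple keyword heuristic. The theme names and keyword lists below are made up for the example; real coding of interview feedback is done by researchers, or by a model calibrated on human-verified examples.

```python
# Illustrative keyword-to-theme map; a real study would refine these
# codes iteratively rather than hard-coding them.
THEMES = {
    "usability issue": ["confusing", "can't find", "lost", "navigation"],
    "feature gap": ["missing", "wish it had", "integration", "no way to"],
    "friction point": ["slow", "too many steps", "checkout", "crash"],
}

def tag_themes(comment: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(k in text for k in keywords)]

print(tag_themes("The checkout is slow and the navigation is confusing"))
# ['usability issue', 'friction point']
```

Even a toy tagger like this shows the shape of the workflow: raw quotes go in, theme labels come out, and the themes become the units you prioritize against.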
