User Experience Feedback Collection

Explore top LinkedIn content from expert professionals.

  • View profile for Aakash Gupta

    The AI PM Guy 🚀 | Helping you land your next job + succeed in your career

    289,558 followers

    Getting the right feedback will transform your job as a PM. More scalability, better user engagement, and growth. But most PMs don’t know how to do it right. Here’s the Feedback Engine I’ve used to ship highly engaging products at unicorns and large organizations:

    The right feedback can literally transform your product and company. At Apollo, we launched a contact enrichment feature. Feedback showed users loved its accuracy, but they needed bulk processing. We shipped it and saw a 40% increase in user engagement. Here’s how to get it right:

    𝗦𝘁𝗮𝗴𝗲 𝟭: 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸
    Most PMs get this wrong. They collect feedback randomly, with no system or strategy. But remember: your output is only as good as your input, and messy input will only lead you astray. Here’s how to collect feedback strategically:
    → Diversify your sources: customer interviews, support tickets, sales calls, social media and community forums, etc.
    → Be systematic: track feedback across channels consistently.
    → Close the loop: confirm your understanding with users to avoid misinterpretation.

    𝗦𝘁𝗮𝗴𝗲 𝟮: 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
    Analyzing feedback is like building the foundation of a skyscraper: if it’s shaky, your decisions will crumble. So don’t rush through it. Dive deep to identify patterns that will guide your actions in the right direction. Here’s how (see the sketch after this post):
    → Aggregate feedback: pull data from all sources into one place.
    → Spot themes: look for recurring pain points, feature requests, or frustrations.
    → Quantify impact: how often does an issue occur?
    → Map risks: classify issues by severity and potential business impact.

    𝗦𝘁𝗮𝗴𝗲 𝟯: 𝗔𝗰𝘁 𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀
    Now comes the exciting part: turning insights into action. Execution here can make or break everything. Do it right, and you’ll ship features users love. Mess it up, and you’ll waste time, effort, and resources. Here’s how to execute effectively:
    → Prioritize ruthlessly: focus on high-impact, low-effort changes first.
    → Assign ownership: make sure every action has a responsible owner.
    → Set validation loops: build mechanisms to test and validate changes.
    → Stay agile: be ready to pivot if feedback reveals new priorities.

    𝗦𝘁𝗮𝗴𝗲 𝟰: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗜𝗺𝗽𝗮𝗰𝘁
    What can’t be measured can’t be improved. If your metrics don’t move, something went wrong: either the feedback was flawed, or your solution didn’t land. Here’s how to measure:
    → Set KPIs for success, like user engagement, adoption rates, or risk reduction.
    → Track metrics post-launch to catch issues early.
    → Iterate quickly and keep improving based on feedback.

    In a nutshell, this creates a cycle that drives growth and reduces risk:
    → Collect feedback strategically.
    → Analyze it deeply for actionable insights.
    → Act on it with precision.
    → Measure its impact and iterate.

    P.S. How do you collect and implement feedback?
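
    A minimal sketch of Stage 2 in Python, assuming feedback items have already been pulled from each channel and tagged with a theme and a severity. The field names, weights, and example data are illustrative assumptions, not from the post:

    ```python
    from collections import defaultdict

    # Illustrative feedback items, aggregated from multiple channels and
    # tagged with a theme plus a severity (1 = minor, 3 = critical).
    feedback = [
        {"source": "support_ticket", "theme": "bulk processing", "severity": 3},
        {"source": "sales_call",     "theme": "bulk processing", "severity": 2},
        {"source": "interview",      "theme": "slow exports",    "severity": 2},
        {"source": "community",      "theme": "bulk processing", "severity": 3},
        {"source": "support_ticket", "theme": "slow exports",    "severity": 1},
    ]

    def rank_themes(items):
        """Quantify impact: frequency of each theme, weighted by severity."""
        scores = defaultdict(lambda: {"count": 0, "score": 0})
        for item in items:
            scores[item["theme"]]["count"] += 1
            scores[item["theme"]]["score"] += item["severity"]
        # Highest combined impact first.
        return sorted(scores.items(), key=lambda kv: kv[1]["score"], reverse=True)

    for theme, stats in rank_themes(feedback):
        print(f"{theme}: seen {stats['count']}x, impact score {stats['score']}")
    ```

    On this toy data, "bulk processing" ranks first, mirroring the Apollo anecdote above.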

  • View profile for David Politis

    Building the #1 place for CEOs to grow themselves and their companies | 20+ years as a Founder, Executive and Advisor of high growth companies

    15,260 followers

    One of the best ways to create authentic relationships with your customers, get honest feedback on your product, and surface game-changing ideas is to create a Customer Advisory Board (CAB). Here are the lessons I’ve learned about how to create and run a successful CAB.

    Your personal involvement as CEO is critical. If you lead it yourself, customers will engage at a deeper level. They’ll be more honest, more vulnerable, and more likely to become evangelists for your company. No one else can unlock this dynamic the way a CEO can.

    Be clear on the persona. Is your CAB for buyers, users, or budget holders? At BetterCloud, our sweet spot was Directors of IT. Not the CIO, not the IT admin. Know exactly whose voice you want in the room and tailor everything to them.

    Skip the compensation; give them “status”. Don’t pay CAB members—it gets messy. Instead, make them feel like insiders. Give them a title, early access to roadmaps, VIP treatment at events, and public recognition. People want to feel valued and influential, not bought.

    Set a cadence you can maintain. I tried monthly meetings once. That was a mistake. Quarterly is the sweet spot. One in-person gathering per year—ideally tied to an industry event—goes a long way in deepening relationships.

    Structure matters. CABs aren’t just roundtables. They’re curated experiences. Keep meetings tight (90-120 minutes), show real products that are still in development (even rough wireframes or high-level ideas), and create space for interaction. Done right, they become the ultimate feedback engine.

    Build real relationships. Your CAB shouldn’t just exist in meetings. Build one-on-one connections. Text, email, check in at events. Keep it small enough that people feel seen and valued. When they have a direct line to the CEO, they stay engaged—and they speak the truth.

    Done right, your CAB becomes more than just a feedback mechanism. It becomes a strategic asset. It can shape your roadmap, sharpen your positioning, and strengthen your customer relationships in ways no survey ever could. For a deeper dive and detailed tactics behind each of these, check out the full writeup on the Not Another CEO Substack.

  • View profile for Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science

    10,323 followers

    Drawing on years of experience designing surveys for academic projects and clients, along with teaching research methods and Human-Computer Interaction, I've consolidated these insights into this comprehensive guideline.

    Introducing the Layered Survey Framework, designed to unlock richer, more actionable insights by respecting the nuances of human cognition. This framework (https://lnkd.in/enQCXXnb) re-imagines survey design as a therapeutic session: you don't start with profound truths, but gently guide the respondent through layers of their experience. This isn't just an analogy; it's a functional design model where each phase maps to a known stage of emotional readiness, mirroring how people naturally recall and articulate complex experiences.

    The journey begins by establishing context: grounding users in their specific experience with simple, memory-activating questions. Asking "why were you frustrated?" prematurely, without cognitive preparation, yields only vague or speculative responses. Next, the framework moves to surfacing emotions, gently probing feelings tied to those activated memories and tapping into emotional salience. Following that, it focuses on uncovering mental models, guiding users to interpret "what happened and why" and revealing their underlying assumptions. Only after this structured progression does it proceed to capturing actionable insights, where satisfaction ratings and prioritization tasks, asked at the right cognitive moment, yield data that's far more specific, grounded, and truly valuable (a question-ordering sketch follows this post).

    This holistic approach ensures you ask the right questions at the right cognitive moment, fundamentally transforming your ability to understand customer minds. Remember, even the most advanced analytics tools can't compensate for fundamentally misaligned questions. Ready to transform your survey design and unlock deeper customer understanding? Read the full guide here: https://lnkd.in/enQCXXnb

    #UXResearch #SurveyDesign #CognitivePsychology #CustomerInsights #UserExperience #DataQuality
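
    The four layers translate naturally into an ordered question plan. Here is a minimal sketch; the stage names follow the post, but the question wording is an illustrative assumption, not taken from the framework:

    ```python
    # Four layers, in the cognitive order the post describes:
    # context first, then emotions, then mental models, then ratings last.
    LAYERED_SURVEY = [
        ("Establish context", [
            "Which task were you trying to complete in your last session?",
            "Roughly when did you last use this feature?",
        ]),
        ("Surface emotions", [
            "How did you feel while working on that task?",
        ]),
        ("Uncover mental models", [
            "In your own words, what do you think happened, and why?",
        ]),
        ("Capture actionable insights", [
            "How satisfied were you with the outcome? (1-5)",
            "Which single improvement would help you most?",
        ]),
    ]

    def render_survey(stages):
        """Emit questions in order: memory activation first, ratings last."""
        number = 1
        for stage, questions in stages:
            print(f"-- {stage} --")
            for question in questions:
                print(f"{number}. {question}")
                number += 1

    render_survey(LAYERED_SURVEY)
    ```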

  • View profile for Wyatt Feaster 🫟

    Designer of 10+ years helping startups turn ideas into products | Founder of Ralee.co

    4,287 followers

    User research is great, but what if you do not have the time or budget for it?

    In an ideal world, you would test and validate every design decision. But that is not always the reality. Sometimes you do not have the time, access, or budget to run full research studies. So how do you bridge the gap between guessing and making informed decisions? These are some of my favorites:

    1️⃣ Analyze drop-off points: Where users abandon a flow tells you a lot. Are they getting stuck on an input field? Hesitating at the payment step? Running into bugs? These patterns reveal key problem areas (see the sketch after this post).

    2️⃣ Identify high-friction areas: Where users spend the most time can be good or bad. If a simple action is taking too long, that might signal confusion or inefficiency in the flow.

    3️⃣ Watch real user behavior: Tools like Hotjar (by Contentsquare) or PostHog let you record user sessions and see how people actually interact with your product. This exposes where users struggle in real time.

    4️⃣ Talk to customer support: They hear customer frustrations daily. What are the most common complaints? What issues keep coming up? This feedback is gold for improving UX.

    5️⃣ Leverage account managers: They are constantly talking to customers and solving their pain points, often without looping in the product team. Ask them what they are hearing. They will gladly share everything.

    6️⃣ Use survey data: A simple Google Forms, Typeform, or Tally survey can collect direct feedback on user experience and pain points.

    7️⃣ Reference industry leaders: Look at existing apps or products with features similar to what you are designing. Use them as inspiration to simplify your design decisions. Many foundational patterns have already been solved; there is no need to reinvent the wheel.

    I have used all of these methods throughout my career, but the trick is knowing when to use each one and when to push for proper user research. This comes with time. That said, not every feature or flow needs research. Some areas of a product are so well understood that testing does not add much value.

    What unconventional methods have you used to gather user feedback outside of traditional testing?

    _______
    👋🏻 I’m Wyatt—designer turned founder, building in public & sharing what I learn. Follow for more content like this!
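
    A minimal sketch of the drop-off analysis from item 1️⃣, assuming your analytics tool can export the furthest funnel step each user reached. The step names and numbers are made up for illustration:

    ```python
    from collections import Counter

    # Ordered funnel steps, plus the furthest step each user reached.
    FUNNEL = ["view_cart", "enter_address", "enter_payment", "confirm_order"]
    furthest_step = ["enter_payment", "view_cart", "confirm_order",
                     "enter_payment", "enter_address", "enter_payment"]

    def drop_off_rates(funnel, reached):
        """Report, for each step, how many users were lost before the next one."""
        reached_counts = Counter(reached)
        # Anyone who reached step i also passed through every earlier step,
        # so survivor counts accumulate from the end of the funnel backwards.
        survivors = []
        running = 0
        for step in reversed(funnel):
            running += reached_counts[step]
            survivors.append(running)
        survivors.reverse()
        for i, step in enumerate(funnel[:-1]):
            lost = survivors[i] - survivors[i + 1]
            print(f"{step} -> {funnel[i + 1]}: lost {lost} of {survivors[i]} "
                  f"({lost / survivors[i]:.0%})")

    drop_off_rates(FUNNEL, furthest_step)
    ```

    On this toy data the payment step loses 75% of its users, which is exactly the kind of pattern worth digging into first.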

  • View profile for Elizabeth Laraki

    Design Partner, Electric Capital

    7,831 followers

    When something feels off, I like to dig into why. I came across this feedback UX that intrigued me because it seemingly never ended (following a very brief interaction with a customer service rep). So here's a nerdy breakdown of feedback UX flows — what works vs what doesn't.

    A former colleague once introduced me to the German term "Salamitaktik," which roughly translates to asking for a whole salami one slice at a time. I thought about this recently when I came across Backcountry’s feedback UX. It starts off simple: “Rate your experience.” But then it keeps going. No progress indicator, no clear stopping point—just more questions.

    What makes this feedback UX frustrating?
    – Disproportionate to the interaction (too much effort for a small ask)
    – Encourages extreme responses (people with strong opinions stick around, others drop off)
    – No sense of completion (users don’t know when they’re done)

    Compare this to Uber’s rating flow: you finish a ride, rate 1-5 stars, and you’re done. A streamlined model—fast, predictable, actionable (the whole salami).

    So what makes a good feedback flow?
    – Respect users’ time
    – Prioritize the most important questions up front
    – Keep it short—remove anything unnecessary
    – Let users opt in to provide extra details
    – Set clear expectations (how many steps, where they are)
    – Allow users to leave at any time

    Backcountry’s current flow asks eight separate questions. But really, they just need two:
    1. Was the issue resolved?
    2. How well did the customer service rep perform?

    That’s enough to know if they need to follow up and to assess service quality, without overwhelming the user (sketched after this post). More feedback isn’t always better—better-structured feedback is. Backcountry’s feedback UX runs on Medallia, but this isn’t a tooling issue—it’s a design issue. Good feedback flows focus on signal, not volume.

    What are the best and worst feedback UXs you’ve seen?
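
    A minimal sketch of the stripped-down, two-question flow the post proposes, with an explicit opt-in for extra detail. The field names and the follow-up rule are illustrative assumptions:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SupportFeedback:
        """The two questions that matter, plus an opt-in comment."""
        issue_resolved: bool            # Q1: was the issue resolved?
        rep_rating: int                 # Q2: how well did the rep perform (1-5)?
        comment: Optional[str] = None   # asked only if the user opts in

        def needs_follow_up(self) -> bool:
            # Unresolved issues or low ratings get routed to a human.
            return not self.issue_resolved or self.rep_rating <= 2

    # One short form, a clear end, and no salami slicing.
    response = SupportFeedback(issue_resolved=False, rep_rating=2)
    if response.needs_follow_up():
        print("Route to support lead for follow-up")
    ```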

  • View profile for Bahareh Jozranjbar, PhD

    UX Researcher @ Perceptual User Experience Lab | Human-AI Interaction Researcher @ University of Arkansas at Little Rock

    8,025 followers

    User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise: a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it’s a structured method for learning what users think, feel, and need at scale, and a design artifact in its own right.

    Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you’re measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

    When we ask users, "How satisfied are you with this feature?" we might assume we're getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions, if properly timed and personalized.

    Sampling and segmentation are not just statistical details; they’re strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don't distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

    Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding (see the sketch after this post). One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time.

    The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

    Done well, UX surveys are not a support function; they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that's scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
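
    A minimal sketch of "beyond averages": two segments with nearly identical means but very different rating distributions, which an average alone would hide. The segment names and scores are made up for illustration:

    ```python
    from collections import Counter
    from statistics import mean

    # Illustrative 1-5 survey ratings, split by segment.
    ratings = {
        "power_users": [5, 5, 4, 5, 1, 1, 5, 1, 5, 4],   # polarized
        "new_users":   [3, 4, 3, 3, 4, 3, 4, 3, 4, 3],   # clustered mid-scale
    }

    for segment, scores in ratings.items():
        dist = Counter(scores)
        histogram = " ".join(f"{star}:{dist[star]}" for star in range(1, 6))
        print(f"{segment}: mean={mean(scores):.1f}  distribution {histogram}")
    ```

    Both segments average around 3.5, but the polarized distribution signals a segment worth interviewing, which the mean alone would never reveal.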

  • View profile for Shrey Khokhra

    AI agent for user interviews | Co-founder Userology | ex-Revolut, Snapdeal | BITS-Pilani

    8,617 followers

    During a usability test, have you ever noticed that users tend to put on their 'best performance' when they're being watched? You're likely witnessing the Hawthorne effect in action!

    It happens to us as well. When working from home, during meetings you're more attentive, nodding more, and sitting up straighter, not just because you're engaged, but because you're aware that your colleagues can see you. This subtle shift in your behaviour due to the awareness of being observed is a daily manifestation of observation bias, also known as the Hawthorne effect.

    In the context of UX studies, participants often alter their behaviour because they know they're being observed. They might persist through long loading times or navigate more patiently, not because that's their natural behaviour, but to meet what they perceive are the expectations of the researcher. This phenomenon can yield misleading data, painting a rosier picture of user satisfaction and interaction than is true, and it can skew research results because participants change how they interact with a product under observation.

    Here are some strategies to mitigate this bias in UX research:

    🤝 Build rapport: Set a casual tone from the start, engaging in small talk to ease participants into the testing environment and subtly guiding them without being overly friendly.

    🎯 Design realistic scenarios: Create tasks that reflect typical use cases to ensure participants' actions are as natural as possible.

    🗣 Ease into testing: Use casual conversation to make participants comfortable and clarify that the session is informal and observational.

    💡 Set clear expectations: Tell participants that their natural behavior is what's needed, and that there's no right or wrong way to navigate the tasks.

    ✅ Value honesty over perfection: Reinforce that the study aims to find design flaws, not user flaws, and that honest feedback is crucial.

    🛑 Remind them it's not a test: If participants apologise for mistakes, remind them that they're helping identify areas for improvement, not being graded.

    So the next time you're observing a test session and the participant seems to channel their inner tech wizard, remember—it might just be the Hawthorne effect rather than a sudden surge in digital prowess. Unmasking this 'performance' is key to genuine insights, because in the end, we're designing for humans, not stage actors.

    #uxresearch #uxtips #uxcommunity #ux

  • View profile for Jon MacDonald

    Turning user insights into revenue for top brands like Adobe, Nike, The Economist | Founder, The Good | Author & Speaker | thegood.com | jonmacdonald.com

    15,537 followers

    Don't make the mistake of thinking your work is done once a new site goes live. Most folks breathe a sigh of relief, ready to relax after months of intense effort. But treating launch day as the end goal is a critical error. To maximize your redesign investment, view it as "day one" of ongoing improvement. This approach allows you to quickly identify opportunities for change.

    Start by gathering quantitative data. Use tools like heatmaps to understand user behavior and conversion paths. Complement this with qualitative insights: conduct usability testing, analyze session recordings, and collect direct customer feedback.

    With data in hand, implement a rapid response protocol:
    ↳ Get cross-functional teams together to hunt for bugs and UX issues in the first 72 hours post-launch
    ↳ Compare pre- and post-launch metrics across channels and devices to highlight where to dig deeper (see the sketch after this post)
    ↳ Refresh your user testing and competitive analysis, even if you did this during the redesign
    ↳ Use all of this information to build a clear optimization roadmap, including a 90-day action plan

    Remember, your website is a living asset, not a static project. Continuous improvement is what separates market leaders from the competition. Don't fall into the "launch and leave" trap.
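
    A minimal sketch of the pre- vs post-launch comparison in the second step above, assuming you can export conversion rates per channel. The channel names, rates, and alert threshold are illustrative assumptions:

    ```python
    # Conversion rates by channel, before and after launch.
    pre_launch  = {"organic": 0.034, "paid": 0.021, "email": 0.052}
    post_launch = {"organic": 0.031, "paid": 0.026, "email": 0.050}

    def compare(pre, post, alert_threshold=0.05):
        """Flag channels whose conversion moved more than the threshold (relative)."""
        for channel in pre:
            delta = (post[channel] - pre[channel]) / pre[channel]
            flag = "  <-- dig deeper" if abs(delta) > alert_threshold else ""
            print(f"{channel:8s} {pre[channel]:.1%} -> {post[channel]:.1%} "
                  f"({delta:+.0%}){flag}")

    compare(pre_launch, post_launch)
    ```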

  • View profile for Ben Erez

    I help PMs ace product sense & analytical interviews | Ex-Meta | 3x first PM | Advisor

    20,018 followers

    Too many product teams believe meaningful user research has to involve long interviews, Zoom calls, and endless scheduling and note-taking. But honestly? You can get most of what you need without all that hassle. 🙅♂️

    I’ve conducted hundreds of live user research conversations in early-stage startups to inform product decisions, and over the years my thinking has evolved on the role of synchronous time. While there’s a place for real-time convos, I’ve found async tools like Loom often uncover sharper insights—faster—when used intentionally. 🚀

    Let’s break down the ROI of shifting to async. If you want to interview 5 people for 30 minutes each, that’s 150 minutes of calls—but because two people are on the call (you and the participant), you’re really spending 300 minutes of combined time. Now, let’s say you record a 3-minute Loom with a few focused questions, send it to those same 5 people, and they each take 5 minutes to write their feedback. That’s 8 minutes per person and just 5 minutes once for you: 45 total minutes versus 300. That’s nearly an order-of-magnitude reduction in the time it takes to get hyper-focused feedback (the arithmetic is sketched after this post). 🕒🔍

    Just record a quick Loom, pair it with 1-3 specific questions designed to mitigate key risks, and send it to the right people. This async, scrappy approach gathers real feedback throughout the entire product lifecycle (problem validation, solution exploration, or post-launch feedback) without wasting your users’ time or yours.

    Quick example: Imagine your team is torn between an opinionated implementation of a feature vs. a flexible/customizable one. If you walk through both in a quick Loom and ask five target users which they prefer and why, you’ll get a solid read on your overall user base’s mental model. No need for endless scheduling or drawn-out Zoom calls—just actionable feedback in minutes. 🎯

    As an added benefit, this approach also allows you to go back to users for more frequent feedback, because you’re asking for less of their time with each interaction. 🍪

    Note that if you haven’t yet established rapport with the users you’re sending the Looms to, it’s a good idea to introduce yourself at the start in a friendly, personal way. Plus, always make sure to express genuine appreciation and gratitude in the video—it goes a long way in building a connection and getting thoughtful responses. 🙏

    Now, don’t get me wrong—there’s still a place for synchronous research, especially in early discovery calls when it’s unclear exactly which problem or solution to focus on. Those calls are critical for diving deeper. But once you have a clear hypothesis and need targeted feedback, async tools can drastically reduce the time burden while keeping the signal strong. 💡

    Whether it’s problem validation, solution validation, or post-launch feedback, async research tools can get you actionable insights at every stage for a fraction of the time investment.
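
    The combined-time arithmetic from the post, written out as a quick sanity check (all numbers come straight from the example above):

    ```python
    participants = 5

    # Synchronous: five 30-minute calls, two people on each call.
    sync_minutes = participants * 30 * 2          # 300 combined minutes

    # Async: you spend 5 minutes once recording the Loom; each participant
    # watches it (3 min) and writes feedback (5 min).
    async_minutes = 5 + participants * (3 + 5)    # 45 combined minutes

    print(f"sync: {sync_minutes} min, async: {async_minutes} min "
          f"(~{sync_minutes / async_minutes:.1f}x less combined time)")
    ```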

  • View profile for Marina Krutchinsky

    UX Leader @ JPMorgan Chase | UX Leadership Coach | Helping experienced UXers break through career plateaus | 7,500+ newsletter readers

    34,754 followers

    💬 A couple of years ago, I was helping a SaaS startup make sense of their low retention rates. The real problem? The C-suite hesitated to allow direct conversations with users. Their reasoning was rooted in their desire to maintain strictly "white-glove-level relationships" with their high-paying clients and avoid bothering them with "unnecessary" queries.

    Not going deeper into the validity of their rationale, but here are some things I did instead to avoid guesswork or giving assumptive recommendations:

    1️⃣ Worked with internal teams: Obvious, right? But when each team works in its silo, lots of things fall through the cracks. So I got the customer success, support, and sales teams in the room together. We had several group discussions and identified critical common pain points they had heard from clients.

    2️⃣ Analytics deep-dive: Being a SaaS platform, the startup had extensive analytics built into their product. So we spent days analyzing usage patterns, funnels, and behavior flow charts. The data spoke louder than words in revealing where users spent most of their time and where drop-offs were most common.

    3️⃣ Social media as a primary feedback channel: We also started monitoring public forums and review sites, and tracked social media mentions. We collected a lot of useful insights through this unfiltered lens into users' many frustrations and occasional delights.

    4️⃣ Support tickets: This part was very tedious, but the support tickets were a goldmine of information. By classifying and analyzing the nature of user concerns, we were able to identify features that users found challenging or non-intuitive (a classification sketch follows this post).

    5️⃣ Competitive analysis: And of course, we looked at the competitors. What were users saying about them? What features or offerings were making them switch or consider alternatives?

    6️⃣ Internal usability tests: While I couldn't talk to users directly, I organized usability tests internally. By simulating user scenarios and tasks, we identified the main friction points in the critical user journeys. Ideal? No. But definitely eye-opening for the entire team building the platform.

    7️⃣ Listening in on sales demos: Last but not least, by attending sales demos as silent observers, we got to understand the questions potential customers asked, their concerns, and their initial reactions to the software.

    Nothing can replace solid, well-organized user research. But through these alternative methods, we managed to paint a more holistic picture of the end-to-end product experience without ever directly reaching out to users. These methods not only helped in pinpointing the issues leading to low retention, but also offered actionable recommendations for improvement.

    → And the result? A more refined, user-centric product that saw an uptick in retention, all without ruffling a single white glove 😉

    #ux #uxr #startupchallenges #userretention
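
    A minimal sketch of the ticket classification in item 4️⃣, using simple keyword matching. The themes, keywords, and ticket text are illustrative assumptions; in practice the keyword lists come from reading a sample of tickets first:

    ```python
    from collections import Counter

    THEMES = {
        "onboarding":  ["setup", "getting started", "invite", "activation"],
        "billing":     ["invoice", "charge", "refund", "pricing"],
        "performance": ["slow", "timeout", "lag", "loading"],
    }

    tickets = [
        "Dashboard is really slow to load every morning",
        "Cannot find where to invite teammates during setup",
        "Was charged twice, need a refund",
        "Export keeps hitting a timeout",
    ]

    def classify(ticket):
        """Tag a ticket with every theme whose keywords appear in its text."""
        text = ticket.lower()
        matches = [theme for theme, words in THEMES.items()
                   if any(word in text for word in words)]
        return matches or ["unclassified"]

    counts = Counter(theme for ticket in tickets for theme in classify(ticket))
    for theme, n in counts.most_common():
        print(f"{theme}: {n} tickets")
    ```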
