Conducting User Experience Interviews

Explore top LinkedIn content from expert professionals.

  • Jesse Zhang

    CEO / Co-Founder at Decagon

    35,907 followers

    "Talk to customers" is classic startup advice. But not enough folks teach you how to talk to users in a way that gets you actual insights. Since launching Decagon and raising $100M over 3 rounds, we’ve learned a lot, especially about GTM. Here's how we've adapted our customer conversations to go beyond surface-level excitement and uncover real signals of value. We benchmark around dollars when discussing product features. Why? Because it’s easy to run a customer interview where the customer seems thrilled about a new idea we have. But excitement alone doesn’t tell you if a piece of feedback is truly valuable. The only way to find out is to ask the hard questions: → Is this something your team would invest in right now? → How much would you pay for it? → What’s the ROI you’d expect? Questions like these don’t allow for generic answers—they'll give you real clarity into a customer's willingness to pay. For example: say you float a product idea past a potential user. They're stoked by it. Then you ask how much they'd pay for said product—and the answer is $50 per person for a 3-person team. Is that worth building? It might be, depending on the outcome you're shooting for. But if your goal is to build an enterprise-grade product, that buying intent (or lack thereof) isn't going to cut it. If you'd stopped the interview at the surface-level excitement, you might have sent yourself on a journey building a product that isn't viable. By assessing true willingness to pay you can prioritize building what users find valuable versus what might sound good in theory. Get to the dollars as quickly as you can. It’s an approach that has helped us align our roadmap with what customers truly need and ensure we’re building a product that has a measurable impact.

  • Aakash Gupta

    The AI PM Guy 🚀 | Helping you land your next job + succeed in your career

    289,547 followers

    With research teams shrinking from layoffs, PMs are now expected to do more with less. But without the right approach, PM-led session replays can lead to false insights. Here are the 4 costly mistakes to avoid for real progress:

    𝗖𝗼𝗺𝗺𝗼𝗻 𝗣𝗶𝘁𝗳𝗮𝗹𝗹𝘀
    Even experienced teams can fall into these traps. Here's how to steer clear of them.

    𝗠𝗶𝘀𝘁𝗮𝗸𝗲 𝟭 - 𝗙𝗮𝗹𝗹𝗶𝗻𝗴 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝗖𝗼𝗻𝗳𝗶𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗕𝗶𝗮𝘀 𝗧𝗿𝗮𝗽
    ➔ What it is: Confirmation bias happens when we unknowingly seek out evidence that supports our assumptions. It's easy to go into session replays with a pre-formed theory about what users will do.
    ➔ What happens: Instead of observing objectively, we end up interpreting every click and pause as "proof" of our theory.
    ➔ How to fix it: Watch sessions with other team members who bring fresh perspectives and uncover what's really happening.

    𝗠𝗶𝘀𝘁𝗮𝗸𝗲 𝟮 - 𝗧𝗵𝗲 𝗦𝘁𝗮𝘁𝗶𝘀𝘁𝗶𝗰𝗮𝗹 𝗦𝗶𝗴𝗻𝗶𝗳𝗶𝗰𝗮𝗻𝗰𝗲 𝗠𝘆𝘁𝗵
    ➔ What it is: The assumption that only large-scale patterns or trends are meaningful. The classic "But it's just one user!" trap.
    ➔ What happens: Valuable insights are overlooked simply because they come from one or two users instead of thousands.
    ➔ How to fix it: Focus on depth, not numbers. A single user's struggle can reveal a critical friction point. Treat each session as a deep dive into the user experience.

    𝗠𝗶𝘀𝘁𝗮𝗸𝗲 𝟯 - 𝗧𝗵𝗲 𝗤𝘂𝗶𝗰𝗸-𝗙𝗶𝘅 𝗧𝗲𝗺𝗽𝘁𝗮𝘁𝗶𝗼𝗻
    ➔ What it is: Product teams are naturally drawn to solving problems, so when we see an issue in a session replay, we want to fix it right away.
    ➔ What happens: By jumping to a solution too quickly, we might miss underlying patterns or broader issues that would have become clearer with a bit more patience.
    ➔ How to fix it: Slow down and watch at least three more sessions before you act. This gives you a better sense of whether the issue calls for a quick patch or a more strategic fix (see the sketch after this post).

    𝗠𝗶𝘀𝘁𝗮𝗸𝗲 𝟰 - 𝗢𝘃𝗲𝗿𝗹𝗼𝗼𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗗𝗲𝘁𝗮𝗶𝗹𝘀
    ➔ What it is: Session replays are only as valuable as the data they capture. Small but essential interactions, like mouse movements and scroll patterns, can reveal a lot about user friction.
    ➔ What happens: If these details aren't captured, you're left with an incomplete picture of the user's journey, missing key friction points.
    ➔ How to fix it: Double-check that your session replay tool is configured to capture all critical interactions.

    𝗧𝗵𝗲 𝗙𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁
    The future of product management is about combining quantitative insights, qualitative depth (session replays, using tools like LogRocket), direct feedback (user research), and predictive foresight (AI).

    If you want to stay ahead of the curve with advanced techniques and strategies for product management and career growth, check out the newsletter.
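
    One lightweight way to act on Mistake 3 (while respecting Mistake 2's caution) is to tally issues across the replays you watch before deciding what to escalate. A minimal sketch; the tags and the three-session threshold are hypothetical:

    ```python
    from collections import Counter

    # Hypothetical tags jotted down while watching replays, one list per session.
    session_tags = [
        ["rage_click_pricing", "form_error"],
        ["form_error"],
        ["rage_click_pricing"],
        ["form_error", "slow_load"],
    ]

    counts = Counter(tag for tags in session_tags for tag in tags)

    # Escalate issues seen in three or more sessions; keep the rest on a watchlist.
    # A one-off can still merit a deep dive (Mistake 2), so nothing is discarded.
    for tag, n in counts.most_common():
        status = "investigate now" if n >= 3 else "keep watching"
        print(f"{tag}: seen in {n} session(s) -> {status}")
    ```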

  • Kritika Oberoi

    Founder at Looppanel | User research at the speed of business | Eliminate guesswork from product decisions

    28,732 followers

    Over-structuring a user interview can kill organic discovery. While planning a user interview, keep this balance in mind: 70% structure, 30% exploration.

    📐 70% is a structured framework
    Build a discussion guide with clear objectives. Define your must-have insights. Map your key questions. If you're researching a checkout flow, plan questions about payment methods, form fields, and error states.

    📐 30% is where the magic happens
    Leave a little room for the unexpected. A user mentions they switch between mobile and desktop mid-checkout? Follow that thread. That passing comment about why they never use feature X? Dig deeper. It could lead to a breakthrough insight!

    💡 Pro tip: Start with a 60-minute timer, but only plan for 45. Keep the extra 15 minutes for spontaneous tangents. Have a list of probing questions ready, but choose your own adventure as it happens.

    Here's a useful discussion guide template to bookmark: https://lnkd.in/dXcZqJDY

    I send a newsletter out every fortnight filled with best-in-the-biz tips for researchers. Don't miss out, sign up here: https://lnkd.in/daufT7SJ

  • Natalie Nixon, PhD

    The Global Authority on WonderRigor™️ | I help leaders catalyze creativity’s ROI. | Top 50 Keynote Speakers in the World | Creativity Strategist | Advisor | Author

    24,707 followers

    Ensure all voices are heard by leaning into CURIOSITY! Designing inclusive working sessions can start by inviting questions from EVERYONE. For example, the techniques below honor introverted voices and foster diverse perspectives. Try some of them in your next meeting or collaboration session:

    Quiet Reflection Time:
    ↳ Give participants quiet time to gather their thoughts before the discussion opens.

    Structured Brainstorming Sessions:
    ↳ Ensure each participant has designated speaking time to reduce pressure.

    Rotating Facilitators:
    ↳ Vary leadership styles and ensure diverse voices are heard throughout discussions.

    One-on-One Discussions or Smaller Group Settings:
    ↳ Provide intimate settings where introverts can freely express their ideas.

    Techniques like these create an environment where everyone feels comfortable sharing their thoughts. This approach isn't just about diversity. It's about harnessing the power of all perspectives. Together, we can foster environments where every voice contributes to success. Let's ensure that every team member feels empowered to bring their best to the table.

  • Bahareh Jozranjbar, PhD

    UX Researcher @ Perceptual User Experience Lab | Human-AI Interaction Researcher @ University of Arkansas at Little Rock

    8,025 followers

    If you're a UX researcher working with open-ended surveys, interviews, or usability session notes, you probably know the challenge: qualitative data is rich, but messy. Traditional coding is time-consuming, sentiment tools feel shallow, and it's easy to miss the deeper patterns hiding in user feedback.

    These days, we're seeing new ways to scale thematic analysis without losing nuance. These aren't just tweaks to old methods; they offer genuinely better ways to understand what users are saying and feeling.

    Emotion-based sentiment analysis moves past generic "positive" or "negative" tags. It surfaces real emotional signals (like frustration, confusion, delight, or relief) that help explain user behaviors such as feature abandonment or repeated errors.

    Theme co-occurrence heatmaps go beyond listing top issues and show how problems cluster together, helping you trace root causes and map out entire UX pain chains.

    Topic modeling, especially using LDA, automatically identifies recurring themes without needing predefined categories, which is perfect for processing hundreds of open-ended survey responses fast.

    And MDS (multidimensional scaling) lets you visualize how similar or different users are in how they think or speak, making it easy to spot shared mindsets, outliers, or cohort patterns.

    These methods are game-changers. They don't replace deep research; they make it faster, clearer, and more actionable. I've been building these into my own workflow using R, and they've made a big difference in how I approach qualitative data. If you're working in UX research or service design and want to level up your analysis, these are worth trying.
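
    The author works in R, but the LDA pass described above is easy to sketch in Python with scikit-learn. The survey responses below are toy stand-ins; real inputs would be your open-ended answers:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy open-ended survey responses (placeholders for real data).
    responses = [
        "checkout kept failing when I tried to pay with my saved card",
        "love the new dashboard, the charts load so much faster now",
        "could not find where to update my payment method",
        "dashboard filters are confusing and reset every time",
        "payment errors made me abandon my cart twice",
        "the charts on the dashboard are easy to read",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(responses)

    # Fit LDA with a small number of themes; tune n_components on real data.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(doc_term)

    # Show the highest-weight words for each discovered theme.
    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = weights.argsort()[::-1][:6]
        print(f"Theme {i}:", ", ".join(terms[j] for j in top))
    ```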

  • Ben Erez

    I help PMs ace product sense & analytical interviews | Ex-Meta | 3x first PM | Advisor

    20,018 followers

    I love using Claude projects for rounds of customer research. Here's my workflow:

    I start by setting the Project Instructions with something like this, adding a bit more context where helpful about my goals:

    "Most of the chats in this project will start with me providing a PDF transcript of a user interview. I would like you to create a summary of the call, highlighting 2-4 key quotes from the call for problem validation and 2-4 key quotes from the call for solution excitement. I would like the key quotes to be accompanied by a timestamp from the call. Then, I would like you to create an artifact that I can add to the project knowledge summarizing the above for this one research call."

    After every research call, I export the interview transcript as a PDF from Grain, add it to a new chat in the Claude project, hit `enter`, and copy the generated artifact into the project knowledge.

    Once I've finished the round of research interviews (days/weeks later), I can start new chats asking questions about the sum of the research, such as:

    ↳ Which research participants were most excited about the solutions we discussed?
    ↳ If you consider calls in which pain points were validated, which pain points were most common (how many participants validated the pain point and what were their names)?
    ↳ If I wanted to follow up with participants to alpha test a solution that does [describe functionality], which research participant would be the best person to target for this first and why?

    I've already run multiple rounds of research, each involving ~20 interviews, and this process works like magic. Thinking back to doing all of this manually, using AI in this way has probably reduced the time investment needed for synthesizing findings from a round of research by a solid 80%. This gives even more time for customer conversations. What a time to be building!
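
    The workflow above lives in the Claude web app's Projects feature. For anyone who prefers to script the per-call summarization step, here is a rough Python equivalent using Anthropic's SDK; the model alias and prompt wording are assumptions, and the transcript text would come from your own Grain exports:

    ```python
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You will receive a user-interview transcript. Summarize the call, "
        "highlighting 2-4 key quotes for problem validation and 2-4 key quotes "
        "for solution excitement, each with a timestamp from the call."
    )

    def summarize_call(transcript_text: str) -> str:
        """Return a per-call research summary suitable for a knowledge base."""
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # assumed alias; any current model works
            max_tokens=1024,
            system=SYSTEM_PROMPT,
            messages=[{"role": "user", "content": transcript_text}],
        )
        return response.content[0].text

    # Usage: summaries = [summarize_call(t) for t in transcripts]
    ```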

  • Bryan Zmijewski

    Started and run ZURB. 2,500+ teams made design work.

    12,258 followers

    People often say what they think they should say.

    I had a great exchange with 👋 Brandon Spencer, who highlighted the challenges of using qualitative user research. He suggested that qual responses are helpful, but you have to read between the lines more than you do when watching what people do: they often say what they think they should be saying, and do what they naturally would.

    I agree. Based on my digital experiences, there are several reasons for this behavior. People start with what they know or feel, filtered by their long-term memory.

    Social bias
    ↳ People often say what they think they should be saying because they want to present themselves positively, especially in social or evaluative situations.

    Jakob's Law
    ↳ Users spend most of their time on other sites, so they describe your site/app in terms of the sites they already know.

    Resolving these issues in UX research requires a multi-faceted approach that considers what users say (user wants) and what they do (user needs) while accounting for biases and user expectations. Here's how we tackle these issues:

    1. Combine qualitative and quantitative research
    We use Helio to pull qualitative insights to understand the "why" behind user behavior, but validate these insights with quantitative data (e.g., structured behavioral questions). This helps to balance what users say with what they do (a minimal sketch of this cross-check follows the post).

    2. Test baselines with your competitors
    Compare your design with common patterns with which users are familiar. Knowing this information reduces cognitive load and makes it easier for users to interact naturally with your site on common tasks.

    3. Allow anonymity
    Allow users to provide feedback anonymously to reduce the pressure to present themselves positively. Helio automatically does this while still creating targeted audiences. We also don't do video. This can lead to more honest and authentic responses.

    4. Neutral questioning
    We frame questions to reduce the likelihood of leading or socially desirable answers. For example, ask open-ended questions that don't imply a "right" answer.

    5. Natural settings
    Engage with users in their natural environment and on their own devices to observe their real behavior and reduce the influence of social bias. Helio is a remote platform, so people can respond wherever they want.

    The last thing we have found is that by asking more in-depth questions and increasing participants, you can gain stronger insights by cross-referencing data.

    → Deeper: When users give expected or socially desirable answers, ask follow-up questions to explore their true thoughts and behaviors.
    → Wider: Expand your sample size (we test with 100 participants) and keep testing regularly. We gather 10,000 customer answers each month, which helps create a broader and more reliable data set.

    Achieving a more accurate and complete understanding of user behavior is possible, leading to better design decisions.

    #productdesign #productdiscovery #userresearch #uxresearch
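
    Point 1 above (balancing what users say against what they do) can be made concrete with a simple cross-check. A minimal pandas sketch; the columns and participants are hypothetical:

    ```python
    import pandas as pd

    # Hypothetical per-participant data: a stated opinion from the interview
    # alongside an observed behavioral outcome from a structured task.
    df = pd.DataFrame({
        "participant": ["p1", "p2", "p3", "p4"],
        "says_feature_useful": [True, True, True, False],  # qualitative (stated)
        "completed_task": [False, True, False, False],     # quantitative (observed)
    })

    # Flag say/do gaps: enthusiasm in words, friction in behavior.
    df["say_do_gap"] = df["says_feature_useful"] & ~df["completed_task"]
    print(df[df["say_do_gap"]]["participant"])  # follow up with these people
    ```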

  • Wyatt Feaster 🫟

    Designer of 10+ years helping startups turn ideas into products | Founder of Ralee.co

    4,285 followers

    User research is great, but what if you do not have the time or budget for it?

    In an ideal world, you would test and validate every design decision. But that is not always the reality. Sometimes you do not have the time, access, or budget to run full research studies. So how do you bridge the gap between guessing and making informed decisions? These are some of my favorites:

    1️⃣ Analyze drop-off points: Where users abandon a flow tells you a lot. Are they getting stuck on an input field? Hesitating at the payment step? Running into bugs? These patterns reveal key problem areas. (A minimal funnel sketch follows this post.)

    2️⃣ Identify high-friction areas: Where users spend the most time can be good or bad. If a simple action is taking too long, that might signal confusion or inefficiency in the flow.

    3️⃣ Watch real user behavior: Tools like Hotjar | by Contentsquare or PostHog let you record user sessions and see how people actually interact with your product. This exposes where users struggle in real time.

    4️⃣ Talk to customer support: They hear customer frustrations daily. What are the most common complaints? What issues keep coming up? This feedback is gold for improving UX.

    5️⃣ Leverage account managers: They are constantly talking to customers and solving their pain points, often without looping in the product team. Ask them what they are hearing. They will gladly share everything.

    6️⃣ Use survey data: A simple Google Forms, Typeform, or Tally survey can collect direct feedback on user experience and pain points.

    7️⃣ Reference industry leaders: Look at existing apps or products with similar features to what you are designing. Use them as inspiration to simplify your design decisions. Many foundational patterns have already been solved; there is no need to reinvent the wheel.

    I have used all of these methods throughout my career, but the trick is knowing when to use each one and when to push for proper user research. This comes with time. That said, not every feature or flow needs research. Some areas of a product are so well understood that testing does not add much value.

    What unconventional methods have you used to gather user feedback outside of traditional testing?

    _______

    👋🏻 I’m Wyatt—designer turned founder, building in public & sharing what I learn. Follow for more content like this!
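
    For tip 1️⃣, drop-off analysis reduces to counting unique users at each step of a funnel. A minimal pandas sketch; the event log and step names are hypothetical:

    ```python
    import pandas as pd

    # Hypothetical event log: one row per user per funnel step reached.
    events = pd.DataFrame({
        "user": ["a", "a", "a", "b", "b", "c", "c", "c"],
        "step": ["cart", "shipping", "payment",
                 "cart", "shipping",
                 "cart", "shipping", "payment"],
    })

    FUNNEL = ["cart", "shipping", "payment", "confirm"]

    # Unique users reaching each step, in funnel order.
    reached = events.groupby("step")["user"].nunique().reindex(FUNNEL, fill_value=0)

    # Share of users lost relative to the previous step.
    drop_off = (1 - reached / reached.shift(1)).round(2)
    print(pd.DataFrame({"users": reached, "drop_off_rate": drop_off}))
    ```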

  • Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science

    10,323 followers

    A good survey works like a therapy session. You don't begin by asking for deep truths; you guide the person gently through context, emotion, and interpretation. When done in the right sequence, your questions help people articulate thoughts they didn't even realize they had.

    Most UX surveys fall short not because users hold back, but because the design doesn't help them get there. They capture behavior and preferences but often miss the emotional drivers, unmet expectations, and mental models behind them. In cognitive psychology, we understand that thoughts and feelings exist at different levels. Some answers come automatically, while others require reflection and reconstruction. If a survey jumps straight to asking why someone was frustrated, without first helping them recall the situation or how it felt, it skips essential cognitive steps. This often leads to vague or inconsistent data.

    When I design surveys, I use a layered approach grounded in models like Levels of Processing, schema activation, and emotional salience. It starts with simple, context-setting questions like "Which feature did you use most recently?" or "How often do you use this tool in a typical week?" These may seem basic, but they activate memory networks and help situate the participant in the experience. Visual prompts or brief scenarios can support this further.

    Once context is active, I move into emotional or evaluative questions (still gently), asking things like "How confident did you feel?" or "Was anything more difficult than expected?" These help surface emotional traces tied to memory. Using sliders or response ranges allows participants to express subtle variations in emotional intensity, which matters because emotion often turns small usability issues into lasting negative impressions.

    After emotional recall, we move into the interpretive layer, where users start making sense of what happened and why. I ask questions like "What did you expect to happen next?" or "Did the interface behave the way you assumed it would?" to uncover the mental models guiding their decisions. At this stage, responses become more thoughtful and reflective. While we sometimes use AI-powered sentiment analysis to identify patterns in open-ended responses, the real value comes from the survey's structure, not the tool.

    Only after guiding users through context, emotion, and interpretation do we include satisfaction ratings, prioritization tasks, or broader reflections. When asked too early, these tend to produce vague answers. But after a structured cognitive journey, feedback becomes far more specific, grounded, and actionable. Adaptive paths or click-to-highlight elements often help deepen this final stage.

    So, if your survey results feel vague, the issue may lie in the pacing and flow of your questions. A great survey doesn't just ask, it leads. And when done right, it can uncover insights as rich as any interview.

    *I've shared an example structure in the comment section.
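
    The layered sequence above is essentially an ordering constraint on the question bank, so it can be encoded directly as data. A minimal sketch; the early-stage wording reuses the post's own examples, while the final-stage items are illustrative assumptions:

    ```python
    # Context -> emotion -> interpretation -> evaluation, as described above.
    SURVEY_STAGES = [
        ("context", [
            "Which feature did you use most recently?",
            "How often do you use this tool in a typical week?",
        ]),
        ("emotion", [
            "How confident did you feel? (slider, 1-7)",
            "Was anything more difficult than expected?",
        ]),
        ("interpretation", [
            "What did you expect to happen next?",
            "Did the interface behave the way you assumed it would?",
        ]),
        ("evaluation", [  # asked last, after the cognitive journey
            "How satisfied are you overall? (1-7)",
            "Which of these issues should we fix first?",
        ]),
    ]

    for stage, questions in SURVEY_STAGES:
        print(stage.upper())
        for q in questions:
            print(" -", q)
    ```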

  • Shelby Astle, PhD

    UX Researcher @ Key Lime Interactive 💚 | 8 years leading mixed-methods research on SaaS, AI/ML, FinTech, and EdTech products | Lego Builder & serial hobbyist

    3,580 followers

    Three things I'm working on to improve my interview moderation skills:

    1. No ellipsis questions ❓

    I'm working on leaving questions at the question, without giving examples (these often trail off with an ellipsis): "How do you communicate with your co-workers? Do you use texting, email, Zoom, a chat app...?"

    It's often instinctive to ask questions like this in everyday life (especially for over-explainers like myself 👋), but they can get in the way of what you're really trying to accomplish in a user interview. To paraphrase Steve Portigal on the Design Better podcast: when we give suggestions of possible answers in the question, it can limit the participant's ability to reflect, unintentionally train them how they should respond, and get in the way of deep & authentic responses. We often have really good intentions for asking ellipsis questions. We want to build rapport, scaffold, connect, and support participants, but these questions can actually have the opposite effect.

    2. Allow silence 🤫

    We've all heard this one before, but it's much easier said than done. Allowing silence during an interview gives the participant time to think and reflect, often leading to deeper and more insightful responses. It can also make the interviewee feel more comfortable and less rushed, encouraging them to share more candidly and thoroughly.

    I've noticed that when I'm taking notes during an interview, sometimes I'll need an extra beat to finish up a note after a participant answers a question. That unintentional moment of silence while I'm typing, after I think they're done responding, often leads to participants adding an even deeper or more reflective response that I wouldn't have gotten if I had rushed on to the next question. I'm working on doing this much more intentionally.

    3. Redirect if participants start talking about others ↪️

    I'm working on redirecting when people say things like: "Well, I could see some people using this feature, maybe if they need to communicate with external and internal team members..." or "Maybe this would work for iPhone users..."

    When a participant starts talking about what other people might want rather than their own opinions, you can gently guide the conversation back to their personal perspective. Remind them that you're interested in their personal experiences and use probing questions to bring the discussion back to their own needs & opinions. Acknowledge their points about others, but reframe and redirect the conversation to highlight their individual perspective.

    "You mentioned that others might find this useful. How about you? How would you use it?"
    "I can see why that might be important to some. How do you feel about it? How would it impact your experience?"
    "That's interesting. Can you tell me more about what you personally would like or need in this product?"

    🤔 What are things you're working on to improve your interview moderation skills?

    #UserInterviews #UXResearch #UserExperience #UsabilityTesting
