"Talk to customers" is classic startup advice. But not enough folks teach you how to talk to users in a way that gets you actual insights. Since launching Decagon and raising $100M over 3 rounds, we’ve learned a lot, especially about GTM. Here's how we've adapted our customer conversations to go beyond surface-level excitement and uncover real signals of value. We benchmark around dollars when discussing product features. Why? Because it’s easy to run a customer interview where the customer seems thrilled about a new idea we have. But excitement alone doesn’t tell you if a piece of feedback is truly valuable. The only way to find out is to ask the hard questions: → Is this something your team would invest in right now? → How much would you pay for it? → What’s the ROI you’d expect? Questions like these don’t allow for generic answers—they'll give you real clarity into a customer's willingness to pay. For example: say you float a product idea past a potential user. They're stoked by it. Then you ask how much they'd pay for said product—and the answer is $50 per person for a 3-person team. Is that worth building? It might be, depending on the outcome you're shooting for. But if your goal is to build an enterprise-grade product, that buying intent (or lack thereof) isn't going to cut it. If you'd stopped the interview at the surface-level excitement, you might have sent yourself on a journey building a product that isn't viable. By assessing true willingness to pay you can prioritize building what users find valuable versus what might sound good in theory. Get to the dollars as quickly as you can. It’s an approach that has helped us align our roadmap with what customers truly need and ensure we’re building a product that has a measurable impact.
How To Use User Experience Interviews To Drive Innovation
Summary
Understanding how to use user experience (UX) interviews to drive innovation involves having structured, intentional conversations with your target audience to extract valuable insights that can shape product development. The goal is to go beyond surface-level feedback and uncover actionable data that aligns with user needs and business objectives.
- Ask targeted questions: Focus on specific user behaviors, such as their current practices, challenges, and willingness to invest in solutions, rather than their preferences or opinions.
- Use real-world examples: Present users with concrete scenarios or prototypes to gather actionable and context-driven feedback rather than hypothetical responses.
- Link findings to decisions: Ensure each insight directly informs a product decision by connecting it to measurable objectives and business outcomes.
Let's face it: most user interviews are a waste of time and resources. Teams conduct hours of interviews yet still build features nobody uses. Stakeholders sit through research readouts but continue to make decisions based on their gut instincts. Researchers themselves often struggle to extract actionable insights from their conversation transcripts.

Here's why traditional user interviews so often fail to deliver value:

1. They're built on a faulty premise. The conventional interview assumes users can accurately report their own behaviors, preferences, and needs. People are notoriously bad at understanding their own decision-making processes and predicting their future actions.

2. They collect opinions, not evidence. "What do you think about this feature?" "Would you use this?" "How important is this to you?" These standard interview questions generate opinions, not evidence. Opinions (even from your target users) are not reliable predictors of actual behavior.

3. They're plagued by cognitive biases. From social desirability bias to overweighting recent experiences to confirmation bias, interviews are a minefield of cognitive distortions.

4. They're often conducted too late. Many teams turn to user interviews after the core product decisions have already been made. They become performative exercises to validate existing plans rather than tools for genuine discovery.

5. They're frequently disconnected from business metrics. Even when interviews yield interesting insights, they often fail to connect directly to the metrics that drive business decisions, making it easy for stakeholders to dismiss the findings.

👉 Here's how to transform them from opinion-collection exercises into powerful insight generators:

1. Focus on behaviors, not preferences. Instead of asking what users want, focus on what they actually do. Have users demonstrate their current workflows, complete tasks while thinking aloud, and walk through their existing solutions.

2. Use concrete artifacts and scenarios. Abstract questions yield abstract answers. Ground your interviews in specific artifacts. Have users react to tangible options rather than imagining hypothetical features.

3. Triangulate across methods. Pair qualitative insights with behavioral data and other sources of evidence. When you find contradictions, dig deeper to understand why users' stated preferences don't match their actual behaviors.

4. Apply framework-based synthesis. Move beyond simply highlighting interesting quotes. Apply structured frameworks to your analysis.

5. Directly connect findings to decisions. For each research insight, explicitly identify which product decisions it should influence and how success will be measured. This makes it much harder for stakeholders to ignore your recommendations. (A minimal sketch of this mapping follows this post.)

What's your experience with user interviews? Have you found ways to make them more effective? Or have you discovered other methods that deliver deeper user insights?
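One way to operationalize point 5 is to store every insight alongside the decision it should drive and the metric that will judge it. Here is a minimal sketch under that assumption; the field names and the example record are hypothetical, not from the post.

```python
# A minimal sketch (illustrative, not the post author's framework) of tying
# each research insight to a product decision and a measurable outcome.

from dataclasses import dataclass

@dataclass
class ResearchInsight:
    insight: str          # what the evidence showed
    evidence: str         # behavioral data backing it, not just quotes
    decision: str         # the product decision this should influence
    success_metric: str   # how we'll know the decision worked
    target: str           # measurable objective tied to the metric

insights = [
    ResearchInsight(
        insight="Users abandon setup at the integration step",
        evidence="7/10 interviewees stalled there; funnel drop-off is 42%",
        decision="Ship guided one-click integrations next quarter",
        success_metric="Setup completion rate",
        target="Raise completion from 58% to 80%",
    ),
]

# One readout row per insight makes findings hard for stakeholders to ignore:
for r in insights:
    print(f"{r.insight} -> {r.decision} "
          f"(measure: {r.success_metric}, target: {r.target})")
```

The design choice here is simply that an insight without a decision and a metric attached is not yet finished research; the structure forces that conversation.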
There's something deeply satisfying about watching a well-run qualitative interview quietly evolve into a sophisticated quantitative model. I always tell my students and collaborators that, when done right, a simple conversation can eventually fuel something as complex as Structural Equation Modeling. It might sound like a stretch, but it's really not.

I went through this exact process in a study where we aimed to understand why users trust or reject a new product. Like many applied UX projects, we started with messy assumptions and vague ideas. We knew launching a survey would be premature, so we turned to interviews.

We had open but focused (guided) conversations with users. Certain phrases kept surfacing. Some participants talked about feeling "disconnected" from the product, even though they found it useful. Others compared it to brands they already trusted, which clearly shaped their expectations. These comments weren't dramatic, but they hinted at deeper structures behind user decisions.

I worked through the transcripts by reading closely, making notes in the margins, and sketching out connections. There was no formal codebook in the beginning. Instead, I relied on a grounded, intuitive approach shaped by years of dealing with messy real-world data. Over time, themes began to take shape. Emotional tone, familiarity, and social alignment emerged as key ideas. These did not come from forcing responses into predefined buckets, but from how users naturally framed their experiences.

It was far from a clean process. I constantly revisited groupings, challenged my own interpretations, and asked whether I was seeing real patterns or just noise. But that back-and-forth reflection is exactly where the model began to form.

Once the ideas felt more stable, I started thinking about structure. One pattern stood out. When users described the product's emotional tone early in the conversation, using words like "cold" or "inviting," they often brought up trust later on. That sequence did not happen in reverse. It was a small but (almost!) consistent thread, and it became the basis for one of our causal paths.

This is something people often overlook. Interviews do more than offer themes; they can reveal directionality. If you listen closely, the order in which ideas appear can show you which concepts come first, which serve as bridges, and how the entire experience unfolds in the user's mind.

Eventually, we translated those themes into measurable constructs and tested the model with survey data. Turning rich, emotional language into structured scale items was not easy. (A toy version of this step appears after this post.) The final SEM model did not just fit the data well. It helped us predict how users would respond to different messaging and revealed emotional drop-off points we might have missed otherwise.

All of that came from listening first, not guessing. Interviews are not the soft side of research. They are the foundation that allows your most complex methods to stand on something real.
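For readers curious what the quantitative end of this pipeline can look like, here is a minimal sketch in Python using the third-party semopy library, which accepts lavaan-style model syntax. The construct names echo the themes in the post, but the indicator items, simulated data, and path structure are illustrative assumptions, not the study's actual model.

```python
# A minimal SEM sketch with semopy (lavaan-style syntax). Constructs mirror
# the post's themes; indicators and simulated data are assumptions.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Simulate survey responses so the example runs end to end. In the real study
# these columns would be scale items answered by actual participants.
rng = np.random.default_rng(42)
n = 300
tone = rng.normal(size=n)                                        # latent: emotional tone
fam = rng.normal(size=n)                                         # latent: familiarity
trust = 0.5 * tone + 0.4 * fam + rng.normal(scale=0.6, size=n)   # latent: trust

df = pd.DataFrame({
    **{f"tone{i}": tone + rng.normal(scale=0.5, size=n) for i in (1, 2, 3)},
    **{f"fam{i}": fam + rng.normal(scale=0.5, size=n) for i in (1, 2, 3)},
    **{f"trust{i}": trust + rng.normal(scale=0.5, size=n) for i in (1, 2, 3)},
})

# Measurement model (theme -> scale items) plus the structural path suggested
# by the order in which ideas surfaced in the interviews.
desc = """
EmotionalTone =~ tone1 + tone2 + tone3
Familiarity =~ fam1 + fam2 + fam3
Trust =~ trust1 + trust2 + trust3
Trust ~ EmotionalTone + Familiarity
"""

model = Model(desc)
model.fit(df)
print(model.inspect())      # factor loadings and path estimates
print(calc_stats(model).T)  # fit indices such as CFI and RMSEA
```

Note how the structural line (`Trust ~ ...`) encodes exactly the directionality claim from the interviews: emotional tone and familiarity as antecedents of trust, not the reverse.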