💬 A couple of years ago, I was helping a SaaS startup make sense of their low retention rates. The real problem? The C-suite hesitated to allow direct conversations with users. Their reasoning was rooted in a desire to maintain strictly "white-glove-level relationships" with their high-paying clients and to avoid bothering them with "unnecessary" queries.

I won't get into the validity of that rationale, but here's what I did instead to avoid guesswork and assumptive recommendations:

1️⃣ Worked with internal teams: Obvious, right? But when each team works in its own silo, a lot falls through the cracks. So I got the customer success, support, and sales teams in a room together. Over several group discussions, we identified the critical pain points they kept hearing from clients.

2️⃣ Analytics deep-dive: Being a SaaS platform, the startup had extensive analytics built into its product. We spent days analyzing usage patterns, funnels, and behavior flows (see the funnel sketch after this post). The data spoke louder than words in revealing where users spent most of their time and where drop-offs were most common.

3️⃣ Social media as a primary feedback channel: We also started monitoring public forums and review sites and tracking social media mentions. This unfiltered lens into users' many frustrations and occasional delights yielded a lot of useful insights.

4️⃣ Support tickets: This part was tedious, but the support tickets were a goldmine of information. By classifying and analyzing the nature of user concerns, we identified features that users found challenging or unintuitive.

5️⃣ Competitive analysis: And of course, we looked at the competitors. What were users saying about them? Which features or offerings were making users switch or consider alternatives?

6️⃣ Internal usability tests: While I couldn't talk to users directly, I organized usability tests internally. By simulating user scenarios and tasks, we identified the main friction points in the critical user journeys. Ideal? No. But definitely eye-opening for the entire team building the platform.

7️⃣ Listening in on sales demos: Last but not least, by attending sales demos as silent observers, we got to understand the questions potential customers asked, their concerns, and their initial reactions to the software.

Nothing can replace solid, well-organized user research. But through these alternative methods, we painted a more holistic picture of the end-to-end product experience without ever reaching out to users directly. These methods not only helped pinpoint the issues behind the low retention, but also yielded actionable recommendations for improvement.

→ And the result? A more refined, user-centric product that saw an uptick in retention, all without ruffling a single white glove 😉

#ux #uxr #startupchallenges #userretention
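As an illustration of the funnel analysis in point 2️⃣, here's a minimal sketch in Python. The step names and user counts are hypothetical stand-ins; in a real engagement these numbers would come from the product's analytics export (Mixpanel, Amplitude, or similar), not hand-typed data.

```python
# Minimal funnel drop-off sketch. Step names and user counts below are
# hypothetical; in practice they come from your analytics tool's export.
funnel_steps = [
    ("signed_up",        1000),
    ("created_project",   640),
    ("invited_teammate",  310),
    ("ran_first_report",  120),
]

top_of_funnel = funnel_steps[0][1]
prev_count = top_of_funnel
print(f"{'step':<20}{'users':>7}{'vs. top':>9}{'step drop':>11}")
for step, count in funnel_steps:
    conversion = count / top_of_funnel   # share of all sign-ups reaching this step
    drop_off = 1 - count / prev_count    # share lost since the previous step
    print(f"{step:<20}{count:>7}{conversion:>9.0%}{drop_off:>11.0%}")
    prev_count = count
```

Even a table this simple makes the conversation concrete: the biggest single-step loss (here, the hypothetical invite step) is usually where the usability-test scenarios in point 6️⃣ should start.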
User Experience Insights for SaaS Product Iteration
Explore top LinkedIn content from expert professionals.
Summary
User experience insights for SaaS product iteration focus on understanding how users interact with software-as-a-service (SaaS) platforms to consistently improve their functionality, usability, and overall value. By gathering data on user behavior, feedback, and preferences, teams can make informed decisions to create user-centered features that drive satisfaction and retention.
- Analyze user behavior: Use tools like analytics, usability tests, and behavior tracking to identify patterns, pinpoint friction points, and understand user needs more deeply.
- Collaborate across teams: Bring together departments like customer success, sales, and support to identify shared user pain points and improve the product based on collective insights.
- Test and prioritize: Conduct user research, test concepts with real users, and prioritize features that solve significant user challenges and align with business goals.
-
How do you figure out what truly matters to users when you’ve got a long list of features, benefits, or design options - but only a limited sample size and even less time?

A lot of UX researchers use Best-Worst Scaling (or MaxDiff) to tackle this. It’s a great method: simple for participants, easy to analyze, and far better than traditional rating scales. But when the research question goes beyond basic prioritization - like understanding user segments, handling optional features, factoring in pricing, or capturing uncertainty - MaxDiff starts to show its limits. That’s when more advanced methods come in, and they’re often more accessible than people think.

For example, Anchored MaxDiff adds a must-have vs. nice-to-have dimension that turns relative rankings into more actionable insights. Adaptive Choice-Based Conjoint goes further by learning what matters most to each respondent and adapting the questions accordingly - ideal when you're juggling 10+ attributes. Menu-Based Conjoint works especially well for products with flexible options or bundles, like SaaS platforms or modular hardware, helping you see what users are likely to select together.

If you suspect different mental models among your users, Latent Class Models can uncover hidden segments by clustering users based on their underlying choice patterns. TURF analysis is a lifesaver when you need to pick a few features that will have the widest reach across your audience, often used in roadmap planning. And if you're trying to account for how confident or honest people are in their responses, Bayesian Truth Serum adds a layer of statistical correction that can help de-bias sensitive data. Want to tie preferences to price? Gabor-Granger techniques and price-anchored conjoint models give you insight into willingness-to-pay without running a full pricing study.

These methods all work well with small-to-medium sample sizes, especially when paired with Hierarchical Bayes or latent class estimation, making them a perfect fit for fast-paced UX environments where stakes are high and clarity matters.
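To make the baseline concrete, here's a hedged sketch of the simplest MaxDiff analysis: count-based best-worst scores. It's a rough first pass (the post's point stands that Hierarchical Bayes or latent class estimation gets you much further); the feature names and task data are invented for illustration.

```python
from collections import defaultdict

# Count-based MaxDiff scoring. Each "task" shows a respondent a subset of
# items and records the one they picked as best and the one picked as worst.
# The items and responses below are hypothetical.
tasks = [
    {"shown": ["sso", "api", "audit_log", "themes"],  "best": "sso", "worst": "themes"},
    {"shown": ["api", "themes", "exports", "sso"],    "best": "api", "worst": "themes"},
    {"shown": ["audit_log", "exports", "sso", "api"], "best": "sso", "worst": "exports"},
]

best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
for task in tasks:
    for item in task["shown"]:
        shown[item] += 1
    best[task["best"]] += 1
    worst[task["worst"]] += 1

# Best-worst score in [-1, +1]: +1 = picked best every time it was shown,
# -1 = picked worst every time it was shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:<12}{score:+.2f}")
```

The same tabulated choice data is the input to the fancier estimators: Hierarchical Bayes fits individual-level utilities from it, and latent class models cluster respondents by their choice patterns instead of averaging them away.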
-
I walked into a $20M disaster and realized our real problem wasn’t the product. It was how we made decisions.

Our flagship SaaS tool had a 92% user rejection rate. Why? Because it was built on the instincts of the highest-paid person in the room, not user insight.

• PMs were just order-takers
• Feedback loops were broken
• Ego > Evidence

We didn’t just need better features. We needed a better system for thinking. So we borrowed a page from Ray Dalio. At Bridgewater, Dalio didn’t just build a company; he built a decision-making machine. We applied his core principles to rebuild our product org:

1. Truth > Comfort
We made radical transparency our default:
• Public mistake logs
• Multi-directional feedback
• Customers critiquing roadmaps live
One feature pivot saved us 3 months of wasted dev time.

2. Merit > Hierarchy
We used “believability-weighted decision-making”:
• Domain experts (not titles) held more voting power
• A junior PM’s dashboard idea beat the exec favorite
• It boosted adoption by 89% in 90 days

3. Systems > Stars
We built playbooks and decision journals to scale wisdom:
• Pre-mortems to test assumptions
• Post-mortems weighted by expertise
• A living product ops manual in Notion

The result?
• 3–5x faster innovation velocity
• 88% feature adoption (up from 29%)
• Cross-sell rate grew from 12% → 53%
• Exec override rate dropped from 67% → 9%

Lesson: The smartest teams don’t rely on brilliant individuals. They build systems that surface the best ideas, consistently. If you want a product org that thinks better, not just works harder, start by fixing how you decide. Because as I tell my team: “Truth flows > Title wins.”

Want to learn how we did it? Read the article linked below:
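For a sense of what "believability-weighted decision-making" can look like mechanically, here's a hedged sketch: votes count in proportion to each person's demonstrated track record in the relevant domain, not their title. The names, weights, and threshold are all hypothetical; how Bridgewater actually computes believability is considerably more involved.

```python
# Believability-weighted vote sketch. Weights reflect domain track record,
# not seniority; all names and numbers here are hypothetical.
voters = [
    # (who, believability weight in this domain, vote: 1 = ship, 0 = hold)
    ("junior_pm_dashboards", 0.9, 1),
    ("vp_product",           0.4, 0),
    ("staff_engineer",       0.7, 1),
    ("sales_lead",           0.3, 0),
]

weighted_yes = sum(w for _, w, vote in voters if vote == 1)
total_weight = sum(w for _, w, _ in voters)
support = weighted_yes / total_weight

print(f"believability-weighted support: {support:.0%}")
print("decision:", "ship" if support > 0.5 else "hold")
```

Note how the junior PM plus the staff engineer outweigh two more senior voters here: the same dynamic as the dashboard idea beating the exec favorite.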
-
Most SaaS teams are building features users will never adopt. The reason isn't bad engineering. It's bad prioritization.

Traditional feature prioritization follows this broken pattern: Executives want it → Competitors have it → Engineering can build it → Ship it. But what users actually need gets lost in the noise.

User-centered prioritization flips this completely. Instead of guessing what matters, you let user behavior and research drive every decision. Here's how it works:

↳ Start with user research to identify real pain points
↳ Test concepts with actual users before building anything
↳ Prioritize features that solve frequent, important user tasks
↳ Focus on what drives user satisfaction and business outcomes

The difference is dramatic. Companies using internal opinions to prioritize features see adoption rates around 12%. Those using user-centered prioritization consistently hit 40% or higher.

User-centered prioritization isn't just a method. It's a mindset shift.

↳ Instead of asking "What should we build next?" you ask "What problems are users struggling with today?"
↳ Instead of following competitor features, you follow user workflows.
↳ Instead of building what sounds impressive, you build what creates value.

This approach identifies the features that matter most before you waste engineering resources. It reduces development time by focusing on proven needs. It increases adoption because users actually want what you're building. Your roadmap should serve users first. Everything else follows from there.
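As a concrete (and deliberately simplified) version of "prioritize features that solve frequent, important user tasks," here's a hedged Python sketch. The features, scores, and exact formula are hypothetical; established frameworks like RICE follow the same basic value-over-effort shape.

```python
# Toy user-centered prioritization: rank features by how frequent and
# important the underlying user task is, discounted by build effort.
# All features and numbers below are hypothetical.
features = [
    # (name, task frequency 1-10, importance 1-10, effort in sprints)
    ("bulk_csv_import",   9, 8, 3),
    ("sso_integration",   6, 9, 5),
    ("dark_mode",         4, 3, 2),
    ("ai_summary_widget", 3, 5, 8),
]

ranked = sorted(features, key=lambda f: f[1] * f[2] / f[3], reverse=True)
for name, freq, imp, effort in ranked:
    print(f"{name:<20} score = {freq * imp / effort:5.1f}")
```

The point isn't the arithmetic; it's that frequency and importance come from user research and behavior data, not from whoever argues loudest in the roadmap meeting.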
-
Your UX research is lying to you. And no, I'm not talking about small data inconsistencies. I've seen founders blow $100K+ on product features their users "desperately wanted" only to face 0% adoption. Most research methods are fundamentally flawed because humans are terrible at predicting their own behavior.

Here's the TRUTH framework I've used to get accurate user insights:

T - Test with money, not words
• Never ask "would you use this?"
• Instead: "Here's a pre-order link for $50"
• Watch what they do, not what they say

R - Real environment observations
• Stop doing sterile lab tests
• Start shadowing users in their natural habitat
• Record their frustrations, not their feedback

U - Unscripted conversations
• Ditch your rigid question list
• Let users go off on tangents
• Their random rants reveal gold

T - Track behavior logs
• Implement analytics BEFORE research
• Compare what users say vs. what they do
• Look for patterns, not preferences

H - Hidden pain mining
• Users can't tell you their problems
• But they'll show you through workarounds
• Document their "hacks" - that's where innovation lives

STOP:
• Running bias-filled focus groups
• Asking leading questions
• Taking feedback at face value
• Rushing to build based on opinions

START:
• Following the TRUTH framework
• Measuring actions over words
• Building only what users prove they need

PS: Remember, Henry Ford said if he asked people what they wanted, they would have said "faster horses." Don't ask what they want. Watch what they do.

Follow me, John Balboa. I swear I'm friendly and I won't detach your components.
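The "Track behavior logs" step is straightforward to operationalize. Here's a hedged sketch of a say-do gap check: stated intent from surveys next to observed usage from analytics. The features, percentages, and the 30-point flag threshold are all hypothetical.

```python
# Say-do gap sketch: compare what users *said* (survey intent) with what
# they *did* (analytics usage). All numbers below are hypothetical.
stated = {    # share of surveyed users who said they'd use the feature
    "advanced_filters": 0.78,
    "slack_alerts":     0.55,
    "pdf_export":       0.61,
}
observed = {  # share of active users who actually used it within 30 days
    "advanced_filters": 0.12,
    "slack_alerts":     0.49,
    "pdf_export":       0.08,
}

for feature in stated:
    gap = stated[feature] - observed[feature]
    flag = "  <-- say-do gap" if gap > 0.30 else ""
    print(f"{feature:<18} said {stated[feature]:.0%}, did {observed[feature]:.0%}{flag}")
```

Features with a large gap are the ones to chase with the R and H steps: watch people in their real environment and look for the workarounds they've built instead.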