Netflix is making a bold move that's driving users crazy on social media. They completely redesigned their homepage for the first time in 12 years. Thumbnails are MUCH larger, forcing users to scroll more to see the same amount of content.

As the new look rolls out, Reddit is blowing up with complaints like, "You have to scroll endlessly to make it through what used to be a single screen's worth of content." The vast majority of posters say they hate the new look.

Is this Netflix's dumbest move since Qwikster? Nope. Netflix's extensive testing showed that more users actually PREFERRED the new design, despite the vocal backlash.

But here's the fascinating psychology: this is a textbook example of what behavioral economists call the "vocal minority bias" in action. The customers who hate change are always the loudest, while satisfied users stay silent.

A few more behavioral science principles in action here...

Loss Aversion Amplification: When interfaces change, our brains focus intensely on what we've "lost" (the familiar layout) rather than what we've gained (better information, useful features).

Status Quo Bias in Digital Behavior: We develop muscle memory with interfaces. Any disruption triggers cognitive resistance, even when the new system is objectively better.

The Feedback Loop Paradox: Companies that listen only to vocal complaints often optimize for the wrong metrics. The angriest customers aren't necessarily representative of user preferences.

Netflix's data showed that bigger thumbnails help users make better decisions faster by displaying more information upfront. But psychology predicts that many users experiencing change will focus on the friction, not the benefits.

The Business Lesson: Sometimes the best user experience decisions feel wrong in the short term. Netflix is trusting their testing data over social media sentiment. That takes serious courage from leadership.
Looking beyond the UX question, this shows the fundamental tension between what customers say they want and what actually improves their experience. In your business, are you optimizing for the vocal minority who resist change, or for the silent majority who adapt and benefit from improvements? What's your experience with major product changes? Have you found that initial resistance often fades once users adapt to superior functionality? (Or, maybe, have the users been right?😮) #BehavioralEconomics #CustomerExperience #BusinessPsychology #UserExperience
User Experience for Social Media Platforms
-
Brains aren’t calculators (they really aren’t). People compare, not score, so why do we keep asking for numbers when their minds work in stories and snapshots?

I used to rely heavily on rating questions in UX studies. You’ve probably used them too. Rate the ease of a task from 1 to 7, or indicate satisfaction on a scale from 1 to 10. These questions feel measurable and look neat in reports, but after running enough sessions, I started noticing a pattern. A participant would finish a task and pause when asked for a score. They’d hesitate, look unsure, and eventually say something like, “Maybe a six?” followed by, “I’m not really sure what that means.”

That hesitation is not about the experience itself. It’s about the format of the question. Most people do not evaluate their experiences using numbers. They judge by comparing, whether against other apps, past expectations, or familiar interactions.

When I started asking questions like “How did that compare to what you’re used to?” or “Was that easier or harder than expected?” the responses became clearer and more useful. Participants shared what stood out, what surprised them, and what felt better or worse. Their answers were grounded in real impressions, not guesses.

This shift from rating questions to comparison questions changed how I run research. Rating scales flatten experiences into abstract numbers. Comparison questions surface preference, context, and emotion. They help users express themselves in the way they naturally reflect on experiences. And they help researchers hear the parts of the experience that actually drive behavior.

There is strong support for this in cognitive science. Tversky’s Elimination by Aspects model shows that people decide by gradually filtering out options that lack something important. Prototype theory explains that we judge how well something matches our internal image of what “good” looks like. Both models show that people think in relative terms, not fixed scores.
Even heuristic evaluation in usability relies on comparing designs to expected norms and mental shortcuts, not isolated measurement. These models all point to the same idea: people understand and evaluate experiences through contrast. Asking them to rate something on a scale often hides what they really feel. Asking them to compare helps them express it.

I still use quantitative data when needed. It helps with tracking and reporting. But when I want to understand why something works or fails, I ask comparison questions. Because users don’t think in scores. They think in reference points, in expectations, and in choices. That is what we should be listening to.
-
How do you figure out what truly matters to users when you’ve got a long list of features, benefits, or design options - but only a limited sample size and even less time?

A lot of UX researchers use Best-Worst Scaling (or MaxDiff) to tackle this. It’s a great method: simple for participants, easy to analyze, and far better than traditional rating scales. But when the research question goes beyond basic prioritization - like understanding user segments, handling optional features, factoring in pricing, or capturing uncertainty - MaxDiff starts to show its limits. That’s when more advanced methods come in, and they’re often more accessible than people think.

For example, Anchored MaxDiff adds a must-have vs. nice-to-have dimension that turns relative rankings into more actionable insights. Adaptive Choice-Based Conjoint goes further by learning what matters most to each respondent and adapting the questions accordingly - ideal when you're juggling 10+ attributes. Menu-Based Conjoint works especially well for products with flexible options or bundles, like SaaS platforms or modular hardware, helping you see what users are likely to select together.

If you suspect different mental models among your users, Latent Class Models can uncover hidden segments by clustering users based on their underlying choice patterns. TURF analysis is a lifesaver when you need to pick a few features that will have the widest reach across your audience, often used in roadmap planning. And if you're trying to account for how confident or honest people are in their responses, Bayesian Truth Serum adds a layer of statistical correction that can help de-bias sensitive data. Want to tie preferences to price? Gabor-Granger techniques and price-anchored conjoint models give you insight into willingness-to-pay without running a full pricing study.
These methods all work well with small-to-medium sample sizes, especially when paired with Hierarchical Bayes or latent class estimation, making them a perfect fit for fast-paced UX environments where stakes are high and clarity matters.
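For teams new to these methods, the entry point is simpler than it looks: before reaching for Hierarchical Bayes, a basic MaxDiff study can be scored with plain counting. A minimal Python sketch (the choice tasks and feature names are hypothetical):

```python
from collections import Counter

def maxdiff_scores(tasks):
    """Count-based Best-Worst scores: (times best - times worst) / times shown."""
    best, worst, shown = Counter(), Counter(), Counter()
    for task in tasks:
        shown.update(task["shown"])  # items displayed in this choice task
        best[task["best"]] += 1      # item picked as most important
        worst[task["worst"]] += 1    # item picked as least important
    return {item: (best[item] - worst[item]) / n for item, n in shown.items()}

# Two hypothetical choice tasks from one respondent.
tasks = [
    {"shown": ["search", "export", "alerts", "themes"],
     "best": "search", "worst": "themes"},
    {"shown": ["search", "export", "alerts", "themes"],
     "best": "export", "worst": "themes"},
]
print(maxdiff_scores(tasks))  # "themes" scores -1.0: always chosen as worst
```

Count scores like these are only a first pass; Hierarchical Bayes or latent class estimation, as noted above, recover individual-level utilities from the same task data.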
-
Getting the right feedback will transform your job as a PM. More scalability, better user engagement, and growth. But most PMs don’t know how to do it right. Here’s the Feedback Engine I’ve used to ship highly engaging products at unicorns & large organizations:

The right feedback can literally transform your product and company. At Apollo, we launched a contact enrichment feature. Feedback showed users loved its accuracy, but they needed bulk processing. We shipped it and saw a 40% increase in user engagement. Here’s how to get it right:

𝗦𝘁𝗮𝗴𝗲 𝟭: 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸
Most PMs get this wrong. They collect feedback randomly, with no system or strategy. But remember: your output is only as good as your input, and if your input is messy, it will only lead you astray. Here’s how to collect feedback strategically:
→ Diversify your sources: customer interviews, support tickets, sales calls, social media & community forums, etc.
→ Be systematic: track feedback across channels consistently.
→ Close the loop: confirm your understanding with users to avoid misinterpretation.

𝗦𝘁𝗮𝗴𝗲 𝟮: 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
Analyzing feedback is like building the foundation of a skyscraper: if it’s shaky, your decisions will crumble. So don’t rush through it. Dive deep to identify patterns that will guide your actions in the right direction. Here’s how:
Aggregate feedback → pull data from all sources into one place.
Spot themes → look for recurring pain points, feature requests, or frustrations.
Quantify impact → how often does an issue occur?
Map risks → classify issues by severity and potential business impact.

𝗦𝘁𝗮𝗴𝗲 𝟯: 𝗔𝗰𝘁 𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀
Now comes the exciting part: turning insights into action. Execution here can make or break everything. Do it right, and you’ll ship features users love. Mess it up, and you’ll waste time, effort, and resources. Here’s how to execute effectively:
Prioritize ruthlessly → focus on high-impact, low-effort changes first.
Assign ownership → make sure every action has a responsible owner.
Set validation loops → build mechanisms to test and validate changes.
Stay agile → be ready to pivot if feedback reveals new priorities.

𝗦𝘁𝗮𝗴𝗲 𝟰: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗜𝗺𝗽𝗮𝗰𝘁
What can’t be measured can’t be improved. If your metrics don’t move, something went wrong: either the feedback was flawed, or your solution didn’t land. Here’s how to measure:
→ Set KPIs for success, like user engagement, adoption rates, or risk reduction.
→ Track metrics post-launch to catch issues early.
→ Iterate quickly and keep improving based on feedback.

In a nutshell, this creates a cycle that drives growth and reduces risk:
→ Collect feedback strategically.
→ Analyze it deeply for actionable insights.
→ Act on it with precision.
→ Measure its impact and iterate.

P.S. How do you collect and implement feedback?
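Stages 1 and 2 above can be sketched in a few lines: once feedback from every channel lands in one place with a theme and severity attached, spotting and ranking recurring themes is mechanical. A minimal Python sketch (the sources, themes, and severity scale are hypothetical):

```python
from collections import Counter, defaultdict

# Stage 1: feedback aggregated from several channels into one list.
feedback = [
    {"source": "support", "theme": "bulk processing", "severity": 3},
    {"source": "interview", "theme": "bulk processing", "severity": 2},
    {"source": "sales call", "theme": "export formats", "severity": 1},
]

# Stage 2: quantify impact (frequency) and map risk (worst reported severity).
frequency = Counter(item["theme"] for item in feedback)
severity = defaultdict(int)
for item in feedback:
    severity[item["theme"]] = max(severity[item["theme"]], item["severity"])

# Rank themes by how often they occur, breaking ties by severity,
# to feed prioritization in Stage 3.
ranked = sorted(frequency, key=lambda t: (frequency[t], severity[t]), reverse=True)
print(ranked)  # ['bulk processing', 'export formats']
```

The same tally, kept up to date per channel, also gives Stage 4 a baseline to measure against after a fix ships.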
-
I often see people who misinterpret social media as a community building tool. It can be used as such, but it's very tough to do. (And most people who think they are doing it right are just building another distribution outlet — which is great, but different from building a community.) It requires a slightly different approach than the average social strategy.

Social Platforms (like X & LinkedIn)
• Open networks
• Content dependent
• Great because people are usually spending lots of their time there
• Tough to stand out since you’re competing against the algorithm, other creators, brands, and everyone else in the feed

Community Platforms (like Discord, Slack, Circle)
• Usually closed networks
• Dependent on user engagement
• Great for consolidating your core group of members
• Very tough to maintain over time since you need people to come back to your specific group (even tougher if engagement is declining)

Ok, so how do you use social platforms to build an online community?

1/ Define your community
2/ Share it on your social accounts, in your bio, etc.
3/ Align your content around this community and what they love
4/ When you create your content, keep this specific community in mind
5/ Share updates publicly just like you would within a Discord channel
6/ Allocate a good chunk of time per day to community management
7/ Nurture your most engaged followers by supporting their content
8/ Make introductions directly in the feed wherever possible
9/ Use your platform to elevate others in your community
10/ Introduce group language that people can use

How do you know when you’re doing it right?
• People will use your account to discover others with similar interests
• People will use your language and phrases in their posts
• People will use the comments section of your posts like a forum
• People will host meetups or connect with one another IRL at events
• People will often tag you in content related to your community

In closing: yes, you can use social platforms like X & LinkedIn to build an online community. But it requires much more effort than just posting content about your brand or the problem you solve. You’ve got to constantly keep the community you’re serving top of mind, put in the time to nurture your members, and be consistent over a long period of time.
-
When I was head of growth, our team reached 40% activation rates and onboarded hundreds of thousands of new users. Without knowing it, we discovered a framework. Here are the 6 steps we followed.

1. Define value: Successful onboarding is typically judged by new user activation rates. But what is activation? The moment users receive value. Reaching it should lead to higher retention & conversion to paid plans. First define it. Then get new users there.

2. Deliver value, quickly: Revisit your flow and make sure it gets users to the activation moment fast. Remove unnecessary steps, complexity, and distractions along the way. Not sure how to start? Try reducing time (or steps) to activate by 50%.

3. Motivate users to action: Don't settle for simple. Look for sticking points in the user experience you can solve with microcopy, empty states, tours, email flows, etc. Then remind users what to do next with on-demand checklists, progress bars, & milestone celebrations.

4. Customize the experience: Ditch the one-size-fits-all approach. Learn about your different use cases. Then create different product "recipes" to help users achieve their specific goals.

5. Start in the middle: Solve for the biggest pain points stopping users from starting. Lean on customizable templates and pre-made playbooks to help people go 0-1 faster.

6. Build momentum pre-signup: Create ways for website visitors to start interacting with the product, and building momentum, before they fill out any forms. This means you'll deliver value sooner, and to more people.

Keep it simple. Learn what's valuable to users. Then deliver value on their terms.
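Step 1's "define it, then measure it" boils down to two numbers: the share of new users who reach the activation moment, and how long it takes them. A minimal Python sketch, with hypothetical event timestamps:

```python
from datetime import datetime
from statistics import median

# Hypothetical per-user records: signup time, and activation time if reached.
users = [
    {"signup": datetime(2024, 1, 1, 9, 0),  "activated": datetime(2024, 1, 1, 9, 12)},
    {"signup": datetime(2024, 1, 1, 9, 30), "activated": datetime(2024, 1, 1, 10, 0)},
    {"signup": datetime(2024, 1, 1, 10, 0), "activated": None},
    {"signup": datetime(2024, 1, 1, 11, 0), "activated": None},
    {"signup": datetime(2024, 1, 1, 12, 0), "activated": None},
]

activated = [u for u in users if u["activated"] is not None]
activation_rate = len(activated) / len(users)
median_minutes = median(
    (u["activated"] - u["signup"]).total_seconds() / 60 for u in activated
)
print(activation_rate, median_minutes)  # 0.4 21.0
```

Step 2's "reduce time to activate by 50%" then becomes a concrete target: halve that median.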
-
Accessibility should be seen as necessary, mandatory, and crucial. Here are 8 tips for Global Accessibility Awareness Day (GAAD).

Before I dive into these simple tips, let’s quickly learn about GAAD. The main purpose of GAAD is to get everyone talking, thinking, and learning about digital access and inclusion, and the 1 billion+ people with disabilities. GAAD is celebrated annually on the third Thursday of May, so this year it's on May 15th (today!).

A disabled person should be able to experience the internet, apps, social media, and all digital spaces like anyone else, but unfortunately, many websites and digital spaces are still inaccessible. So here are 8 easy tips for digital accessibility:

1. Color Contrast
Accessible content generally has high contrast between the background and text colors, which makes it easier to read. For example, using a black background with white text will be accessible for most people. There are exceptions to this guidance, as those with colorblindness and conditions like Irlen Syndrome may have other needs.

2. Closed Captions
When hosting virtual meetings, always provide closed captions. Also provide captions for content that you produce online. Please provide fully accurate captions instead of relying on automatically generated ones.

3. Image Descriptions (IDs)
Write IDs to help blind and low-vision people learn what an image looks like. This is especially important when an image conveys information, such as an event flyer. You can add IDs within a post or in the comments.

4. Audio Description (AD)
Audio description is helpful for those with vision disabilities. AD describes visual content in enough detail so that people don't miss out on information. Include AD in videos and verbally describe images in presentations.

5. Transcripts
Transcripts are wonderful for business because they allow you to improve your SEO rankings, since your audio or video content has been turned into words.
Transcripts also help make content accessible for the D/deaf and hard of hearing, those with other disabilities, and more.

6. Label Buttons
Unlabeled buttons on apps and websites create access issues. This is very important for screen reader users. Each user needs to be able to easily determine what a button does and also find the buttons.

7. Pascal Case Hashtags
Capitalize each word within a hashtag to ensure a screen reader can understand it. Example: #DisabilityAwareness

8. Include Diverse Images
Many times, disabled people don't see themselves represented in the world. This is especially true for disabled people of color. Use diverse images in media representation, advertisements, images on social media, and more.

Did you know about Global Accessibility Awareness Day? Will you use these tips?

cc: GAAD (Global Accessibility Awareness Day) Foundation

PS: For more accessibility tips, check out my free accessibility ebook (linked at the top of my profile)! #Accessibility #GAAD
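Tip 1's "high contrast" has a precise, testable definition in WCAG 2: the contrast ratio between two colors, computed from their relative luminance. A minimal Python sketch of the standard formula (level AA requires at least 4.5:1 for normal-size text):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """Ratio from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# White text on a black background: the maximum possible ratio.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```

Running candidate color pairs through a check like this (or any WCAG contrast checker) turns "high contrast" from a judgment call into a pass/fail test.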
-
The Psychology Behind Instagram's "Second Chance" Algorithm

Fascinating insight that's being overlooked in the Instagram carousel discussion: it's not just about getting a second chance, it's about understanding user behavior patterns and attention economics.

Here's a perspective shift... The "second chance" feature isn't just an algorithm trick; it's Instagram's acknowledgment that first impressions don't always stick in our digital age.

Think about it:
1. The First Scroll: Users are often in "discovery mode" - quick, surface-level scanning
2. The Second Appearance: Now they're in "consideration mode" - more likely to engage thoughtfully

What This Reveals About Human Behavior
- Our brains process information differently on repeat exposures
- Trust builds with familiarity
- Decision-making often requires multiple touchpoints

Strategic Implications for Businesses
- Front-load your carousel with intrigue, not just information
- Design for both quick-scan and deep-dive audiences
- Use the "second chance" as a psychological anchor point

While everyone's focused on gaming the algorithm, the true opportunity lies in understanding the psychology behind why this feature exists.

How do you approach content differently knowing your audience might see it twice? Does this change your storytelling strategy?

#DigitalStrategy #ContentPsychology #BusinessInsights #MarketingPsychology #LinkedInThoughts
-
Yesterday we had over 100 people sign up for Trigify.io, and out of those 100 we had 40% user activation. Here's how we redid our sign-up process:

1. 𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲𝗱 𝗮𝗻 𝗼𝗻𝗯𝗼𝗮𝗿𝗱𝗶𝗻𝗴 𝗳𝗼𝗿𝗺 𝗮𝘁 𝘁𝗵𝗲 𝘀𝘁𝗮𝗿𝘁 𝗼𝗳 𝘁𝗵𝗲 𝗹𝗼𝗴𝗶𝗻, 𝘄𝗶𝘁𝗵 𝗮 𝗹𝗶𝘁𝘁𝗹𝗲 𝗔𝗜 𝘁𝘄𝗶𝘀𝘁
→ We asked why they were at Trigify.io & what pain they were looking to fix.
→ Based on this, we used AI to route them to 1 of 10 different marketing 'onboarding' flows, where I've done over 20 different videos focused on educating & activating the user.

2. 𝗪𝗲 𝗶𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲𝗱 𝘁𝗵𝗲 𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝘁𝗼 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘁𝗼 Slack 𝗯𝗲𝗳𝗼𝗿𝗲 𝗹𝗼𝗴𝗴𝗶𝗻𝗴 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺
→ We wanted to create a hooked emotional state.
→ Tracking your own LinkedIn already carries a high emotional charge (social media has built that dopamine loop), so we wanted to tap into this.
→ When Trigify runs the sync and pulls in your posts (or whoever's you're tracking), it alerts you via Slack & email, bringing you back to the platform.

3. 𝗖𝗼𝗻𝗻𝗲𝗰𝘁 𝘀𝘁𝗿𝗮𝗶𝗴𝗵𝘁 𝘁𝗼 𝗼𝘂𝗿 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝘀𝘂𝗶𝘁𝗲
→ By placing this in the onboarding flow, we see 60% of users connect, and 40% then use it.
→ When you log into Trigify.io, you are already 29% complete. Seems an odd one, but studies have shown that if you are already partway through something, you are more likely to keep going.

4. 𝗜𝗻-𝗮𝗽𝗽 𝗻𝗼𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀
Using Knock, we've created a bell icon that has helped push people through the onboarding flow & create the loop cycle we are after.

-----

Watching the session replays back was amazing. Seeing someone connect 3 accounts and pull their engagement, pull 2,000 leads, get their emails, and export to Smartlead in under 10 minutes was epic.

We've failed a lot at PLG, but this seems like a step forward after months and months of steps back & hours spent watching PostHog!

Great usage and great features mean awesome results like the below 👇
-
User research is great, but what if you do not have the time or budget for it?

In an ideal world, you would test and validate every design decision. But that is not always the reality. Sometimes you do not have the time, access, or budget to run full research studies. So how do you bridge the gap between guessing and making informed decisions? These are some of my favorites:

1️⃣ Analyze drop-off points: Where users abandon a flow tells you a lot. Are they getting stuck on an input field? Hesitating at the payment step? Running into bugs? These patterns reveal key problem areas.

2️⃣ Identify high-friction areas: Where users spend the most time can be good or bad. If a simple action is taking too long, that might signal confusion or inefficiency in the flow.

3️⃣ Watch real user behavior: Tools like Hotjar | by Contentsquare or PostHog let you record user sessions and see how people actually interact with your product. This exposes where users struggle in real time.

4️⃣ Talk to customer support: They hear customer frustrations daily. What are the most common complaints? What issues keep coming up? This feedback is gold for improving UX.

5️⃣ Leverage account managers: They are constantly talking to customers and solving their pain points, often without looping in the product team. Ask them what they are hearing. They will gladly share everything.

6️⃣ Use survey data: A simple Google Forms, Typeform, or Tally survey can collect direct feedback on user experience and pain points.

7️⃣ Reference industry leaders: Look at existing apps or products with similar features to what you are designing. Use them as inspiration to simplify your design decisions. Many foundational patterns have already been solved; there is no need to reinvent the wheel.

I have used all of these methods throughout my career, but the trick is knowing when to use each one and when to push for proper user research. This comes with time. That said, not every feature or flow needs research.
Some areas of a product are so well understood that testing does not add much value. What unconventional methods have you used to gather user feedback outside of traditional testing? _______ 👋🏻 I’m Wyatt—designer turned founder, building in public & sharing what I learn. Follow for more content like this!
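Point 1️⃣ above (analyzing drop-off points) needs nothing more than the step counts your analytics tool already exports. A minimal Python sketch, with a hypothetical checkout funnel:

```python
# Hypothetical funnel: number of users reaching each step of a checkout flow.
funnel = [
    ("view_cart", 1000),
    ("enter_shipping", 620),
    ("enter_payment", 410),
    ("confirm", 380),
]

# Drop-off between consecutive steps: the fraction of users who abandon there.
drop_offs = {
    step: 1 - n_next / n
    for (step, n), (_, n_next) in zip(funnel, funnel[1:])
}

# The biggest drop flags which screen to watch first in session recordings.
worst_step = max(drop_offs, key=drop_offs.get)
print(worst_step, f"{drop_offs[worst_step]:.0%}")  # view_cart 38%
```

Pairing the worst step with session replays (point 3️⃣) usually reveals why users leave, not just where.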