Design Critique Methods for Feedback

Explore top LinkedIn content from expert professionals.

  • Keith Ferrazzi

    #1 NYT Bestselling Author | Keynote Speaker | Coach | Architecting the Future of Human-AI Collaboration

    57,724 followers

    Most team meetings are just report-outs dressed as collaboration. Someone walks through a 20-slide deck, a few people nod, a few multitask, and then the real feedback comes later via Slack messages, hallway conversations, or not at all. By the time the truth surfaces, it’s often too late to help.

    That’s why I’ve become such a champion of one of our most powerful High Return Practices: Stress Testing. Stress Testing is how world-class teams pressure-test big ideas before they hit the real world. It replaces “sit and listen” with “see something, say something” in a way that’s safe, structured, and supportive.

    Here’s how it works:

    Step 1: A team member presents their project in just one slide. What’s been achieved so far? Where are they struggling? What’s planned next?

    Step 2: The team’s job is to actively challenge that.

    Step 3: In groups of three, team members discuss: What challenges or risks do we see? What innovations or advice can we offer? What support can we give to help this succeed?

    Step 4: Feedback is documented in a shared space. Not anonymous, not vague, but actionable and respectful.

    Step 5: The presenter closes with one of three responses: Yes, I’ll act on this. No, here’s why not. Maybe, we need to explore it more.

    That simple follow-through keeps trust intact and ensures no one feels steamrolled. Stress Testing invites everyone into shared accountability and helps the whole team see blind spots before they become roadblocks. And the best part is it doesn’t take hours. You can run a full stress test in 20 minutes and walk away with more clarity, more momentum, and more ownership than most teams get in a week.

  • The feedback sandwich often misses the mark and can even backfire. Instead of creating clarity, it can muddy the message and feel insincere. Let's dive into why this approach doesn't work and explore a better way to give feedback with Radical Candor.

    ❌ What Not to Do: "Great job! But the presentation lacked details. Still, I appreciate your enthusiasm."

    ✅ What to Do Instead: Use CORE:
    🔸 Context: Cite the specific situation.
    🔸 Observation: Describe what was said or done.
    🔸 Result: Explain the consequence.
    🔸 Expected nExt stEps: Outline the expected next steps.

    Example of CORE Feedback: "I asked you to help us be more efficient (Context). You went above and beyond by implementing Slack (Observation). The team is now spending less time on email and more time communicating effectively (Result). We'd love for you to explore other tools to streamline communication in the office (Expected nExt stEps)."

    Giving feedback is crucial for growth, but it needs to be clear, kind, and actionable.

    Read more: https://bit.ly/3LhIzZ2

    #ManagementTips #RadicalCandor #Leadership #Feedback #COREMethod #EffectiveCommunication #GrowthMindset

  • Deborah Riegel

    Wharton, Columbia, and Duke B-School faculty; Harvard Business Review columnist; Keynote speaker; Workshop facilitator; Exec Coach; #1 bestselling author, "Go To Help: 31 Strategies to Offer, Ask for, and Accept Help"

    39,913 followers

    Have you ever felt that immediate internal bristle when someone gives you #feedback? That visceral "but, but, but..." response that bubbles up before you've even fully processed what they've said?

    I had one of those moments just last week. A client mentioned that my explanation of a leadership framework "went a bit into the weeds." My first thought? "But I was just being thorough!" (Complete with an internal eye roll that would make any teenager proud.)

    #Defensiveness is such a natural human response. Our brains are literally wired to protect our self-image — it's not a character flaw, it's neurobiology! (Thanks, brain.) But here's what I've learned from years of both giving and receiving difficult feedback: how we handle those defensive moments often determines whether we grow from feedback or just barely survive it.

    Here's my toolkit for when those defensive walls go up (and they will):

    1. Notice the feeling without jumping to action. When your chest tightens or your thoughts race toward justification, just label it: "This is defensiveness showing up." That tiny pause creates space between feeling and reacting.

    2. Remember that impact beats intent every time. My intentions for that workshop were excellent (thoroughness!), but if the impact was confusion, that's what matters. My good intentions don't erase someone else's experience.

    3. Reframe feedback as a catalyst for improvement and growth. The people who tell us uncomfortable truths are offering us something valuable. Sometimes the feedback that stings most contains the exact insight we need. (I have found that the truer the feedback is, the more it hurts.)

    4. Focus on specific behaviors rather than your identity. There's a world of difference between "that explanation was confusing" and "you're a confusing person." Separate the action from your sense of self.

    5. Give yourself permission to be imperfect. You're allowed to be a work in progress. (I know that I sure am.)

    Developing this #mindset transforms defensiveness from a threat to your worth into a normal part of your growth journey.

    What are your go-to strategies when defensiveness strikes? I'd love to hear what works for you. And yes, I'll shorten my explanation next time. The feedback that makes us squirm today often becomes the #wisdom we're grateful for tomorrow.

    #Professionaldevelopment #leadership #emotionalIntelligence #Feedbackculture

  • Josh Braun

    Struggling to book meetings? Getting ghosted? Want to sell without pushing, convincing, or begging? Read this profile.

    275,488 followers

    I heard Jason Fried, CEO at 37signals, give feedback to a designer once, and it blew me away. Here’s what he said:

    “I like how you designed the save icon. It makes it clear that the work is being saved.”

    “I wish there was a way to show when the saving is done.”

    “What if the icon turned green when it was finished saving?”

    Simple, right? But there’s some powerful psychology at work here.

    1. Affirmation first. Jason starts with what’s working. This creates a sense of safety and keeps the designer open to feedback. People are more likely to listen when they feel valued.

    2. Make it collaborative. Instead of saying, “This is wrong” or “You need to fix this,” Jason says, “I wish…” and “What if…” These phrases invite problem-solving rather than defensiveness.

    3. Be specific. The feedback isn’t vague, like “Make it better.” It’s actionable: “What if the icon turned green?” Clarity reduces friction and makes the next step obvious.

    This isn’t just about design. It’s about leadership. Sales. Relationships. People respond better to feedback when it’s thoughtful, collaborative, and clear.

    So next time you give feedback, try Jason’s approach: I like. I wish. What if.

  • Jess Cook

    Head of Marketing at Vector

    36,838 followers

    Raise your hand 🙋🏻♀️ if this has ever happened to you ⤵

    You put a piece of content in front of someone for approval. They say, “You should show this to Sally. She’d have thoughts on this.” So you show it to Sally. She not only has thoughts, but she also recommends you share the draft with Doug. Doug also has feedback, some of which aligns with Sally’s and some of which does not.

    Now you’re two days behind schedule, have conflicting feedback to parse through, and are wondering how you could have avoided this mess. Try this next time 👇

    In the planning phase of a project, put a doc together that outlines 3 levels of stakeholders:

    1) Your SMEs 🧠
    → Apply as much of their feedback as possible — they are as close a proxy to your audience as you can get.

    2) Your key approver(s) ✅
    → Keep this group small, 1–2 people if possible.
    → Weigh their feedback knowing that they are not necessarily an SME, but they do control whether or not the project moves forward.

    3) Your informed partners 🤝
    → Typically, those who will repurpose or promote your content in some way. (e.g. field marketing, comms, growth, etc.)
    → Make revisions based on their feedback at your discretion.
    → You may even want to frame the delivery of your draft as, "Here’s an update on how this is progressing. No action needed at this time."

    Share this doc with all listed stakeholders. Make sure they understand the level of feedback you’re expecting from them, and by when. Then use the doc to track feedback and approvals throughout the life of the project.

    Preventing your circle of approvers from becoming concentric:
    👍 keeps you on track
    👍 keeps your content from pleasing your stakeholders more than your audience

  • Melissa Milloway

    Designing Learning Experiences That Scale | Instructional Design, Learning Strategy & Innovation

    114,301 followers

    Back in 2017, my team had a simple but powerful ritual. We held "I have a design challenge" meetings, where someone would bring a project they were working on, and we’d workshop it together. These sessions weren’t just about fixing problems. They helped us grow our skills as a team and learn from each other’s perspectives.

    In 2024, I wanted to bring that same energy to learning designers looking to level up their skills in a fun and engaging way. This time, I turned to Tim Slade’s eLearning Challenges but took a different approach. Instead of just participating, we started doing live reviews of the challenge winners.

    How It Works
    One person drives the meeting, screensharing the challenge winner’s eLearning project while recording the session. We pause at each screen and ask two simple but high-impact questions:
    ✅ What worked well and why?
    ✅ What would you do differently and why?
    This sparks rich discussions on everything from instructional design and accessibility to visual design and interactivity. Everyone brings their unique expertise, turning the meeting into a collaborative learning experience.

    Want to Try It? Here’s What You Need
    ✔️ A web conferencing tool with recording capabilities
    ✔️ Adobe Premiere Pro or a transcript tool (optional, but helpful)
    ✔️ A generative AI tool like ChatGPT, Gemini, or Claude (optional, for extracting themes from discussions)

    After the session, we take the recording and import it into Adobe Premiere, which generates a transcript in seconds. Then, using GenAI, we pull key themes, quotes, and takeaways, turning raw discussions into actionable insights.

    Why This Works
    This approach takes learning from passive to interactive. You’re not just seeing best practices. You’re critically analyzing them with peers, learning through feedback, and refining your own instructional design instincts.

    Would you try this with your team? Have you tried something similar? What worked well?

    #InstructionalDesign #GenAI #LearningDesign #eLearning #AIinLearning #CourseDevelopment #DigitalLearning #IDStrategy #EdTech #eLearningDesign #LearningTechnology #InnovationInLearning #CustomerEducation
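    For the optional GenAI step, here is a minimal sketch of what theme extraction from a saved transcript might look like, assuming the OpenAI Python client; the model name, prompt wording, and transcript filename are assumptions rather than details from the post, and the same pattern works with Gemini or Claude:

    ```python
    # Hypothetical sketch of the theme-extraction step. The model name, prompt,
    # and transcript filename are assumptions, not details from the post.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    with open("review_session_transcript.txt") as f:  # hypothetical file
        transcript = f.read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You summarize design-critique transcripts for a learning design team."},
            {"role": "user",
             "content": "Extract the key themes, notable quotes, and actionable "
                        "takeaways from this transcript:\n\n" + transcript},
        ],
    )
    print(response.choices[0].message.content)
    ```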

  • Jeff White

    Improving Medtech software ➤ Advancing UX careers with storytelling @ uxstorytelling.io ➤ UX Consultant ➤ UX Designer & Educator

    49,432 followers

    This is how you lose a room of stakeholders when you present:

    “On this screen I used a list view, with a search box & filters on top.”

    Do this instead:

    “System admins struggle to find specific users so they can change their permissions. They’re in a rush and it’s a needle in a haystack: 50% of our customers have more than 10,000 users in their system. We’re going to keep missing our SUS score goal without a fix here. This issue brought our scores down below 80 for the first time in a year. This new search and filter design makes it twice as fast to narrow down—and the list view now shows enough data to confirm they’ve found the right user without clicking into the detail sheet.”

    Boom 🎉

    In the first example the designer describes the interface to us. In the second they tell a story.

    Stop describing UI and features to stakeholders. Start telling a story to help them understand why the design is good. For better products and better design reviews, focus on outcomes. Not outputs.

  • Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science

    10,323 followers

    When I was interviewing users during a study on a new product design focused on comfort, I started to notice some variation in the feedback. Some users seemed quite satisfied, describing it as comfortable and easy to use. Others were more reserved, mentioning small discomforts or saying it didn’t quite feel right. Nothing extreme, but clearly not a uniform experience either.

    Curious to see how this played out in the larger dataset, I checked the comfort ratings. At first, the average looked perfectly middle-of-the-road. If I had stopped there, I might have just concluded the product was fine for most people. But when I plotted the distribution, the pattern became clearer. Instead of a single, neat peak around the average, the scores were split. There were clusters at both the high and low ends. A good number of people liked it, and another group didn’t, but the average made it all look neutral.

    That distribution plot gave me a much clearer picture of what was happening. It wasn’t that people felt lukewarm about the design. It was that we had two sets of reactions balancing each other out statistically. And that distinction mattered a lot when it came to next steps. We realized we needed to understand who those two groups were, what expectations or preferences might be influencing their experience, and how we could make the product more inclusive of both.

    To dig deeper, I ended up using a mixture model to formally identify the subgroups in the data. It confirmed what we were seeing visually: the responses were likely coming from two different user populations. This kind of modeling is incredibly useful in UX, especially when your data suggests multiple experiences hidden within a single metric. It also matters because the statistical tests you choose depend heavily on your assumptions about the data. If you assume one unified population when there are actually two, your test results can be misleading, and you might miss important differences altogether.

    This is why checking the distribution is one of the most practical things you can do in UX research. Averages are helpful, but they can also hide important variability. When you visualize the data using a histogram or density plot, you start to see whether people are generally aligned in their experience or whether different patterns are emerging. You might find a long tail, a skew, or multiple peaks, all of which tell you something about how users are interacting with what you’ve designed.

    Most software can give you a basic histogram. If you’re using R or Python, you can generate one with just a line or two of code. The point is, before you report the average or jump into comparisons, take a moment to see the shape of your data. It helps you tell a more honest, more detailed story about what users are experiencing and why. And if the shape points to something more complex, like distinct user subgroups, methods like mixture modeling can give you a much more accurate and actionable analysis.
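    To make those two steps concrete, here is a minimal Python sketch: plot the distribution first, then fit a two-component mixture model. The ratings below are simulated to mimic the bimodal pattern described above; the study’s actual data and rating scale are not shown in the post.

    ```python
    # Minimal sketch: check the shape of rating data, then fit a mixture model.
    # The ratings are simulated to mimic the bimodal pattern described in the
    # post; they are an assumption, not the study's data.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)
    ratings = np.concatenate([
        rng.normal(2.5, 0.6, 120),   # one cluster of lower comfort scores
        rng.normal(5.5, 0.6, 110),   # one cluster of higher comfort scores
    ])

    print("Average rating:", ratings.mean())  # looks middle-of-the-road

    # Step 1: plot the distribution before trusting the average.
    plt.hist(ratings, bins=20)
    plt.xlabel("Comfort rating")
    plt.ylabel("Number of users")
    plt.title("Two peaks hiding behind one average")
    plt.show()

    # Step 2: fit a two-component Gaussian mixture to formalize the subgroups.
    gm = GaussianMixture(n_components=2, random_state=0).fit(ratings.reshape(-1, 1))
    print("Subgroup means:", gm.means_.ravel())
    print("Subgroup weights:", gm.weights_)
    ```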

  • Bahareh Jozranjbar, PhD

    UX Researcher @ Perceptual User Experience Lab | Human-AI Interaction Researcher @ University of Arkansas at Little Rock

    8,025 followers

    Survey data often ends up as static reports, but it doesn’t have to stop there. With the right tools, those responses can help us predict what users will do next and what changes will matter most. In recent years, predictive modeling has become one of the most exciting ways to extend the value of UX surveys. Whether you’re forecasting churn, identifying what actually drives your NPS score, or segmenting users into meaningful groups, these methods offer new levels of clarity.

    One technique I keep coming back to is key driver analysis using machine learning. Traditional regression models often struggle when survey variables are correlated. But newer approaches like Shapley value analysis are much better at estimating how each factor contributes to an outcome. It works by simulating all possible combinations of inputs, helping surface drivers that might be masked in a linear model. For example, instead of wondering whether UI clarity or response time matters more, you can get a clear ranked breakdown - and that turns into a sharper product roadmap.

    Another area that’s taken off is modeling behavior from survey feedback. You might train a model to predict churn based on dissatisfaction scores, or forecast which feature requests are likely to lead to higher engagement. Even a simple decision tree or logistic regression can identify risk signals early. This kind of modeling lets us treat feedback as a live input to product strategy rather than just a postmortem.

    Segmentation is another win. Using clustering algorithms like k-means or hierarchical clustering, we can go beyond generic personas and find real behavioral patterns - like users who rate the product moderately but are deeply engaged, or those who are new and struggling. These insights help teams build more tailored experiences.

    And the most exciting part for me is combining surveys with product analytics. When you pair someone’s satisfaction score with their actual usage behavior, the insights become much more powerful. It tells us when a complaint is just noise and when it’s a warning sign. And it can guide which users to reach out to before they walk away.
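    To make the key driver idea concrete, here is a hedged sketch of Shapley-value key driver analysis using the shap package with a random forest. The survey table, driver names, and "nps" outcome are synthetic assumptions for illustration, not data from the post.

    ```python
    # Sketch of ML-based key driver analysis with Shapley values. The survey
    # data, driver names, and "nps" outcome are synthetic and hypothetical.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    survey = pd.DataFrame({
        "ui_clarity":    rng.normal(5, 1.5, n),
        "response_time": rng.normal(5, 1.5, n),
        "onboarding":    rng.normal(5, 1.5, n),
    })
    # Simulate an outcome in which ui_clarity is the strongest driver.
    survey["nps"] = (2.0 * survey["ui_clarity"]
                     + 0.5 * survey["response_time"]
                     + rng.normal(0, 1, n))

    drivers = ["ui_clarity", "response_time", "onboarding"]
    X, y = survey[drivers], survey["nps"]
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Shapley values attribute each prediction across drivers, which holds up
    # better than linear coefficients when survey variables are correlated.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_drivers)

    ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=drivers)
    print(ranking.sort_values(ascending=False))  # ranked driver importance
    ```

    The same scaffolding extends to the segmentation point: swapping the regressor for sklearn’s KMeans on the driver columns surfaces behavioral clusters instead of driver rankings.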
