Understanding User Experience

Explore top LinkedIn content from expert professionals.

  • View profile for Leyla Acaroglu

    Sustainability & Circular Economy Change Maker, Designer, UNEP Champion of the Earth, Keynote Speaker, Systems Thinker, LinkedIn Instructor, Podcast Host. Founder DisruptDesign.co, UnSchools.co & CircularFutures.co.

    40,044 followers

    The design industry has a gender diversity issue. It is predominantly made up of men (the UK Design Council reports it's 78% male), and this throws up a bunch of issues, including the prevalence of #genderbias in the way physical products, AI, and user experiences are designed.

    "Shrink it and pink it" is the mantra used in product design, as articulated in this article in the Harvard Social Impact Review by Karen Korellis Reuther, the former Creative Executive at NIKE > https://lnkd.in/eH7AZY2Z

    This isn't just about the cliché 🙄 of making things more dainty and 'girly'; the lack of consideration of diversity in design can be deadly and costly:

    🚘 Women are 73% more likely to be injured in a car crash than men.
    👩‍🚒 Female firefighters experience a four times greater rate of injury than men, in part because of ill-fitting personal protective equipment.
    👿 Then there is the so-called 'pink tax', where products made for females are often more expensive, creating gender-based price discrimination.

    This BBC article identifies 8 different ways the world is not designed for women, from phones to office layouts: https://lnkd.in/e2eg_ET7. A lot more is detailed in Caroline Criado Perez's book "Invisible Women – Exposing Data Bias in a World Designed for Men", where she explores how persistent gender-blindness results in a 'one-size-fits-men' approach in many everyday products.

    Then there are the issues with gender bias in #AI:

    🗣 Most voice-activated assistants are given female names and voices, which reinforces harmful stereotypes (https://lnkd.in/eCFmsJYY)
    ❌ A UNESCO study revealed worrying tendencies in Large Language Models (LLMs) to produce gender bias, as well as homophobia and racial stereotyping (https://lnkd.in/ecrZfkVt)
    🔖 A study of 133 AI systems across different industries found that about 44% of them showed gender bias, with 25% exhibiting both gender and racial bias (https://lnkd.in/eheZ3gyk)

    Maartje van Proosdij's "The become average product line" playfully criticizes how everyday products unintentionally exclude groups of people and is well worth checking out! https://lnkd.in/exmR7s-i

    There needs to be more of a discussion right now about how to ensure that products and technology, especially AI, don't perpetuate harmful race- and gender-based stereotypes. One of the main issues is that the present lack of diversity in these industries feeds biases into the things that get made.

    #genderbias #productdesign #makechange #gender #bias #designissues #designequity #diversity #uxdesign

  • View profile for Brij kishore Pandey

    AI Architect | Strategist | Generative AI | Agentic AI

    690,000 followers

    Over the last year, I've seen many people fall into the same trap: they launch an AI-powered agent (chatbot, assistant, support tool, etc.) but only track surface-level KPIs, like response time or number of users.

    That's not enough. To create AI systems that actually deliver value, we need holistic, human-centric metrics that reflect:
    • User trust
    • Task success
    • Business impact
    • Experience quality

    This infographic highlights 15 essential dimensions to consider:
    ↳ Response Accuracy: Are your AI answers actually useful and correct?
    ↳ Task Completion Rate: Can the agent complete full workflows, not just answer trivia?
    ↳ Latency: Response speed still matters, especially in production.
    ↳ User Engagement: How often are users returning or interacting meaningfully?
    ↳ Success Rate: Did the user achieve their goal? This is your north star.
    ↳ Error Rate: Irrelevant or wrong responses? That's friction.
    ↳ Session Duration: Longer isn't always better; it depends on the goal.
    ↳ User Retention: Are users coming back after the first experience?
    ↳ Cost per Interaction: Especially critical at scale. Budget-wise agents win.
    ↳ Conversation Depth: Can the agent handle follow-ups and multi-turn dialogue?
    ↳ User Satisfaction Score: Feedback from actual users is gold.
    ↳ Contextual Understanding: Can your AI remember and refer to earlier inputs?
    ↳ Scalability: Can it handle volume without degrading performance?
    ↳ Knowledge Retrieval Efficiency: This is key for RAG-based agents.
    ↳ Adaptability Score: Is your AI learning and improving over time?

    If you're building or managing AI agents, bookmark this. Whether it's a support bot, a GenAI assistant, or a multi-agent system, these are the metrics that will shape real-world success.

    Did I miss any critical ones you use in your projects? Let's make this list even stronger: drop your thoughts 👇
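
    As a rough illustration, here is what computing a handful of these dimensions from raw session logs might look like. The `SessionLog` shape and its field names are assumptions made for this sketch, not a standard schema:

    ```typescript
    // Hypothetical record for one agent conversation; field names are
    // assumptions for this sketch, not a standard schema.
    interface SessionLog {
      goalCompleted: boolean;   // task success: did the user achieve their goal?
      turns: number;            // conversation depth
      latenciesMs: number[];    // per-response latency
      hadBadResponse: boolean;  // an irrelevant or wrong answer was observed
      satisfaction?: number;    // optional 1-5 user rating
      costUsd: number;          // model + infra cost for the session
    }

    // Aggregate a few of the fifteen dimensions from raw session logs.
    function summarize(logs: SessionLog[]) {
      const n = logs.length;
      const latencies = logs.flatMap(l => l.latenciesMs);
      const rated = logs.filter(l => l.satisfaction !== undefined);
      return {
        taskCompletionRate: logs.filter(l => l.goalCompleted).length / n,
        errorRate: logs.filter(l => l.hadBadResponse).length / n,
        meanLatencyMs: latencies.reduce((a, b) => a + b, 0) / latencies.length,
        avgConversationDepth: logs.reduce((a, l) => a + l.turns, 0) / n,
        costPerInteraction: logs.reduce((a, l) => a + l.costUsd, 0) / n,
        avgSatisfaction: rated.reduce((a, l) => a + (l.satisfaction ?? 0), 0) / rated.length,
      };
    }

    // Example:
    // summarize([{ goalCompleted: true, turns: 3, latenciesMs: [900, 750],
    //              hadBadResponse: false, satisfaction: 4, costUsd: 0.02 }]);
    ```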

  • View profile for Tyler White

    Senior Product Designer | Teaching the ROI of Design | Helping SaaS teams turn UX clarity into revenue growth

    4,941 followers

    Design is done. Everyone claps. The Figma file gets dropped in Slack. "Ready for dev."

    But no one asked what happens when the user clicks the same button five times in a row. Or what happens when the API fails. Or how the component scales on mobile.

    Then engineering opens it and starts guessing. This is where products break. And budgets bleed.

    A weak handoff process is more expensive than most teams realize. And no, tagging devs in Figma comments at 5:58 PM is not a handoff process.

    Let's say one unclear spec adds 2 extra dev days per developer every sprint. If you have a team of 4 developers running biweekly cycles, and each day costs $800 per developer, that is $6,400 per sprint just cleaning up avoidable messes. Multiply that over a year of 26 sprints, and you are leaking over $160,000 in pure waste. All because someone did not annotate the edge case.

    And that is just the cost of time. What about morale? What about product velocity? What about the trust between design and engineering? The vibes, gone.

    Design is not finished when the frame looks good. It is finished when it works in code. That means aligning on interactions. Writing copy that accounts for logic. Annotating every "what happens if they click this twice in a row" scenario. Reviewing builds before handoff is even complete.

    Clear specs. Clear logic. Clear outcomes. Fewer late-night DMs. That is the ROI of design.

    #uxdesign #designhandoff #productdesign #uxstrategy #designops #growthdesign #b2bsaas #roiofdesign
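
    The post's back-of-envelope math, as a tiny script you can adapt; every number below is the post's own illustrative assumption:

    ```typescript
    // Back-of-envelope cost of unclear specs, using the post's numbers.
    const devs = 4;
    const extraDaysPerDevPerSprint = 2;  // rework caused by one unclear spec
    const costPerDevDayUsd = 800;
    const sprintsPerYear = 26;           // biweekly cycles

    const wastePerSprint = devs * extraDaysPerDevPerSprint * costPerDevDayUsd;
    const wastePerYear = wastePerSprint * sprintsPerYear;

    console.log(`Per sprint: $${wastePerSprint}`); // 6400
    console.log(`Per year:   $${wastePerYear}`);   // 166400
    ```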

  • View profile for Aditya Vivek Thota

    Senior Software Engineer | React, TypeScript, JavaScript (ES6+), Next.js, SvelteKit, Node.js, Python, Applied AI, UX Design, Agentic Workflows

    54,520 followers

    Most frontend engineers (including myself) and UX designers never formally study web application security. And it shows in the designs and implementations we see every day. I want to change that.

    Let me give you a simple, common example: a login form. You typically have two fields: username and password. Now, what happens when you enter the wrong username? In many designs, the UI throws a helpful error like:
    - The username does not exist
    - Invalid username

    Great for UX, right? It clearly tells the user what's wrong so they can fix it. But that "helpful" message has a hidden cost. What looks like good UX is also a potential attack vector.

    Think about it. If you're a hacker trying to find valid usernames, this kind of feedback is gold. You can brute-force the login form just to identify real usernames. It gets worse when usernames are email addresses, which is super common.

    Let's say you're targeting a company. You visit the login page of their enterprise product and start guessing email IDs. Try firstname.lastname@company.com, and the system tells you: username not found. Now tweak it a bit. Try again. Suddenly, the error changes to: incorrect password. Boom! You just discovered a valid email ID. No need to breach a database or wait for a leak. The app told you everything you needed to know.

    With this approach, you can reverse-engineer the company's email pattern, search LinkedIn for employees, and instantly generate a full list of valid email addresses. All thanks to a login form.

    This is a basic UX scenario, but one that can be easily exploited. The fix? Simple: use a generic error message like "Invalid username or password." Sure, it slightly degrades the UX. But the security tradeoff is well worth it.

    Now think: how many apps out there still leak this kind of info through error messages? How many engineers or designers consider this angle? If you've used some legacy or government platforms, chances are you've seen exactly what I'm talking about.

    I recently started reading a book on web application security, and it's been eye-opening. This example was shared in the book, and it really got me thinking about the idea of "secure UX design." We need more people to start exploring this side of the craft. Not just building beautiful interfaces, but building secure ones.
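
    A minimal sketch of what the fix can look like in code. `findUser`, `verifyPassword`, and `DUMMY_HASH` are hypothetical stand-ins for your own user store and password hashing; the pattern, not the names, is the point:

    ```typescript
    // Hypothetical stand-ins for a user store and password hashing.
    declare function findUser(username: string): Promise<{ passwordHash: string } | null>;
    declare function verifyPassword(password: string, hash: string): Promise<boolean>;
    declare const DUMMY_HASH: string; // hash of a throwaway password

    async function login(
      username: string,
      password: string
    ): Promise<{ ok: boolean; error?: string }> {
      const user = await findUser(username);

      let valid = false;
      if (user) {
        valid = await verifyPassword(password, user.passwordHash);
      } else {
        // Burn the same hashing time for unknown users, so response timing
        // doesn't reveal whether the username exists.
        await verifyPassword(password, DUMMY_HASH);
      }

      if (!valid) {
        // One generic message for both failure modes: no username enumeration.
        return { ok: false, error: "Invalid username or password." };
      }
      return { ok: true };
    }
    ```

    Note the dummy verification for unknown users: even with identical error text, a noticeably faster "user not found" path can leak the same information through response timing.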

  • View profile for Vitaly Friedman
    216,451 followers

    Design For Real Life (https://dfrlbook.com) is a wonderful free book with practical insights on how to test where your designs might fail before you ship, check new features or interactions against more realistic scenarios, and build a business case for making decisions through a lens of kindness. Written and made available by Eric Meyer and Sara Wachter-Boettcher.

    When we are designing digital products, too often we fall back to "regular" use cases, with "average" users and "normal" conditions. Things that don't seem to quite fit that context are often perceived as edge cases and require separate attention, often reviewed closer to delivery time or after release.

    Yet usually people who spend time with a digital product don't give it their full attention. Their attention is sparse and fragmented. They often use other applications and tools that assist them in their work. They might be puzzling pieces of data together to make sense of a situation. And most notably: they might be in very stressful situations that require undivided focus and attention they can't give. Not all tasks are critical and urgent, but some are, and there the "normal" condition might look very different and much more constrained than what we account for.

    One thing we try to include early is conversations about stress cases. We need to understand when and how our digital product is used and how it performs under suboptimal conditions: from people being late to work with a crying baby in their arms, to severe headaches, urgency, lack of safety, crisis, emergency. And I love that the book emphasizes this so well!

    People aren't edge cases. We are all just different, with challenges and contexts that vary significantly, and a good application has to accommodate these conditions well: making it easy to get things done, but also making it very difficult to make mistakes, in real life and for real people.

    Useful resources:

    Designing For Stressed Out Users, by H Locke
    Part 1: https://lnkd.in/ew_65Km4
    Part 2: https://lnkd.in/eV4Cjmha
    Part 3: https://lnkd.in/eKTRGx8Q

    Designing For Stressed-Out Users, by Robin Camille Davis
    https://lnkd.in/eHA8BB6b

    Designing For Stress (Podcast), by Katie Swindler
    https://lnkd.in/e3jkPr8K

    Designing For Safety (Podcast), by Eva PenzeyMoog
    https://lnkd.in/eSKm56fX

    Life and Death Design (Book), by Katie Swindler
    https://lnkd.in/eUwQGQXV

    Fire Drills: Communications Strategy in a Crisis, by Mandy Brown
    https://lnkd.in/e2d_MqEg

    #ux #design

  • View profile for Matt Wood

    CTIO, PwC

    75,346 followers

    EVAL field note (2 of 3): Finding the benchmarks that matter for your own use cases is one of the biggest contributors to AI success. Let's dive in.

    AI adoption hinges on two foundational pillars: quality and trust. Like the dual nature of a superhero, quality and trust play distinct but interconnected roles in ensuring the success of AI systems. This duality underscores the importance of rigorous evaluation. Benchmarks, whether automated or human-centric, are the tools that allow us to measure and enhance quality while systematically building trust. By identifying the benchmarks that matter for your specific use case, you can ensure your AI system not only performs at its peak but also inspires confidence in its users.

    🦸‍♂️ Quality is the superpower (think Superman): able to deliver remarkable feats like reasoning and understanding across modalities to deliver innovative capabilities. Evaluating quality involves tools like controllability frameworks to ensure predictable behavior, performance metrics to set clear expectations, and methods like automated benchmarks and human evaluations to measure capabilities. Techniques such as red-teaming further stress-test the system to identify blind spots.

    👓 But trust is the alter ego (Clark Kent): the steady, dependable force that puts the superpower into the right place at the right time and ensures those powers are used wisely and responsibly. Building trust requires measures that ensure systems are helpful (meeting user needs), harmless (avoiding unintended harm), and fair (mitigating bias). Transparency through explainability and robust verification processes further solidifies user confidence by revealing where a system excels, and where it isn't ready yet.

    For AI systems, one cannot thrive without the other. A system with exceptional quality but no trust risks indifference or rejection: a collective "shrug" from your users. Conversely, all the trust in the world without quality reduces the potential to deliver real value.

    To ensure success, prioritize benchmarks that align with your use case, continuously measure both quality and trust, and adapt your evaluation as your system evolves. You can get started today: map use case requirements to benchmark types, identify critical metrics (accuracy, latency, bias), set minimum performance thresholds (aka exit criteria), and choose complementary benchmarks (for better coverage of failure modes, and to avoid over-fitting to a single number).

    By doing so, you can build AI systems that not only perform but also earn the trust of their users, unlocking long-term value.
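
    As one way to make "exit criteria" concrete, here is a small sketch of a release gate over benchmark results. The metric names and threshold values are illustrative assumptions, not recommendations:

    ```typescript
    // Hypothetical exit criteria for one use case; thresholds are illustrative.
    const exitCriteria = {
      accuracy: { min: 0.92 },      // task-level answer accuracy
      p95LatencyMs: { max: 1200 },  // responsiveness under load
      biasGap: { max: 0.05 },       // max metric gap across user segments
    };

    type Results = { accuracy: number; p95LatencyMs: number; biasGap: number };

    // Gate a release: every benchmark must clear its threshold.
    function passesExitCriteria(r: Results): boolean {
      return (
        r.accuracy >= exitCriteria.accuracy.min &&
        r.p95LatencyMs <= exitCriteria.p95LatencyMs.max &&
        r.biasGap <= exitCriteria.biasGap.max
      );
    }

    console.log(passesExitCriteria({ accuracy: 0.94, p95LatencyMs: 900, biasGap: 0.03 })); // true
    ```

    Using several complementary thresholds, rather than one headline number, is what keeps a system from over-fitting to a single benchmark.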

  • View profile for Matt Kerbel

    LinkedIn Top Voice | WSJ Marketer To Watch | Marketing leader, disruptor, and advisor | Building the world’s most loved car sharing marketplace @Turo 🚘

    54,860 followers

    One of the most underrated elements in business? Lived experience.

    There's a reason why universities ask for essays. There's more to the human than the grades on paper. For example, perhaps someone only had a B+ average in high school. What would be helpful to know is that only 5% of that school's students make it to college each year, the applicant would be the first in their family to attend college, and they had to walk 4 miles each way to and from school daily while working a job at night. In that situation, a B+ is a phenomenal result. Lived experience and context matter immensely.

    Whitney Wolfe Herd is the founder and CEO of Bumble Inc., one of the world's preeminent dating and connection platforms. Whitney co-founded Tinder but left after alleging sexual harassment and discrimination by a co-founder.

    When asked how the idea for Bumble sparked after Tinder, Whitney didn't say she saw a gap in the market. She didn't say it was because of feminism. The reason she gave is that she and her girlfriends often wanted to make the first move with a guy but, because of societal norms at the time, felt it would be looked upon poorly if they texted first, approached a guy before he approached them, asked for a guy's number, and so on. Because of lived experience, she knew that the stereotype was real: one of "desperation" when in fact they were just going for what they wanted. Experience that only a woman in that life phase could appreciate.

    In other words: no man would have founded Bumble. It just wouldn't have happened, because of a lack of lived experience. And in 2021, at age 31, Whitney became the youngest female CEO to take a company public.

    What are the implications of this?
    ➜ Lived experience should exist on resumes (or applicants need to find a way to provide this context)
    ➜ Teams who want to get the best out of new tools like AI need to consider the lived experience of the individual/team writing the prompts
    ➜ The strongest prospects of success come from having people on our teams who have the lived experience of those we're trying to reach, not just doing research

    When Whitney took Bumble public, she'd had 10 or so years of experience in tech and entrepreneurship. But, more importantly, she'd had 31 years growing up as a woman in America.

    And my personal lived experience is the story of why I went into psychology, marketing, and business. But that's a story for another day 😉

    Does this resonate with you? What do you think?

  • View profile for Rohit V.

    Group Product Manager @ Angel One | Ex-Flipkart, Cleartrip, Paytm | 🎓 IIM Bangalore

    9,975 followers

    Have you ever wondered what those grey skeletons in an app's UI are?

    One common UI/UX strategy we encounter is the use of skeleton screens: the gray placeholders that appear before content fully loads. But what exactly is happening behind the scenes, and why is it important to manage this experience thoughtfully?

    🧩 What Are Skeleton Screens?
    Skeleton screens are visual placeholders designed to indicate that content is loading. Instead of showing a blank screen or a static loading spinner, they provide users with a visual representation of where content will eventually appear: think gray rectangles for text or circles for profile pictures. From a user's perspective, this improves perceived performance: they feel the app is doing something, even if the data isn't immediately available.

    ⚙️ Why Do They Appear?
    Prolonged skeleton screens can result from:
    → Network Delays: Slow or unreliable internet connections.
    → Backend/API Bottlenecks: Data not being fetched in time due to server overloads or failures.
    → Frontend Rendering Delays: Issues in replacing placeholders with real content, possibly due to bugs or heavy processing on the client side.

    🎯 Why Is It Important for PMs to Understand?
    Skeleton screens are more than just a "loading state"; they set user expectations and influence trust in your product. However, prolonged or improperly handled skeleton loading can frustrate users, leading to drop-offs or negative feedback. Here's how to approach this as a PM:
    🟢 Collaborate with engineering to optimize API performance.
    🟢 Ensure fallback content is displayed in case of errors.
    🟢 Use tools to monitor metrics like API response times and app loading times.
    🟢 Measure the impact of prolonged skeleton loading on user engagement and retention.
    🟢 If delays persist, consider offering messaging like "Still loading, thanks for your patience!" to keep users informed rather than leaving them guessing.
    🟢 Collaborate with designers to make skeleton screens more engaging, e.g., adding animations or better visual cues.

    It's our job as PMs to ensure that skeleton screens don't just "mask" a delay; they should bridge the gap between user expectations and product delivery.

    Can you guess which app is in the screenshot? :)

    #ProductManagement #UXDesign #Design #ProductDesign #SkeletonScreens #UserExperience
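
    On the engineering side, here is a minimal sketch of how a skeleton state with a patience message might be wired up in React. `fetchProfile`, the CSS class names, and the 3-second threshold are assumptions for illustration:

    ```tsx
    import { useEffect, useState } from "react";

    // Hypothetical data fetcher; a real app would supply its own.
    declare function fetchProfile(id: string): Promise<{ name: string }>;

    // Shows skeleton placeholders while loading, and upgrades to a
    // "still loading" message if the wait exceeds 3 seconds.
    function ProfileCard({ userId }: { userId: string }) {
      const [profile, setProfile] = useState<{ name: string } | null>(null);
      const [slow, setSlow] = useState(false);

      useEffect(() => {
        const timer = setTimeout(() => setSlow(true), 3000); // prolonged-load threshold
        fetchProfile(userId)
          .then(setProfile) // error fallback omitted for brevity
          .finally(() => clearTimeout(timer));
        return () => clearTimeout(timer);
      }, [userId]);

      if (!profile) {
        return (
          <div>
            <div className="skeleton skeleton-avatar" /> {/* gray circle */}
            <div className="skeleton skeleton-text" />   {/* gray bar */}
            {slow && <p>Still loading, thanks for your patience!</p>}
          </div>
        );
      }
      return <h2>{profile.name}</h2>;
    }
    ```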

  • View profile for Oliver King

    Founder & Investor | AI Operations for Financial Services

    5,021 followers

    Why would your users distrust flawless systems?

    Recent data shows 40% of leaders identify explainability as a major GenAI adoption risk, yet only 17% are actually addressing it. This gap determines whether humans accept or override AI-driven insights.

    As founders building AI-powered solutions, we face a counterintuitive truth: technically superior models often deliver worse business outcomes because skeptical users simply ignore them. The most successful implementations reveal that interpretability isn't about exposing mathematical gradients; it's about delivering stakeholder-specific narratives that build confidence.

    Three practical strategies separate winning AI products from those gathering dust:

    1️⃣ Progressive disclosure layers
    Different stakeholders need different explanations. Your dashboard should let users drill from plain-language assessments to increasingly technical evidence.

    2️⃣ Simulatability tests
    Can your users predict what your system will do next in familiar scenarios? When users can anticipate AI behavior with >80% accuracy, trust metrics improve dramatically. Run regular "prediction exercises" with early users to identify where your system's logic feels alien.

    3️⃣ Auditable memory systems
    Every autonomous step should log its chain of thought in domain language. These records serve multiple purposes: incident investigation, training data, and regulatory compliance. They become invaluable when problems occur, providing immediate visibility into decision paths.

    For early-stage companies, these trust-building mechanisms are more than luxuries. They accelerate adoption. When selling to enterprises or regulated industries, they're table stakes. The fastest-growing AI companies don't just build better algorithms; they build better trust interfaces. While resources may be constrained, embedding these principles early costs far less than retrofitting them after hitting an adoption ceiling. Small teams can implement "minimum viable trust" versions of these strategies with focused effort.

    Building AI products is fundamentally about creating trust interfaces, not just algorithmic performance.

    #startups #founders #growth #ai
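
    A minimal sketch of what an auditable memory record could look like in practice; the `DecisionRecord` fields and the example values are assumptions for illustration:

    ```typescript
    // Hypothetical structured record for one autonomous step.
    interface DecisionRecord {
      timestamp: string;
      step: string;           // e.g. "classify-claim"
      inputsSummary: string;  // domain-language summary of what was considered
      reasoning: string;      // plain-language chain of thought for the decision
      action: string;         // what the system actually did
      confidence: number;     // 0..1
    }

    class AuditLog {
      private records: DecisionRecord[] = [];

      record(entry: Omit<DecisionRecord, "timestamp">): void {
        this.records.push({ timestamp: new Date().toISOString(), ...entry });
      }

      // Replay the decision path, e.g. during an incident investigation.
      trace(step?: string): DecisionRecord[] {
        return step ? this.records.filter(r => r.step === step) : [...this.records];
      }
    }

    const log = new AuditLog();
    log.record({
      step: "classify-claim",
      inputsSummary: "Auto claim #123: low damage estimate, clean history",
      reasoning: "Damage below fast-track threshold and no fraud flags raised",
      action: "routed to fast-track approval",
      confidence: 0.93,
    });
    console.log(log.trace("classify-claim"));
    ```

    Because the records are written in domain language rather than model internals, the same log can serve support staff, auditors, and future training runs.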

  • View profile for Dr. Isil Berkun

    Applying AI for Industry Intelligence | Stanford LEAD Finalist | Founder of DigiFab AI | 300K+ Learners | Former Intel AI Engineer | Polymath

    18,500 followers

    LLMs in production:

    1️⃣ Lesson 1: Hallucinations Aren't a Bug, They're a Design Challenge
    We built our PCB troubleshooting system with LangChain + AutoGen, but the game-changer? Adding hallucination detection with deterministic fallbacks. The rule: when the model isn't confident, fall back to known-good answers. Result: trust went from 40% to 90%.

    2️⃣ Lesson 2: RAG Speed > RAG Size
    Everyone obsesses over vector database choice. We used Pinecone + OpenAI embeddings with smart caching. The real win? Prompt templates and knowing when to retrieve vs. when to reason.
    → 60% faster time-to-answer
    → Lower costs
    → Happier users

    3️⃣ Lesson 3: Generic Models = Generic Results
    Here's the uncomfortable truth: out-of-the-box ChatGPT won't solve your specific problem. Fine-tuning with LoRA/PEFT on YOUR domain data is how you build a competitive moat. We use HuggingFace Transformers with mixed precision. The model learns your language, your context, your edge cases.

    4️⃣ Lesson 4: Not Everything Belongs in the Cloud
    For NASA SBIR pre-work, we run real-time sensor fusion on Jetson Orin at the edge. Privacy-sensitive? Keep it local. Need to learn? Send insights to the cloud. Need to respond instantly? Edge wins. The pattern: edge-first for speed and privacy, cloud loops for continuous learning.

    5️⃣ Lesson 5: Prompting IS Engineering
    Stop treating prompts like magic spells. Start treating them like code.
    → Role prompting for context
    → Few-shot examples for patterns
    → Chain-of-thought for reasoning
    → Temperature tuning for determinism
    They're your production reliability toolkit.

    6️⃣ Lesson 6: Demos Impress, Architecture Delivers
    I've designed systems from concept to delivery, reviewed code across full stacks, and shipped AI to production. Here's what separates working once from working always:
    → Systematic error handling
    → Monitoring and observability
    → Graceful degradation
    → Human-in-the-loop when needed

    The Bottom Line:
    - Building AI that works in a demo takes weeks.
    - Building AI that works in production takes discipline.
    - Building AI that users trust takes both.

    Which lesson hit home for you? And what's your biggest challenge putting LLMs into production? Drop a comment; I try to respond to every single one.

    P.S. Follow for more lessons from the intersection of AI and manufacturing.
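
    A minimal sketch of Lesson 1's confidence-gated fallback, assuming the system exposes some confidence signal (e.g., a self-evaluation score or calibrated log-probabilities). The threshold, the answer table, and the helper names are illustrative assumptions, not the post's actual implementation:

    ```typescript
    // A generated answer plus some confidence signal in [0, 1].
    interface ModelAnswer { text: string; confidence: number }

    const CONFIDENCE_THRESHOLD = 0.8; // tuned per deployment; illustrative here

    // Known-good answers curated by domain experts, keyed by issue type.
    const knownGoodAnswers: Record<string, string> = {
      "short-circuit": "Power down the board and inspect solder bridges near the regulator.",
    };

    function answerWithFallback(issueType: string, model: ModelAnswer): string {
      if (model.confidence >= CONFIDENCE_THRESHOLD) {
        return model.text; // confident enough to use the generated answer
      }
      // Deterministic fallback: a vetted answer, or an honest escalation.
      return knownGoodAnswers[issueType]
        ?? "I'm not confident enough to answer this; routing to a human expert.";
    }
    ```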
