Exploring the Potential of Virtual Agents in Explainable AI Interaction Design

This paper explores the impact of virtual agents in Explainable AI (XAI) interaction design, focusing on their effect on user trust. A user study was conducted with a speech recognition system to analyze how different modalities of virtual agents (text, voice, and embodied presence) influence user perceptions of trust and explainability.

Key Features

XAI in Speech Recognition:
- Uses LIME (Local Interpretable Model-Agnostic Explanations) to generate visual explanations (see the code sketch after this post).
- Spectrogram-based explanations for keyword classification.

Experiment Setup:
- 60 participants divided into 4 groups: no agent, text agent, voice agent, and virtual embodied agent.
- Participants spoke predefined keywords into a speech recognition system.
- AI-generated visual explanations were presented using different agent modalities.

Correlation Between Trust and Virtual Agents:
- Increasing the human-likeness of virtual agents enhanced user trust in AI systems.
- Users preferred voice output and visual embodiment over text-only interactions.

Experiment & Evaluation

Test-Group Trust Comparison:
- A linear trend was observed in which trust increased from the no-agent group to the fully embodied agent.
- The embodied virtual agent scored highest on trust and helpfulness.

Agent Evaluation:
- Users found embodied agents engaging due to their voice, gestures, and expressions.
- The text-agent group had the lowest interaction levels.

XAI Visual Evaluation:
- Users wanted more linguistic explanations alongside the visualizations.
- Many requested interactive features, such as clickable explanations, for better understanding.

Keep learning and keep growing 😊!!
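The post only names the technique, so here is a hedged, self-contained sketch of what a LIME explanation over a keyword spectrogram can look like. This is not the paper's code: the three-class stub classifier, the array sizes, and the sample counts are illustrative assumptions; only the lime package's image-explainer API is real.

```python
# Hedged sketch (not the paper's code): LIME explanation over a spectrogram,
# using a random stub classifier in place of a trained keyword model.
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)

def classify_batch(images: np.ndarray) -> np.ndarray:
    """Stub: return fake probabilities over 3 keyword classes for each spectrogram."""
    logits = rng.random((len(images), 3))
    return logits / logits.sum(axis=1, keepdims=True)

# A fake 64x128 (frequency x time) spectrogram; LIME's image explainer expects RGB.
spectrogram = rng.random((64, 128))
spectrogram_rgb = np.repeat(spectrogram[..., np.newaxis], 3, axis=-1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    spectrogram_rgb, classify_batch, top_labels=1, num_samples=200
)

# Mask of the time-frequency regions that most support the predicted keyword;
# this is the kind of visual explanation that gets rendered for the user.
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(label, positive_only=True, num_features=5)
print("highlighted segments:", int(mask.sum()))
```

In the study's setup, a visual of this kind was what the different agent modalities (text, voice, embodied) presented to participants.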
User trust in ambient computing
Summary
User trust in ambient computing refers to the confidence people have that smart, AI-driven technologies in their environment will act predictably, safely, and transparently. Building trust in these systems is crucial because users need to feel that they can depend on technology to understand their intent and perform tasks reliably.
- Prioritize transparency: Share clear explanations and updates about how your AI system works and what it is doing so users feel informed and comfortable.
- Design for reliability: Focus on consistent performance and predictable behavior, making sure your technology handles errors gracefully and communicates any changes directly to users.
- Empower user control: Provide options for users to adjust settings, override decisions, or offer feedback, helping them feel safe and supported when interacting with ambient computing solutions.
-
The better your ML system gets, the more painful its failures become.

When a system works 95% of the time, people start to trust it. They stop checking. They assume it just works. And then? One strange failure. Unexplained. Misaligned. Just off. And trust is gone.

This happens all the time in:
- Agent-based systems
- RAG pipelines
- Customer-facing applications
- Even a simple churn model that flags your biggest client by mistake

Reliability builds trust. Trust increases risk. That's the paradox. Success raises expectations. And when your system slips, it doesn't feel like a bug. It feels like a breach of trust.

What can you actually do about it? This is not just for production teams. Even small portfolio projects benefit from this mindset.

1. Don't just optimize for accuracy.
• Track weird and rare inputs
• Tag errors by why they happened, not just how many
• Log how users react: ignore, undo, repeat

2. Build guardrails and fallbacks.
• Add a check: if confidence is low, return a safe default
• Show a confidence score or a quick explanation
• Catch hallucinations and infinite loops with basic logic

3. Design for shifting trust.
• First-time users need context and safety
• Repeat users can become overconfident, so remind them
• If your model or data changes, let the user know

In one RAG project, we noticed some user queries were too vague or out-of-scope. Instead of letting the system hallucinate, we added a simple check (sketched in code after this post):
→ If the similarity score was low and the top documents were generic, we showed: "Sorry, I need more context to help with that."

It wasn't fancy. But it prevented bad answers and kept user trust. Because these systems don't just run on code. They run on trust.

If you're building AI for real use, not just for a learning project:
→ Start small.
→ Add a fallback.
→ Explain your outputs.
→ Track how your system performs over time.

These aren't advanced tricks. They're good engineering.

💬 What's one thing you could add to your current project to make it more reliable?
♻️ Repost to help someone in your network.
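For concreteness, here is a minimal sketch of that low-confidence fallback. The function names, the 0.35 similarity floor, and the toy retriever are illustrative assumptions rather than details from the project.

```python
# Minimal sketch of a low-confidence RAG fallback: refuse instead of
# hallucinating when retrieval support for the query is weak.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RetrievedDoc:
    text: str
    similarity: float   # similarity between query and document embeddings

SIMILARITY_FLOOR = 0.35  # assumed threshold; tune per embedding model and corpus
FALLBACK_MESSAGE = "Sorry, I need more context to help with that."

def answer_query(query: str,
                 retrieve: Callable[[str], list[RetrievedDoc]],
                 generate: Callable[[str, list[RetrievedDoc]], str]) -> str:
    docs = retrieve(query)
    top_score = max((d.similarity for d in docs), default=0.0)

    # Guardrail: return a safe default when the best match is too weak.
    if top_score < SIMILARITY_FLOOR:
        return FALLBACK_MESSAGE
    return generate(query, docs)

# Toy usage: a vague query whose best match scores below the floor gets the fallback.
weak_retriever = lambda q: [RetrievedDoc("generic boilerplate", 0.21)]
echo_generator = lambda q, docs: f"Answer to '{q}' grounded in {len(docs)} documents."
print(answer_query("help?", weak_retriever, echo_generator))  # -> fallback message
```

The design choice is deliberately boring: a single threshold check in front of generation is often enough to stop the worst hallucinations while leaving normal queries untouched.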
-
Over the past few weeks, I've spent hours observing small business owners as they integrate AI Agents into their daily workflows. The adoption journey is anything but linear: it starts with enthusiasm, dips into frustration, and then, if designed well, makes the leap into more sustained usage. I am going to call this moment "the trust leap" - a shift from early curiosity to deeper reliance.

Here's what that journey often looks like:

Day 1: Enthusiasm with a Side of Skepticism
"What can it actually do? Will it really track leads and keep my data up to date? I'm not sure I trust it to draft my emails just yet."

Week 1: Discovery
A "wow" moment - perhaps the Agent completes a task in seconds that would have taken hours. The potential starts to become real.

Week 2: Yays and Nays
Excitement meets reality. "I could write a personalized email to 20 different customers in just a few seconds - amazing!" "But... it couldn't handle my complex spreadsheet with all the formulas. And why do I have to explain the same thing multiple times?"

Weeks 3-5: Make or Break
This is the most critical phase - the moment where user trust is either won or lost. If designed well, users start making the big jump: relying on the AI Agent, feeling confident in its capabilities, and seeing it as a reliable assistant. If designed poorly - if the AI Agent remains inconsistent, hard to control, or unreliable - users will abandon it entirely.

Weeks 6-7: If All Goes Well, Trust Spreads
"I feel I have a partner. It's saving me time and money. I want my team to use it too."

Week 8: A Critical Asset
"Please don't take it away. I'd have to hire someone to do what it does."

While most technology products follow a similar user adoption path, one key distinction I am seeing with AI Agents is that people don't just see them as tools; they see them as collaborators. People want to build trust that the system understands their intent, adapts to their needs, and won't fail them at a critical moment. They don't just need to know how to use it - they need to believe they can depend on it.

Designing for user trust will be a critical factor in unlocking AI Agent adoption.

Curious what you are learning from seeing AI Agents adopted by "real users", not just tech enthusiasts :)
-
Earning Users' Trust with Quality

When users interact with an AI-driven product, they may not see your data pipelines, but they definitely notice when the system outputs something that doesn't make sense. Each unexpected error chips away at credibility. Conversely, consistently accurate, sensible recommendations gradually build lasting trust. The secret to winning that trust? Prioritize data quality above all else.

How data quality fosters user confidence:

Consistent performance: Reliable data inputs yield stable outputs. Users become comfortable knowing the AI rarely "goes rogue" with bizarre suggestions.

Predictable behavior: High-quality data preserves known patterns. When the AI behaves predictably, reflecting real-world trends, users can rely on it for critical tasks.

Transparent provenance: Even if users don't dig into the data details, they appreciate knowing there's a rigorous process behind the scenes. When you communicate your governance efforts, without overwhelming them, you reinforce trust.

Error mitigation: When anomalies do appear, high-quality data pipelines often include fallback mechanisms (e.g., default rules, human-in-the-loop checks) that stop glaring mistakes from reaching end users. (A small code sketch of this idea follows this post.)

Consequences of ignoring data quality:

User frustration: Imagine an e-commerce AI recommending out-of-stock products or the wrong sizes repeatedly. Frustration mounts quickly.

Brand erosion: A few high-profile misfires can tarnish your company's reputation. "AI that goes haywire" becomes a memorable tagline that sticks.

Decreased adoption: Users who lose faith won't invest time learning or relying on your platform. They revert to manual processes or competitor tools they perceive as more reliable.

Building user trust isn't a one-time effort; it's continuous vigilance. Regularly audit your data sources, validate inputs, and refine processes so your AI outputs remain solid. Over time, this dedication to data quality cements confidence, turning skeptics into loyal advocates who believe in your product's reliability.
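As an illustration of the fallback idea above (not from the post itself), here is a lightweight validation gate for the e-commerce example: it filters a recommender's output against known-good catalog data and falls back to a default rule so glaring mistakes never reach the end user. The function names, SKUs, and limit are assumptions.

```python
# Illustrative sketch: validate AI recommendations against the catalog and
# apply a default rule (bestsellers) when the model's list is mostly invalid.
from dataclasses import dataclass

@dataclass(frozen=True)
class Product:
    sku: str
    in_stock: bool

def safe_recommendations(ai_recs: list[str],
                         catalog: dict[str, Product],
                         bestsellers: list[str],
                         limit: int = 5) -> list[str]:
    # Keep only SKUs that exist in the catalog and are actually in stock.
    valid = [sku for sku in ai_recs if sku in catalog and catalog[sku].in_stock]

    # Default rule: top up with in-stock bestsellers if the model's list falls short.
    if len(valid) < limit:
        valid += [sku for sku in bestsellers
                  if sku not in valid and catalog[sku].in_stock]
    return valid[:limit]

# Example: the recommender suggests an out-of-stock item and an unknown SKU.
catalog = {"A1": Product("A1", True), "B2": Product("B2", False), "C3": Product("C3", True)}
print(safe_recommendations(["B2", "ZZ9", "A1"], catalog, bestsellers=["C3", "A1"]))
# -> ['A1', 'C3']
```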
-
User trust is the biggest problem for AI products. No matter how powerful it is. No matter how accurate. No matter how "smart" it claims to be. If users don't trust your AI, they won't use it.

✦ Here's what most AI product teams get wrong: they focus on what the AI can do, but forget how it makes people feel. AI shouldn't just be functional. It should feel safe, supportive, and human. Let's talk about how to design trust into your AI product. ✦

1. Let users stay in control.
People trust AI more when they can steer it. Not when it takes over.
✅ Let users start/stop features
✅ Give them override options
✅ Adjust how "aggressive" the AI behaves

2. Design for failure - gently.
AI will mess up. The UX either keeps users or loses them.
✅ Use warm, clear fallback messages
✅ Guide users to recover without blame

3. Use a friendly, supportive tone.
Language shapes emotion. Cold = intimidating. Warm = trusted.
✅ Rewrite system text in your brand voice
✅ Use coaching-style prompts, not commands

4. Start with smart defaults.
Users don't want 10 setup steps. They want to get going fast.
✅ Turn on helpful features by default
✅ Let them opt out later, not opt in

5. Show that the AI is learning.
If users see progress, they'll trust the system more.
✅ Reflect behavior-based improvements
✅ Acknowledge positive change

6. Add human moments.
People trust people, not code.
✅ Add warm phrasing and small acknowledgments
✅ Use "teammate tone" in key moments

✦ The AI doesn't need to be perfect. But it does need to feel predictable, optional, and human. That's what builds trust. And trust is what gets adoption.

What's the best UX detail you've seen in an AI product lately?

♻️ Share if it was useful. 🔔 Follow Valentine Boyev for more updates!
-
It's not always the AI model that users distrust - it's the UX: how output is presented to the user and how much control the user has over how that output is used. Same model, two interfaces: one builds trust, the other builds complaints. I've seen it firsthand, through successes and mistakes: give users transparency and control, and even imperfect models feel more reliable. Assume the model is perfect and automate it all - good night. It is important to put yourself in the user's shoes to understand what matters to them and what feels risky, and to let them verify rather than assuming you are right. This is why I was thrilled to find a framework that puts words to it: "CAIR" (Confidence in AI Results). There's a great overview of it in this link! Highly recommend this read to any PMs and AI engineers thinking about how to ship AI people actually use. Trust isn't just earned... it's designed. https://lnkd.in/gnZCwT66 LangChain, Assaf Elovic and Harrison Chase
-
Ever feel like AI products are easy to build but hard to trust? That's because most teams jump to code before they consider the consequences.

If you're designing with AI, these 6 Core AI Design Principles can make or break user trust:

1️⃣ Transparency – Make it clear how the AI works and what it's doing.
2️⃣ User Control – Let users influence outcomes, not just receive them.
3️⃣ Trust & Explainability – People trust what they can understand.
4️⃣ Fairness & Bias – Audit regularly to avoid discrimination and blind spots.
5️⃣ Ethical Responsibility – Design with intention, not just innovation.
6️⃣ Error Handling – Plan for mistakes and guide users through them.

Remember: Designing for AI = Designing for trust. Products may impress, but principles create impact. ✨

👨‍💻 Created by Mian Ali Hassan, Content Creator & Visual Designer. Follow for more career-boosting design content.

#AIDesign #EthicalAI #DesignPrinciples #UXDesign #HumanCenteredAI #ResponsibleAI #AIUX #ExplainableAI #AIForGood #AliHassanUX #ProductDesign #AITrust
-
🤖✨ Designing with AI? Don't go in blind.

AI is powerful, but building great human–AI interactions isn't always straightforward. That's why Microsoft Research created the Guidelines for Human-AI Interaction: 18 practical rules to help teams design AI experiences that feel useful, trustworthy, and human-centered.

1️⃣ 4 Phases of Interaction
The guidelines are grouped into phases that map to the user journey:
• Initially → set expectations & onboard.
• During Interaction → guide, give feedback, recover from errors.
• When Wrong → handle mistakes & uncertainty gracefully.
• Over Time → learn, adapt, and build trust.

2️⃣ How to Apply HAX
The guidelines act as a checklist during design and evaluation of AI-powered experiences. Teams can:
• Use them in design reviews to ensure responsible interaction patterns.
• Apply them in usability testing to spot trust and comprehension issues.
• Prioritise them when integrating AI into existing products.

3️⃣ Impact on UX & Trust
Applying these guidelines helps create AI experiences that feel:
• Transparent: Users understand what the AI can (and can't) do.
• Controllable: Users feel in charge, not at the mercy of the system.
• Trustworthy: Clear communication of uncertainty reduces over-reliance and frustration.
• Adaptive: AI improves with continued use, aligning more closely with user needs.

🔎 Pros & Cons
✳️ Advantages:
• Evidence-based and validated through user studies.
• Cover the full lifecycle of interaction (not just one-off tasks).
• Practical and flexible across domains (consumer apps, B2B, enterprise).
• Strengthen user trust in AI systems.
❌ Limitations:
• Guidelines are high-level; teams still need domain-specific heuristics.
• Can be challenging to apply in rapid prototyping.

Reference: https://lnkd.in/e42BAVr5

💬 Have you used these guidelines in your design or AI projects? 👇 Drop your thoughts - I'll add links to resources in the comments.

#UX #AIUX #HumanAIInteraction #DesignGuidelines #ResponsibleAI #UserExperience
-
75% of organizations implementing AI fail to exploit its full value. Why? A lack of trust and understanding between the system and its users. Trust determines whether your brilliant AI system gets shelved 🚫 or revolutionizes an organization 🚀

What exactly is trust in this context? 🤔
Trust is the attitude that an agent, human or AI, will help me achieve my goals in situations where I may be vulnerable or uncertain. It's that gut feeling that says, "This system will do what I need it to do when I need it done."

We begin our "relationship" with AI because we have tasks to accomplish. The exchange comes down to whether the functionality works or not. 👍🏽 👎🏽 This transactional interaction carries the same psychological principles as human-to-human interpersonal trust.

So, what can impact trust in AI?
1️⃣ Individual differences. Some people are naturally more inclined to trust technology than others.
2️⃣ Past experiences. Interaction with similar technologies shapes initial trust levels. A previous disappointing experience with an AI tool might create skepticism toward all AI systems, regardless of their actual capabilities.

What can you do?
✅ Examine the psychology behind trust in AI and automated systems
✅ Utilize frameworks for building appropriate trust

Your goal is to create AI systems that not only perform well technically but also gain human confidence and adoption. Trust in AI isn't a nice-to-have. It's foundational. 💯

#humancentereddesign #productmanagement #innovation #AI #systemsthinking #humanfactors
-
Trust is one of the biggest drivers (or blockers) for adoption in AI products. Take this conversation I had on Monday as an example:

Customer: "Using Briefcase I published 70 transactions in like, thirty seconds or something. It was just click publish because I have more reassurance in the software, you know, it's good."
Me: "And how long would it normally take you to publish 70 transactions in the tool you were using previously?"
Customer: "An hour? An hour and a bit, I think. That's it."

There are also some practical differences Briefcase offers versus the incumbent tool - multi-line split, HMRC-compliant VAT rate assignment, market-leading accuracy, dynamic item-based categorisation, etc. - but these are just features. And even with all that, if the customer still feels the need to review every transaction in great detail, the time savings will be smaller (say, 3X rather than 120X).

To unlock real efficiency gains with AI you need to be able to trust it, or as our customer put it, "have more reassurance". One of the main ways we deliver this in Briefcase is through explainability. Rather than treating AI as a black box where miscellaneous data goes in and miscellaneous data comes out, we've configured our AI agents to justify every decision they make and expose this to the user. This transparency creates trust and gives our users a clear audit trail on every step of the workflow.

As we develop more complex agents we are always thinking of new ways of breaking down their workflows and adding more explainability... stay tuned 💼
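To make the "justify every decision" pattern concrete, here is a purely illustrative sketch. It is not Briefcase's implementation; the field names, example action, and in-memory trail are assumptions. The idea is simply that each agent decision is recorded with a human-readable justification and the evidence behind it, so the user sees both the answer and the audit trail.

```python
# Illustrative pattern: every agent decision carries a justification and
# evidence, and is appended to an audit trail the user can inspect.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    action: str                      # e.g. "categorise_transaction"
    result: str                      # what the agent decided
    justification: str               # shown to the user alongside the result
    evidence: list[str] = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail: list[Decision] = []

def record(decision: Decision) -> Decision:
    audit_trail.append(decision)     # persisted trail, one entry per workflow step
    return decision

record(Decision(
    action="categorise_transaction",
    result="Office supplies",
    justification="Supplier and line items match prior office-supply purchases.",
    evidence=["invoice_line: Printer paper x3", "history: 12 similar transactions"],
))
print(audit_trail[-1].justification)
```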