Digital trust in assistive technologies

Explore top LinkedIn content from expert professionals.

Summary

Digital trust in assistive technologies means that people feel confident relying on digital tools—like AI-powered healthcare apps or robotic assistants—because these technologies are safe, transparent, and respect users’ rights and privacy. Building this trust is essential for patients, clinicians, and organizations to use and benefit from these innovations in everyday life.

  • Prioritize transparency: Clearly explain how your digital tool works, where its data comes from, and what decisions it makes to help users feel secure and informed.
  • Encourage human oversight: Always ensure that professionals can review and override technology recommendations, keeping real experts involved in critical decisions.
  • Build social confidence: Foster trust by involving respected team members and advocates who support and model responsible use of assistive technologies, influencing others to trust and adopt these tools.
Summarized by AI based on LinkedIn member posts
  • View profile for Sigrid Berge van Rooijen

    Helping healthcare use the power of AI⚕️

    24,195 followers

    Why are you ignoring a crucial factor for trust in your AI tool? By overlooking crucial ethical considerations, you risk undermining the very trust that drives adoption and effective use of your AI tools. Ethics in AI innovation ensures that technologies align with human rights, avoid harm, and promote equitable care, building trust with patients and healthcare practitioners alike. Here are 12 important factors to consider when working towards trust in your tool.
    1. Transparency: Clearly communicate how AI systems operate, including data sources and decision-making processes.
    2. Accountability: Establish clear lines of responsibility for AI-driven outcomes.
    3. Bias Mitigation: Actively identify and correct biases in training data and algorithms.
    4. Equity & Fairness: Ensure AI tools are accessible and effective across diverse populations.
    5. Privacy & Data Security: Safeguard patient data through encryption, access controls, and anonymization.
    6. Human Autonomy: Preserve patients’ rights to make informed decisions without AI coercion.
    7. Safety & Reliability: Validate AI performance in real-world clinical settings, and test AI tools in diverse environments before deployment.
    8. Explainability: Design AI outputs that clinicians can interpret and verify.
    9. Informed Consent: Disclose AI’s role in care to patients and obtain explicit permission.
    10. Human Oversight: Prevent bias and errors by maintaining clinician authority to override AI recommendations.
    11. Regulatory Compliance: Adhere to evolving legal standards for AI in healthcare.
    12. Continuous Monitoring: Regularly audit AI systems post-deployment for performance drift or new biases, addressing evolving risks and sustaining long-term safety.
    What are you doing to increase trust in your AI tools?
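
The last factor, continuous monitoring, is one that teams can start automating early. Below is a minimal illustrative sketch of a post-deployment audit that compares a recent window of logged predictions against a validation baseline and flags performance drift. The weekly window, the 5-point accuracy-drop threshold, and the example data are assumptions for illustration, not a prescribed clinical standard.

```python
# Minimal post-deployment drift audit (illustrative sketch, not a clinical standard).
# Assumes predictions and later-confirmed outcomes are logged; the max_drop threshold
# and the sample pairs below are arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class AuditResult:
    baseline_accuracy: float
    recent_accuracy: float
    drifted: bool

def audit_performance(baseline: list[tuple[int, int]],
                      recent: list[tuple[int, int]],
                      max_drop: float = 0.05) -> AuditResult:
    """Compare recent (prediction, outcome) pairs against a validation baseline."""
    def accuracy(pairs):
        return sum(pred == truth for pred, truth in pairs) / len(pairs)

    base_acc = accuracy(baseline)
    recent_acc = accuracy(recent)
    return AuditResult(base_acc, recent_acc, drifted=(base_acc - recent_acc) > max_drop)

if __name__ == "__main__":
    baseline = [(1, 1), (0, 0), (1, 1), (0, 1), (1, 1)]   # validation-time pairs
    recent   = [(1, 0), (0, 0), (1, 1), (0, 1), (1, 0)]   # last week's pairs
    result = audit_performance(baseline, recent)
    if result.drifted:
        print(f"Drift alert: accuracy fell from {result.baseline_accuracy:.2f} "
              f"to {result.recent_accuracy:.2f}; trigger a manual review.")
```

The same loop can be repeated per patient subgroup to catch new biases, not just an overall accuracy drop.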

  • View profile for Jan Beger

    Global Head of AI Advocacy @ GE HealthCare

    84,919 followers

    This paper examines how trust is built or challenged among patients and healthcare professionals using AI-based triage systems in Swedish primary care.
    1️⃣ Trust relies on patients’ ability and willingness to provide accurate information during AI-guided symptom reporting.
    2️⃣ Some patients exaggerate symptoms to gain attention, driven by fears the AI might dismiss their concerns.
    3️⃣ Patients’ digital skills and prior experience with similar tools influenced how effectively they used the AI application.
    4️⃣ Concerns about how symptom data is used and stored shaped how openly patients interacted with the AI system.
    5️⃣ AI outputs must align with healthcare professionals’ clinical reasoning, especially in complex or nuanced cases.
    6️⃣ Experienced professionals were more skeptical of AI suggestions, using them as checks rather than guides, unlike less experienced peers.
    7️⃣ The AI’s rigid, symptom-focused questioning often failed to capture patient complexity, limiting trust and utility.
    8️⃣ Emotional responses, especially in vulnerable situations, shaped user trust more than cognitive evaluations alone.
    9️⃣ Professional oversight was critical—healthcare workers acted as a safeguard against potential AI errors or oversights.
    🔟 Both groups emphasized the need for clear roles, responsibilities, and guidelines for interpreting and acting on AI-generated information.
    ✍🏻 Emilie Steerling, Petra Svedberg, Per Nilsen, Elin Siira, Jens Nygren. Influences on trust in the use of AI-based triage—an interview study with primary healthcare professionals and patients in Sweden. Frontiers in Digital Health. 2025. DOI: 10.3389/fdgth.2025.1565080

  • View profile for Roman Briker

    Behavioral Scientist | Assistant Professor in OB @ UM | Psychologist & Consultant, Coach, and Keynote-Speaker

    3,946 followers

    🔬 Paper Alert: Trust in AI is not built in isolation – it’s social. 🤖 Proud supervisor moment: My (and Simon B. de Jong’s) doctoral student Türkü Erengin has just published her very first paper, "You, Me, and the AI: The Role of Third-Party Human Teammates for Trust Formation Toward AI Teammates," in Wiley’s Journal of Organizational Behavior. 🤖 So, what does this research tell us? AI teammates are becoming a reality in modern workplaces. But while research has focused on how humans individually evaluate AI, Türkü’s work brings a fresh perspective: trust in AI is strongly shaped by, and learned from, the people around us. Using two main (plus two supplementary) studies, including a really cool observational, incentivized study with human-AI teams (featuring Temi, a real GPT-powered physical service robot, see picture), this paper shows that:
    ✅ If a human teammate trusts an AI, their colleagues are more likely to trust it too. This effect is not only quite strong, it is also stable when controlling for people’s own initial preferences after trying the AI for the first time, and it holds true in contexts where actual money is on the table!
    ✅ This effect disappears if the human teammate themselves is seen as untrustworthy.
    ✅ Trust in AI is not just about the AI’s own reliability—it depends on social context and human relationships.
    🚀 Why does this matter?
    1️⃣ Organizations implementing AI should focus on social dynamics and context rather than just AI performance. It does not (only) matter how well the AI functions—if relevant others around employees don’t trust the AI, employees won’t either.
    2️⃣ Building trust in AI requires trusted human advocates—if key employees are skeptical, adoption suffers.
    3️⃣ AI trust calibration is crucial: over-reliance and under-reliance on AI both have risks, and leaders should consider social influences when introducing AI teammates.
    🎉 Huge congratulations to Türkü for this important contribution! If you’re interested in how social cognitive theory can explain trust in AI teams, check out the full paper. What makes this even more special? JOB is the journal where my first academic paper was published—and where my own PhD supervisor (Frank Walter) had their first journal publication. A true academic full-circle moment! 🎓🔁 I’d love to hear from others: Have you noticed social influences shaping how people trust AI in your workplace? Have you ever seen CEOs, leaders, or colleagues modeling (or refraining from modeling) trust in AI? #AI #TrustInAI #HumanAITeams #OrganizationalBehavior #FutureOfWork #Leadership #AcademicMentorship

  • View profile for Amie Leighton

    Founder @ Allia Health | Creating tech that’s on clinicians’ side

    5,928 followers

    Yesterday I spoke with hospital AI advisor Dr. Adenike ‘Omo’ D. about what matters to clinicians regarding AI. She highlighted five core elements: Data - Bias - Explainability - Transparency - Human-Centred Design
    1. Where's the data from? Diversity in data drives diversity in performance.
    2. Bias checks - what's done to actively minimise it? No system is neutral.
    3. Explainability - opening the "black box". Trust requires understanding how conclusions are drawn.
    4. Transparency - clearly communicating limitations. Progress derives from an honest appraisal of strengths and weaknesses.
    5. Human-centred design - integrating simply into workflows. Has it been built with clinicians onboard?
    The last point really stood out. Technology has often not considered real clinical context, meaning that systems - like EHRs - have often created more work rather than less. Dr. Omo shared an example of a clinician-centric design project she worked on. She had 4 teams who shadowed workflows across different settings. Then, they collaboratively mapped user journeys, pain points and custom protocols, constantly asking: 'How can we simplify? Reduce the steps?' The product proved not just usable but really valuable: "Clinicians who typically didn't like technology would come and tell me how much they love it." There is massive potential for ethical AI in healthcare. Still, the magic only happens when diverse groups work together towards one shared goal: helping all people live happier and fuller lives. My TL;DR: Adoption of AI in healthcare will be governed by trust. #aihealthcare #aihealth #digitalmentalhealth

  • View profile for Junaid Kalia, MD

    Founder & CEO at SaveLife.AI | Healthcare AI Innovator & Clinical Neurologist | Host of Signal and Symptoms Podcast | Driving AI to Transform Healthcare | Hacker, Hustler & Hipster

    17,947 followers

    𝗗𝗿. 𝗚𝗼𝗼𝗴𝗹𝗲 𝘁𝗼 𝗗𝗿. 𝗖𝗵𝗮𝘁𝗚𝗣𝗧: 𝗧𝗿𝘂𝘀𝘁 𝗘𝘃𝗶𝗱𝗲𝗻𝗰𝗲, 𝗡𝗼𝘁 𝗛𝘆𝗽𝗲 🚨 New research from MIT Media Lab, Stanford Medicine, and IBM Research reveals a critical blind spot in our digital health era: people struggle to distinguish between AI-generated and doctor-written medical advice—and often trust AI just as much, if not more. 🔬 In a study of 300 participants, AI-generated medical responses—regardless of accuracy—were rated just as valid, trustworthy, and complete as those from real physicians. In fact, high-accuracy AI answers were seen as 𝘮𝘰𝘳𝘦 trustworthy and satisfactory than doctors’. Alarmingly, even low-accuracy AI responses were perceived on par with doctors’ advice, and participants indicated a strong willingness to follow this potentially harmful guidance. Key findings:
    • 𝗜𝗻𝗱𝗶𝘀𝘁𝗶𝗻𝗴𝘂𝗶𝘀𝗵𝗮𝗯𝗹𝗲 𝗔𝗱𝘃𝗶𝗰𝗲: Participants could not reliably tell AI from doctor responses (accuracy ~50%).
    • 𝗢𝘃𝗲𝗿𝘁𝗿𝘂𝘀𝘁 𝗶𝗻 𝗔𝗜: Both laypeople and experts rated high-accuracy AI responses as more valid and trustworthy than doctors’—especially when labeled as coming from a doctor.
    • 𝗗𝗮𝗻𝗴𝗲𝗿 𝗼𝗳 𝗜𝗻𝗮𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗔𝗜: Low-accuracy AI advice was still trusted and acted upon, highlighting a real risk of misinformation and harm if AI is used unsupervised in healthcare.
    • 𝗘𝘅𝗽𝗲𝗿𝘁 𝗕𝗶𝗮𝘀: Even physicians rated AI responses higher when blinded to the source, but were more critical when they knew the answer came from AI.
    𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: As generative AI becomes more prevalent in healthcare, we must prioritize rigorous oversight and collaboration with medical professionals. Trust should be earned through evidence—not hype or the illusion of authority. AI can extend the reach of healthcare, but only if its outputs are accurate, transparent, and always reviewed by real experts. Let’s build a future where technology augments, not replaces, the wisdom and responsibility of medical professionals. #AI #Healthcare #Trust #DigitalHealth #EvidenceBasedMedicine
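
For readers who want to check a claim like "participants could not reliably tell AI from doctor responses (accuracy ~50%)" on their own evaluation data, a simple approach is to test discrimination accuracy against chance with a binomial test. The counts below are hypothetical placeholders for illustration, not the study's data.

```python
# Illustrative check: can raters distinguish AI-written from doctor-written advice
# better than chance? The counts are hypothetical, not the MIT/Stanford/IBM data.
from scipy.stats import binomtest

n_judgments = 300   # total source-identification judgments collected
n_correct = 156     # judgments where the rater identified the source correctly

result = binomtest(n_correct, n_judgments, p=0.5, alternative="greater")
print(f"Observed accuracy: {n_correct / n_judgments:.2%}")
print(f"p-value vs. 50% chance: {result.pvalue:.3f}")
# A large p-value means the raters' accuracy is statistically indistinguishable
# from guessing, which is what "~50% accuracy" implies.
```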

  • View profile for Pawan Kohli

    Advancing AI Solutions in Healthcare | Ex-Unicorn Startup | Startup advisor | Investor Relations | Connector | Speaker | Mentor

    16,983 followers

    A research paper published in a Nature Portfolio journal investigates the factors influencing user adoption of #AI #health #assistants using an extended version of the Unified Theory of Acceptance and Use of Technology (UTAUT) model.
    ➡️ Research Objective and Framework
    - The study aimed to identify factors influencing users' intentions to use AI health assistants and enhance understanding of acceptance mechanisms for this technology.
    - Researchers extended the traditional UTAUT model by incorporating two additional variables, perceived trust (PT) and perceived risk (PR), alongside the original constructs of performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC).
    ➡️ Methodology
    - Researchers conducted an online survey with 373 Chinese participants using the IFLY Healthcare app as experimental material.
    - The IFLY Healthcare app is an AI-powered health management tool that integrates features including disease self-examination, report interpretation, drug inquiries, and medical information searches.
    - Participants watched a tutorial video, used the app for at least two interactions, then completed a questionnaire using a seven-point Likert scale. The data was analyzed using covariance-based structural equation modeling (CB-SEM).
    ➡️ Key Findings
    UTAUT Model Validation:
    - Performance expectancy, effort expectancy, and social influence significantly positively affected behavioral intention to use AI health assistants.
    - Facilitating conditions did not show a significant impact on behavioral intention.
    - The original UTAUT structure proved robust in the AI health assistant context.
    Trust and Risk Relationships:
    - Perceived trust was closely related to performance expectancy, effort expectancy, and behavioral intention.
    - Perceived trust negatively impacted perceived risk.
    - Perceived risk adversely affected behavioral intention to use the technology.
    ➡️ Participant Demographics
    - The study included 114 men and 259 women, with the majority (80.7%) aged between 19-39 years and holding bachelor's degrees (82.04%). Notably, 62.73% of participants had prior experience using the IFLY Healthcare app.
    ➡️ Practical Implications
    - Findings provide valuable insights for developers and operators of AI health assistants, particularly highlighting the importance of building user trust while minimizing perceived risks.
    - The study demonstrates that users' willingness to adopt AI health technology depends not only on its perceived usefulness and ease of use but also significantly on trust factors and risk perceptions.
    ➡️ Healthcare Context
    - AI health assistants are positioned as significant tools for personalized healthy lifestyle recommendations and decision support, with applications ranging from health management to chronic disease support.
    - The technology shows potential to reduce hospitalization rates, outpatient visits, and treatment requirements while improving early disease diagnosis and treatment efficiency.
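
For anyone curious how an extended UTAUT model of this shape is specified in practice, here is a minimal sketch using the open-source semopy package for structural equation modeling. The item names (pe1, ee1, ...), the survey.csv file, and the three-indicator-per-construct layout are hypothetical placeholders, not the paper's actual instrument or code.

```python
# Hypothetical sketch: extended UTAUT (UTAUT + perceived trust and perceived risk)
# specified as a structural equation model with semopy. Column names and survey.csv
# are placeholders; they do not reproduce the published study.
import pandas as pd
from semopy import Model

model_desc = """
# measurement model: latent constructs measured by 7-point Likert items
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
SI =~ si1 + si2 + si3
FC =~ fc1 + fc2 + fc3
PT =~ pt1 + pt2 + pt3
PR =~ pr1 + pr2 + pr3
BI =~ bi1 + bi2 + bi3

# structural model: trust reduces perceived risk; intention depends on
# the UTAUT constructs plus trust and risk
PR ~ PT
BI ~ PE + EE + SI + FC + PT + PR
"""

survey = pd.read_csv("survey.csv")   # one row per respondent, one column per item
model = Model(model_desc)
model.fit(survey)
print(model.inspect())               # path estimates, standard errors, p-values
```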

  • View profile for Harvey Castro, MD, MBA.

    ER Physician | Chief AI Officer, Phantom Space | AI & Space-Tech Futurist | 5× TEDx | Advisor: Singapore MoH | Author ‘ChatGPT & Healthcare’ | #DrGPT™

    49,504 followers

    #AIAgents in #Healthcare (Part 3 of 5): Regulation and Trust: Why Transparency Matters
    Artificial intelligence holds enormous promise for transforming healthcare, yet trust and regulatory compliance are significant barriers to widespread adoption. Studies consistently show patients trust human healthcare providers more than AI-driven systems, largely due to the opacity of #AI decision-making processes. Regulatory agencies worldwide, including the #FDA and #WHO, are actively working to establish guidelines ensuring AI's safety, transparency, and reliability. For instance, the FDA’s latest AI/Machine Learning framework emphasizes continuous validation, while WHO guidelines stress the importance of transparency, data privacy, and risk management. To build trust, healthcare AI must prioritize:
    • Explainability: Clear, understandable AI decisions.
    • Bias Mitigation: Eliminating unfair biases through rigorous validation.
    • Data Security: Strict adherence to privacy standards (#HIPAA, #GDPR).
    Trust isn’t merely regulatory—it's fundamental to patient acceptance and clinical success. AI transparency isn’t optional; it's essential. Have you encountered AI transparency concerns in your practice or with patients? I'd value hearing your perspective on overcoming these challenges. Follow me for the next post: "Hype vs. Reality – Separating AI Promises from Clinical Proof." #HealthcareAI #AIinMedicine #PatientTrust #Regulation #DigitalHealth #HealthTech #AItransparency #DoctorGPT #DrGPT #EthicalAI
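
On the bias-mitigation point, one lightweight validation step is to compare model performance across patient subgroups before sign-off. The sketch below is a generic illustration: the group labels, the example records, and the 0.10 sensitivity-gap threshold are made-up assumptions, not a regulatory requirement.

```python
# Illustrative subgroup audit: compare sensitivity (true-positive rate) across
# patient groups to surface potential bias. Records and the 0.10 gap threshold
# are placeholders, not a compliance rule.
from collections import defaultdict

records = [
    # (group, prediction, actual)
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: [0, 0])      # group -> [true positives, actual positives]
for group, pred, actual in records:
    if actual == 1:
        counts[group][1] += 1
        counts[group][0] += pred

sensitivity = {g: tp / pos for g, (tp, pos) in counts.items()}
print("Sensitivity by group:", {g: round(v, 2) for g, v in sensitivity.items()})

if max(sensitivity.values()) - min(sensitivity.values()) > 0.10:
    print("Bias flag: sensitivity gap exceeds 0.10; investigate before deployment.")
```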
