How AI Is Changing Patient Trust in Healthcare

Explore top LinkedIn content from expert professionals.

Summary

Artificial intelligence (AI) is rapidly transforming healthcare, with capabilities that enhance diagnostics, treatment planning, and patient communication. However, the integration of AI also raises questions about patient trust, particularly around transparency, safety, and the role of human oversight in healthcare decisions.

  • Communicate openly: Clearly explain when and how AI is being used in patient care to help build transparency and trust in the technology.
  • Ensure human oversight: Reinforce patient confidence by emphasizing that their healthcare decisions will include physician input alongside AI recommendations.
  • Involve patients actively: Develop AI tools with patient feedback to ensure that their concerns, values, and preferences are a part of the decision-making process.
Summarized by AI based on LinkedIn member posts
  • Benjamin Schwartz, MD, MBA

    SVP, Care Services & Strategy at Commons Clinic

    36,207 followers

    Turns out patients prefer ChatGPT's medical advice over human nurses', but only when they don't know it's coming from AI. This fascinating study of 253 TKA (total knee arthroplasty) patients investigated the use of LLMs to answer patients' questions following knee replacement surgery. Orthopedic nurses and ChatGPT were asked the same questions, and their answers were graded by surgeons. Grades were almost identical between the two. When the researchers surveyed patients about their preferences, an interesting paradox emerged: 54% of patients were more comfortable with ChatGPT's answers, compared with 34% for the nurses' responses. Yet when asked directly, 93% said they'd be uncertain about trusting AI for medical questions. Almost 66% responded that their comfort level in trusting the answer would change if they knew it was provided by ChatGPT. In conclusion, ChatGPT gave good answers that patients preferred, but patients still demonstrated a level of discomfort and distrust of AI. Is ignorance bliss? As AI becomes more ubiquitous in healthcare, will skepticism hurt adoption? How do we bridge that gap? The study highlights the potential of Generative AI and LLMs but also reveals the barriers that remain. As we move full steam ahead to introduce artificial intelligence tools in medicine, we might want to take a moment to consider the patient perspective.

  • Hansa Bhargava MD

    Chief Clinical Strategy @Healio | Former Medscape CMO | Top Voice LinkedIn | Speaker | Advisor | Podcast Host | Bridging clinical medicine, innovation and storytelling to impact health

    5,847 followers

    I will never forget the mom in the ER whose child was just diagnosed with Type 1 Diabetes. Tears rolled down her face as she processed this. ‘Will he be okay?’ she asked. ‘Yes. Trust us, we will make sure of it.’ She nodded. There are many skills that a health care professional must have to deliver the best care for their patient. The one that has helped me most as a physician is establishing trust, often with kind communication. From talking to the parents of the very sick 5-month-old who needed a spinal tap to rule out meningitis, to the teen who was in denial of her pregnancy and didn’t want to tell her mother, to diagnosing a 10-year-old with Type 1 diabetes and giving parents this news, the key ingredient is establishing trust.

    As AI and innovation explode in healthcare, what role does TRUST play for patient and clinician adoption? The best and most proven AI tools to improve health will not succeed if they do not have TRUST and relationship building from the clinicians and patients who use them. Do doctors and patients see AI in health similarly? There have been a number of surveys gauging attitudes toward AI. Recently, the Philips Future of Health Index (FHI) surveyed over 16,000 patients and 1,926 healthcare professionals online. Among the findings: although 63% of HCPs felt that AI could improve healthcare, only 48% of patients did. Patient age mattered: only a third of those over 45 felt AI could optimize health. But the issue of TRUST for patients was key:

    - Over 70% of patients would feel more comfortable about AI use in healthcare if their doctor or nurse gave them information about it.
    - 44% of patients would feel more comfortable with AI if reassured that an HCP had oversight.
    - Validated testing of a tool's safety and effectiveness made 35% of patients more comfortable.

    Clinicians, for their part, seem engaged: the AMA and Healio have both shown physicians to be interested in and using AI. In their respective surveys, 50% to 68% of doctors are using AI-enhanced tools, including transcription, search, and patient education. But one theme resonates across all three surveys: the desire for SAFETY. In the FHI survey, 85% of HCPs were concerned about the safety and legal risks of AI usage, with over half wanting clear guidelines on usage and limitations.

    At a time when patients are still waiting almost two months to see specialists and clinicians are still feeling overwhelmed by admin tasks, AI can certainly make a difference. But it seems that, at the end of the day, the simple matter of TRUST is what will make the difference in the ADOPTION of these tools. And that means helping clinicians and patients understand and become comfortable with the technologies, and ensuring innovations are safe and tested. Do you think TRUST is important in AI tool integration? #innovation #trust https://lnkd.in/es3tjwib

  • Dr. Kedar Mate

    Founder & CMO of Qualified Health-genAI for healthcare company | Faculty Weill Cornell Medicine | Former Prez/CEO at IHI | Co-Host "Turn On The Lights" Podcast | Snr Scholar Stanford | Continuous, never-ending learner!

    21,054 followers

    Not every patient trusts (or distrusts) healthcare AI equally. A recent global survey published in @JAMA Network Open found interesting variation across groups by gender, tech literacy, and health status. The survey of nearly 14,000 hospital patients in 43 countries sought to assess patient attitudes toward AI. Among the findings:

    - Most patients were positive about the general use of AI in healthcare (57.6%) and favored its increased use (62.9%).
    - Female patients were slightly less positive about the general use of AI (55.6%) than males (59.1%).
    - The worse their health, the more negative patients felt about AI.
    - Not surprisingly, the more patients knew about AI and tech in general, the more positive they were about AI in healthcare.

    There was also a clear lean toward explainable AI, with 70.2% of patients indicating a preference for AI with transparent decision-making processes. For comparison, a previous U.S. study found that only 42% of patients felt uncomfortable with highly accurate AI diagnoses that lacked explainability. Whatever the percentage, it’s clear that patients want to know what's going on "under the hood" with algorithms that support clinical decision-making. AI implementation must be transparent, with clear explanations of decisions for providers and patients. That is the only way we can build the necessary foundation of trust in the technology that will allow it to achieve its full potential. You can read the survey results here: https://lnkd.in/dJmkRtSw #HealthcareAI #HealthcarePolicy #GenAI #AIHealth

  • Would YOU trust an AI to make life-or-death decisions about your health, without a doctor involved? 🤯 In a sweeping global survey of 13,800+ patients across 43 countries, only 4.4% said yes. The message? Patients aren’t ready for doctor-less AI. And maybe they never will be. Patients want AI that enhances their care rather than replacing their physician. They want tools that are transparent, explainable, and guided by a human hand. Notably, patients in poorer health were less likely to trust AI. Why? It’s not just about tech literacy. Chronic illness can erode trust, reduce autonomy, and intensify fears of losing control. 🤝 The solution? A Patient-in-the-Loop model, where patients are not passive recipients but co-creators of the AI tools that serve them. Pair this with physician oversight, and we unlock the real potential of AI in healthcare: augmentation over automation. 👜 The takeaway for innovators and clinicians: Don’t chase autonomy. Chase alignment. Build AI that respects the patient, empowers the doctor, and keeps the human connection at the center. Because in healthcare, the true measure of progress isn't how autonomous our systems become, but how deeply they reflect the values, voices, and vulnerabilities of the people they serve. #ArtificialIntelligence #DigitalHealth #HealthTech #PatientCentricCare #AIinMedicine #HumanCenteredAI #ExplainableAI #EthicalAI #TrustInTechnology #FutureOfHealthcare
