Something in healthcare AI has been bothering me—and two posts this morning helped me name it.

🔹 One from Spencer Dorn MD, who warns that AI isn’t just making doctors more efficient—it’s turning us into quantified workers. Every action tracked. Every decision scored. A creeping loss of professional judgment, masked as “optimization.”

🔹 The other from Graham Walker, MD, who points out something equally dangerous: AI doesn’t need to be accurate—it just needs to sound like it is. A confident hallucination can earn trust. Especially in healthcare, where sounding like a doctor often works better than being one.

At first, these seem like different critiques: one about surveillance, the other about persuasion. But together, they reveal a bigger shift: AI is slowly changing who we trust in medicine—and why.

Not long ago, trust came from relationships. You knew your physician. They knew you. Trust was earned over time, through presence, context, and accountability.

Now? Patients are being coached to trust the interface. Doctors are being scored by the dashboard. And both are being trained to believe: if it’s fluent, fast, and confident—it must be right.

We’re drifting from relational medicine to performative medicine, where appearing reliable replaces actually being responsible. That’s what ties Spencer’s “quantified doctor” to Graham’s “confident AI.” In both cases, the human gets flattened—either into a metric, or a voice that can be mimicked.

This isn’t a Luddite argument. AI will help medicine in powerful ways. But we can’t ignore what’s being hollowed out in the process. Because once trust migrates from people to systems, it doesn’t just change the work. It changes the soul of the profession.

Full posts linked below, definitely recommend the read!

#HealthcareonLinkedin #healthcare #AI #AIinHealthcare
Why platform algorithms hurt healthcare trust
-
Tech speed doesn’t transform healthcare. It breaks it. But we keep pretending it does.

Over the past 20 years, I’ve seen tech-driven healthcare projects quietly unravel trust, safety, and clinician autonomy. Not because of malice, but because of a fundamental mismatch between tech culture and clinical reality. Tech moves fast. Healthcare moves carefully. Tech thrives on scale. Healthcare thrives on context. Tech automates. Clinicians reason.

Here’s where well-intentioned "tech-ification" breaks down, and why healthcare innovators need to rethink the playbook before the hidden costs become irreversible:

1. Speed vs. Clinical Judgment
In tech, speed is a feature. In healthcare, speed can be a liability. Digitising a workflow isn’t the same as improving care. Some of the worst clinical decisions I’ve witnessed came from well-meaning clinicians rushing to ‘click through’ an overloaded EHR designed for billing, not bedside care.

2. Automation vs. Accountability
In tech, automation removes friction. In healthcare, it can remove responsibility. I once sat in a clinical review where a patient was harmed because the care team followed a flawed algorithm recommendation. Nobody felt responsible; they were "just following the system."

3. Scale vs. Local Context
Tech thrives on playbooks you can copy-paste. But healthcare isn’t scalable like SaaS. What works in a high-volume urban ER fails completely in a rural clinic with no full-time specialists. I’ve seen national rollouts collapse because they ignored this truth.

4. Metrics vs. Meaning
Tech loves dashboards: DAUs, MAUs, CACs, conversion rates. Healthcare has metrics too, but the most important outcomes are often invisible to those dashboards:
- The conversation that helped a patient finally trust their diagnosis.
- The HCP who caught a subtle sign no algorithm could see.
- The ethical pause before an irreversible treatment.

Healthcare isn’t just a system to optimize. It’s a covenant between patients and those who care for them.

If you’re working at the intersection of tech and healthcare, ask yourself: are we solving for the patient, or for the pitch deck?

The future of healthcare innovation won’t be led by those who move the fastest, IMO. It’ll be led by those who earn (and deserve) the trust of the people whose jobs and lives are at stake.
-
A lesson from self-driving cars…

Healthcare's AI conversation remains dangerously incomplete. While organizations obsess over provider adoption, we're neglecting the foundational element that will determine success or failure: trust.

Joel Gordon, CMIO at UW Health, crystallized this at a Reuters conference, warning that a single high-profile AI error could devastate public confidence sector-wide. His point echoes decades of healthcare innovation: trust isn't given—it's earned through deliberate action.

History and other industries can be instructive here. I was hoping by now we’d have fully autonomous self-driving vehicles (so my kids wouldn’t need a real driver’s license!), but early high-profile accidents and driver fatalities damaged consumer confidence. And while the field is picking up steam again, we lost some good years while public trust was regained. We cannot repeat this mistake with healthcare AI—it’s just too valuable and can do so much good for our patients, workforce, and our deeply inefficient health systems.

As I've argued in my prior work, trust and humanity must anchor care delivery. AI that undermines these foundations will fail regardless of technical brilliance. Healthcare already battles trust deficits—vaccine hesitancy, treatment non-adherence—that cost lives and resources. AI without governance risks exponentially amplifying these challenges.

We need systematic approaches addressing three areas:

Transparency in AI decision-making, with clear explanations of algorithmic conclusions. WHO principles emphasize AI must serve public benefit, requiring accountability mechanisms that patients and providers understand.

Equity-centered deployment that addresses rather than exacerbates disparities. There is no quality in healthcare without equity—a principle critical to AI deployment at scale.

Proactive error management that treats mistakes as learning opportunities, not failures to hide. Improvement science teaches that error transparency builds trust when handled appropriately.

As developers and entrepreneurs, we need to treat trust-building as seriously as technical validation. The question isn't whether healthcare AI will face its first major error—it's whether we'll have sufficient trust infrastructure to survive and learn from that inevitable moment. Organizations investing now in transparent governance will capture AI's potential. Those that don't risk the fate of other promising innovations that failed to earn public confidence.

#Trust #HealthcareAI #AIAdoption #HealthTech #GenerativeAI #AIMedicine
https://lnkd.in/eEnVguju
-
AI Misdiagnosed Him—The System Said He Was Fine. But the symptoms told a different story.

He was Black, 52, experiencing chest pain. The AI model rejected further testing. Later tests showed a serious cardiac issue missed earlier.

A 4.5% lower testing rate. For Black patients. From algorithms trusted to help.

This isn’t just a bug. It’s a warning bell. It’s what happens when historical bias gets written into code. And deployed across hospitals, unseen.

AI in healthcare is scaling fast. But trust doesn't scale by default. Transparency must be earned. Fairness must be engineered.

Operations leaders can't rely on model accuracy alone. Bias mitigation isn't optional—it's operationally critical. Have you asked your vendors how their models perform across race, gender, and geography? And can they prove it?

What’s one step your team has taken to ensure fairness in AI-backed decisions?

CellStrat #bias #cellbot #cellassist #automation #trust #race #healthcare #healthcareAI #AI
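One practical starting point for the vendor question above is a subgroup audit: reporting the model's recommendation rate and sensitivity separately for each demographic group, not just a single overall accuracy. A minimal sketch of what that could look like, with hypothetical column names and purely illustrative logic:

```python
# Minimal subgroup performance audit -- hypothetical column names, illustrative only.
import pandas as pd


def audit_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """Expects columns: model_recommended_test (0/1), had_cardiac_event (0/1), plus group_col."""
    rows = []
    for group, g in df.groupby(group_col):
        positives = g[g["had_cardiac_event"] == 1]
        rows.append({
            group_col: group,
            "n": len(g),
            # How often did the model recommend further testing for this group?
            "testing_rate": g["model_recommended_test"].mean(),
            # Of patients who truly had a cardiac event, how many did the model flag?
            "sensitivity": positives["model_recommended_test"].mean() if len(positives) else float("nan"),
        })
    return pd.DataFrame(rows)


# Usage: print(audit_by_group(vendor_validation_df, group_col="race"))
```

A meaningful gap between groups in testing_rate or sensitivity, like the 4.5% gap described above, is exactly the signal to investigate before (and after) deployment.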
-
It just became more difficult to trust LLMs in healthcare.

A recent study showed a shocking fact: it’s easier to corrupt LLMs than previously expected. Just a small number of corrupted samples can sabotage an LLM and manipulate its output, ultimately risking patient care when the model is used in healthcare.

We’ve often gotten the impression that more data is better. But bigger models trained on more data are equally vulnerable to small amounts of poisoned data. LLMs can be exploited during training and fine-tuning: even tiny amounts of poisoned data can implant hidden backdoors that trigger harmful AI behaviors. As few as 250 poisoned samples can compromise a model, independent of model and dataset size.

Data poisoning attacks, i.e. deliberate corruption of AI training data, can:
- Sabotage your AI models
- Cause misdiagnoses or wrong treatments
- Produce harmful recommendations
- Risk patient safety
- Disrupt workflows
- Misallocate resources
- Spread misinformation

So, the new study from Anthropic should not be ignored, especially when operating in healthcare. Here are some of my reflections:
1. Data poisoning risks are being ignored by many healthcare AI projects
2. AI model size does not guarantee immunity from poisoning
3. Patient safety consequences can be severe but subtle
4. Security investments often miss data integrity aspects
5. Regulatory frameworks lag behind new AI vulnerabilities

AI systems influencing millions of patients depend on accurate training data. We cannot accept this risk when using LLMs, or other AI tools, in patient care. We need to:
1. Implement strict data validation pipelines
2. Develop continuous AI model monitoring systems
3. Improve staff training on AI threat awareness
4. Collaborate cross-sector on AI governance
5. Invest in research on AI attack mitigation

It will be increasingly important to ensure data quality for LLMs, especially in fields where patient outcomes could be affected. If patients and healthcare professionals don’t trust the data or the outcome, the tool will die. Understanding data poisoning will be critical for healthcare leaders who want to implement AI safely.

How are you preparing for a safe implementation?
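To make the first recommendation (strict data validation pipelines) a little more concrete, here is a minimal sketch of one pre-training screening step: flagging fine-tuning samples that contain suspicious trigger strings or appear as heavy near-duplicates. The patterns and thresholds are hypothetical and illustrative only; this is not the methodology of the Anthropic study, just one layer that would sit alongside provenance checks and continuous monitoring (points 2-5 above).

```python
# Illustrative pre-training data screening step -- hypothetical patterns and thresholds.
import hashlib
import re
from collections import Counter

SUSPICIOUS_PATTERNS = [
    r"ignore previous instructions",  # hypothetical examples of backdoor-style triggers
    r"<SUDO>",
]


def fingerprint(text: str) -> str:
    """Cheap near-duplicate fingerprint: hash of lowercased, whitespace-collapsed text."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()


def screen_samples(samples: list[str], max_duplicates: int = 3) -> list[str]:
    """Return samples that pass basic integrity checks; report the rest for manual review."""
    counts = Counter(fingerprint(s) for s in samples)
    passed, flagged = [], []
    for text in samples:
        is_heavy_duplicate = counts[fingerprint(text)] > max_duplicates
        has_trigger = any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)
        (flagged if (is_heavy_duplicate or has_trigger) else passed).append(text)
    print(f"{len(flagged)} of {len(samples)} samples flagged for manual review")
    return passed


# Usage: clean_texts = screen_samples(raw_finetuning_texts) before any fine-tuning run.
```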
-
Most AI health apps in India die because they’re over-engineered and lack context. I’ve seen hundreds of pitch decks, and 90% won’t survive past downloads. Not because AI isn’t powerful, but because founders chase the wrong use case. Sharing notes so we can build better.

1) Most copy US models built for digitized EMRs and dump them into India, where 90% of records are on paper. AI outside the doctor-patient flow is useless. Clinicians are burnt out, and AI must make them faster, not force them to learn yet another piece of software.

2) It’s a foolish assumption to think patients will log in daily. In India, 80% of patients are episodic: they show up sick and vanish when better. Retention needs habit plus urgency, and you can’t build habit if your app doesn’t solve a daily pain point.

3) Everyone boasts “Our AI predicts disease before symptoms.” That’s not a feature, it’s a liability. In healthcare, trust is the product; one wrong prediction and you’ve lost it for good. Especially in a low-trust market, this backfires.

4) AI isn’t free, but most apps position it as if it were. In a price-sensitive market like India, no one will pay 500 bucks a month for “insights” when a real GP costs 500 a visit. AI can augment, but it can’t replace human trust and follow-up.

If you really want to solve for healthcare using AI, integrate into existing healthcare touchpoints. Make AI invisible and just make the care feel faster, cheaper, more accurate. Build trust as your moat.
-
🩺 “The scan looks normal,” the AI system says. The doctor hesitates. Will the clinician trust the algorithm? And perhaps most importantly—should they?

We are entering an era where artificial intelligence will be woven into the fabric of healthcare decisions, from triaging patients to predicting disease progression. The potential is breathtaking: earlier diagnoses, more efficient care, personalized treatment plans. But so are the risks: opaque decision-making, inequitable outcomes, and the erosion of the sacred trust between patient and provider.

The challenge is no longer just about building better AI. It’s about building better ways to decide if—and how—we should use it. That’s where the FAIR-AI framework comes in. Developed through literature reviews, stakeholder interviews, and expert workshops, it offers healthcare systems a practical, repeatable, and transparent process to:
👍 Assess risk before implementation, distinguishing low, moderate, and high-stakes tools.
👍 Engage diverse voices, including patients, to evaluate equity, ethics, and usefulness.
👍 Monitor continuously, ensuring tools stay aligned with their intended use and don’t drift into harm.
👍 Foster transparency, with plain-language “AI labels” that demystify how tools work.

FAIR-AI treats governance not as a barrier to innovation, but as the foundation for trust—recognizing that in medicine, the measure of success isn’t how quickly we adopt technology, but how wisely we do it.

Because at the end of the day, healthcare isn’t about technology. It’s about people. And people deserve both the best we can build—and the safeguards to use it well.

#ResponsibleAI #HealthcareInnovation #DigitalHealth #PatientSafety #TrustInAI #HealthEquity #EthicsInAI #FAIRAI #AIGovernance #HealthTech
-
Last week at an AI healthcare summit, a Fortune 500 CTO admitted something disturbing: "We spent $7M on an enterprise AI system that sits unused. Nobody trusts it." And this is not the first time I have come across such cases.

Having built an AI healthcare company in 2018 (before most people had even heard of transformers), I've witnessed this pattern from both sides: as a builder and as an advisor. The reality is that trust is the real bottleneck to AI adoption (not capability). I learned this firsthand when deploying AI in highly regulated healthcare environments. I have watched brilliant technical teams optimize models to 99% accuracy while ignoring the fundamental human question: "Why should I believe what this system tells me?"

This creates a fascinating paradox that affects enterprises and everyday users alike: users want AI that works autonomously (requiring less human input) yet remains interpretable (providing more human understanding). This tension is precisely where UI design becomes the determining factor in market success.

Take Anthropic's Claude, for example. Its computer use feature reveals reasoning steps anyone can follow. It changes the experience from "AI did something" to "AI did something, and here's why" – making YOU more powerful without requiring technical expertise. The business impact speaks for itself: their enterprise adoption reportedly doubled after adding this feature.

The pattern repeats across every successful AI product I have analyzed. Adept's command-bar overlay shows actions in real time as it navigates your screen. This "show your work" approach cut rework by 75%, according to their case studies. These are not random enterprise solutions. They demonstrate how AI can 10x YOUR productivity today when designed with human understanding in mind. They prove a fundamental truth about human psychology: users tolerate occasional AI mistakes if they can see WHY the mistake happened. What they won't tolerate is blind faith.

Here's what nobody tells you about designing UI for AI that people actually adopt:
• Make reasoning visible without overwhelming. Surface the logic, not just the answer
• Signal confidence levels honestly. Users trust systems more when they admit uncertainty
• Build correction loops that let people fix AI mistakes in seconds, not minutes
• Include preview modes so users can verify before committing

This is the sweet spot.
—
The market is flooded with capable AI. The shortage is in trusted AI that ordinary people can leverage effectively. The real moat is designing interfaces that earn user trust by clearly explaining AI's reasoning without needing technical expertise. The companies that solve for trust through thoughtful UI design will define the next wave of AI.

Follow me, Nicola, for more insights on AI and how you can use it to make your life 10x better without requiring technical expertise.
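As a rough illustration of the last two bullets above (honest confidence signaling and a preview mode before committing), here is a minimal sketch of a preview-and-confirm loop. The structure, names, and thresholds are hypothetical and not tied to any particular product:

```python
# Minimal preview-and-confirm loop for AI suggestions -- names and thresholds hypothetical.
from dataclasses import dataclass


@dataclass
class AISuggestion:
    action: str        # what the AI proposes to do
    reasoning: str     # the "show your work" explanation surfaced to the user
    confidence: float  # 0.0-1.0, displayed honestly rather than hidden


def present_for_approval(suggestion: AISuggestion) -> bool:
    """Show reasoning and confidence, then let the user confirm or reject before anything runs."""
    band = "high" if suggestion.confidence >= 0.8 else "low, please double-check"
    print(f"Proposed action : {suggestion.action}")
    print(f"Why             : {suggestion.reasoning}")
    print(f"Confidence      : {suggestion.confidence:.0%} ({band})")
    return input("Apply this change? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    suggestion = AISuggestion(
        action="Draft a follow-up message to the patient",
        reasoning="The last visit note mentions pending lab results and no scheduled follow-up.",
        confidence=0.72,
    )
    if not present_for_approval(suggestion):
        print("Suggestion rejected; nothing was changed.")
```

The design choice this encodes is the one the post argues for: the user sees the answer, the reasoning, and an honest confidence level before anything is applied, so mistakes are caught in the preview step rather than discovered later.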
-
My new Forbes article: Is TikTok making us sicker?

The most expensive thing in U.S. healthcare right now isn’t a new drug or device. It’s bad information. AI has supercharged misinformation, platforms still reward novelty over accuracy, and the bill shows up as ER visits, delayed care, counterfeit meds, and resurging diseases we’d already solved.

The fixes aren’t mysterious: prebunk early, vet before sharing, demand transparency, enforce against obvious scams. Families, employers, platforms, and health systems all have a role to play.

Medical misinformation isn’t a side effect of the internet. It’s a business model. And unless incentives change, we’ll keep paying the price in dollars, outcomes, and lives.

#Healthcare #AI #Misinformation #DigitalHealth #HealthEquity #PublicHealth #Infodemic
https://lnkd.in/dYrBgUa4
-
No Trust, No Transformation. Period.

AI is becoming ready for the healthcare frontlines. But without trust, it stays in the demo room.

At every conference, from HIMSS and HLTH Inc. to the Society for Imaging Informatics in Medicine (SIIM) and even yesterday’s HLTH Europe Transformation Summit, the tech dazzles. AI, cloud, interoperability... all are ready to take the stage. And yet, one thing lingers in every room: TRUST. We celebrate the breakthroughs and innovation, but quietly wonder: will clinicians actually adopt this? Will patients accept it? It’s unmistakable: if we don’t solve the trust gap, digital tools stay stuck in the demo stage instead of becoming adopted solutions!

This World Economic Forum & Boston Consulting Group (BCG) white paper was mentioned yesterday at the health transformation summit by Ben Horner and was heavily discussed during our round-table conversation at the summit. It lays out a bold vision for building trust in health AI, and it couldn’t come at a more urgent time. Healthcare systems are under pressure, and AI offers real promise. But without trust, that promise risks falling flat.

Here are some of the key points, summarized by AI, from the report “Earning Trust for AI in Health”:
• Today’s regulatory frameworks are outdated: they were built for static devices, not evolving AI systems.
• AI governance must evolve: through regulatory sandboxes, life-cycle monitoring, and post-market surveillance.
• Technical literacy is key: many health leaders don’t fully understand AI’s risks or capabilities. That must change.
• Public–private partnerships are essential: to co-develop guidelines, test frameworks, and ensure real-world impact.
• Global coordination is lacking: diverging regulations risk limiting access and innovation, especially in low-resource settings.

Why it matters: AI will not transform healthcare unless we embed trust, transparency, and accountability into every layer, from data to deployment. That means clinicians and HCPs need upskilling, regulators need new tools, and innovators must be part of the solution, not just the source of disruption. The real innovation? Building systems that are as dynamic as the technology itself.

Enjoy the read and let me know your thoughts…