Clinicians don’t trust your HealthTech product. And they’re right not to. You think you’re selling innovation. But they’re seeing liability. When a doctor uses your product, they’re not just clicking a button. They’re staking their license, reputation, and someone’s life on a tool they didn’t build, made by someone who’s never stepped inside an operating theatre. This is the Clinical Trust Chasm. Most HealthTech companies never cross it. They win pilots, not trust. Investors, not integration. Press, not protocols.
Trust in medicine isn’t earned with features. It’s earned with consequences. Ask any surgeon why they use a specific tool. It’s not because it’s cutting-edge. It’s because it’s predictable under pressure. They’ve seen it fail, and seen what happens next. They know its blind spots. They know when not to use it. You can’t shortcut that with UI polish and a few endorsements. If you want your HealthTech product to be adopted, not just trialled, you have to reverse the trust equation. Here’s how I’ve seen it work:
- Put the clinician in control. Stop “automating decisions”; start augmenting judgement. Build fail-safes, override paths, and audit trails. Trust starts when you acknowledge what you don’t know.
- Design for blame. Assume someone will get hurt using your product. Will they say, “We knew this tool. We trusted it. We stood by it,” or, “They promised it would work”?
- Over-communicate uncertainty. No one has ever said, “That medical device was too transparent.” Show the confidence intervals. Flag the edge cases. Clinicians are trained to work with ambiguity, just not surprise.
Many HealthTech founders think clinicians are “resistant to change”. In my opinion, they’re not. They’re allergic to risk they didn’t consent to. They don’t need to understand your model. They need to understand how it breaks, and what happens when it does. Build for that moment. That’s where real adoption begins.
Engineering trust in medical device software
Summary
Engineering trust in medical device software means designing and maintaining technology that doctors and patients feel confident using, knowing it will perform safely and transparently. Trust is built by minimizing risks, ensuring the software is clear about its decisions, and supporting healthcare providers in making informed choices rather than automating them.
- Prioritize transparency: Clearly communicate how the device software makes decisions and highlight any limitations, so users know what to expect and how to respond.
- Maintain rigorous risk tracking: Use integrated, dynamic risk management systems that keep up with design changes and allow for early detection and correction of potential problems.
- Empower user judgment: Design software features that support clinicians’ expertise, such as providing override options and detailed audit trails, rather than taking critical decisions out of their hands.
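One way to picture that last point before the posts below: a recommendation object that keeps the clinician in control, where the model’s output carries its uncertainty, the clinician can accept or override it, and every action lands in an audit trail. This is a minimal, hypothetical Python sketch; all class, field, and identifier names are assumptions for illustration, not from any of the posts.

```python
# Minimal sketch: an AI recommendation a clinician can accept or override,
# with uncertainty surfaced and every action recorded in an audit trail.
# All names and fields are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str
    confidence_interval: tuple[float, float]   # shown to the user, not hidden
    caveats: list[str]                         # known edge cases / limitations

@dataclass
class Decision:
    recommendation: Recommendation
    audit_trail: list[dict] = field(default_factory=list)

    def _log(self, event: str, **details):
        # Timestamped record of who did what, and why.
        self.audit_trail.append({"at": datetime.now(timezone.utc).isoformat(),
                                 "event": event, **details})

    def accept(self, clinician_id: str):
        self._log("accepted", clinician=clinician_id)

    def override(self, clinician_id: str, alternative: str, reason: str):
        # The clinician's judgement wins; the system records the rationale.
        self._log("overridden", clinician=clinician_id,
                  alternative=alternative, reason=reason)

rec = Recommendation("pt-001", "start drug X at 5 mg",
                     confidence_interval=(0.62, 0.81),
                     caveats=["not validated for eGFR < 30"])
decision = Decision(rec)
decision.override("dr-smith", alternative="start drug X at 2.5 mg",
                  reason="renal impairment; outside validated range")
print(decision.audit_trail)
```

The point of the pattern is that the override path is a first-class feature, not an exception: the system is designed to be contradicted and to remember why.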
-
Complex, software-intensive medical devices need many design iterations during development and frequent upgrades after product launch. How can rigorous risk management keep up with all those changes? If risk assessments are managed in documents (spreadsheets), it will be very difficult, and in some cases impossible, to manually keep all the risk information and traceability up to date. Instead, a platform-based approach is needed, where all the risk information and key design controls information are managed together. This is an approach I call “Dynamic Risk Management”: efficient risk assessment and tracking of risk controls in an environment of frequent design changes.
The most common approach I’ve seen to risk management (document-based) is quite static: any change to the product design requires lots of editing of the risk documents. Product teams under time pressure are then tempted to wait until the product design stops changing before compiling the risk analysis documents, with all the drawbacks of that approach. Don’t wait until the end of product development to perform risk analysis!
In this article, “Dynamic Risk Management for Software-Enabled Medical Devices”, I explain:
🔷 The shortcomings of the document-based approach to risk management: why spreadsheets work well initially but not throughout the product life cycle
🔷 The basic mechanics of the platform-based approach, with dedicated software tools (“The Hub”) to manage risks and risk controls
🔷 Integration of risk management with design controls in The Hub
🔷 Documentation automation to revise documents rapidly and efficiently
https://lnkd.in/eRr9sVEh
This is the fourth article in a series I co-authored with Monik Sheth, founder of Ultralight Labs (now part of Greenlight Guru). Development of complex, software-intensive medical devices requires iterative design, and iterative design requires dynamic risk management.
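The article’s Hub is a dedicated tool, but the core idea (keeping risk items and design controls linked so that a design change immediately flags the affected risk assessments) can be sketched in a few lines. The Python below is a hypothetical illustration of that traceability mechanism, not the Hub’s actual data model; every class and field name is an assumption.

```python
# Hypothetical sketch of dynamic risk traceability: each risk item records which
# version of each linked design control it was last reviewed against, so a design
# change surfaces the risk assessments that need re-review.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DesignControl:
    control_id: str
    description: str
    version: int = 1  # bumped on every design change

@dataclass
class RiskItem:
    risk_id: str
    hazard: str
    severity: int      # e.g. 1 (negligible) .. 5 (catastrophic)
    probability: int   # e.g. 1 (improbable) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)            # linked control IDs
    reviewed_against: dict[str, int] = field(default_factory=dict)  # control_id -> version reviewed

    def stale_links(self, controls: dict[str, DesignControl]) -> list[str]:
        """Return mitigation links whose design control changed since the last review."""
        stale = []
        for cid in self.mitigations:
            control = controls.get(cid)
            if control is None or control.version != self.reviewed_against.get(cid):
                stale.append(cid)
        return stale

# Example: a design iteration bumps a control's version and the linked risk flags it.
controls = {"DC-12": DesignControl("DC-12", "Dose limit check in infusion algorithm")}
risk = RiskItem("R-07", "Over-infusion due to software fault", severity=5, probability=2,
                mitigations=["DC-12"], reviewed_against={"DC-12": 1})

controls["DC-12"].version += 1        # design change
print(risk.stale_links(controls))      # ['DC-12'] -> this risk assessment needs re-review
```

In a spreadsheet this link exists only in the reviewer’s memory; making it a queryable relationship is what lets risk analysis keep pace with iteration.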
-
BREAKING! The FDA just released a draft guidance, titled Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, that aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices. Even in draft form, the guidance matters: it gives more detailed, AI-specific instructions on what regulators expect in marketing submissions and on how developers can control AI bias. What’s new in it?
1) It requests clear explanations of how and why AI is used within the device.
2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model’s outputs and scope (e.g., whether further tests or clinical judgment are needed).
3) It encourages sponsors to follow standard risk-management procedures and stresses that misunderstanding or incorrect interpretation of the AI’s output is a major risk factor.
4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics).
5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions.
6) It recognizes that AI performance may drift (e.g., as clinical practice changes), so sponsors are advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
8) It proposes public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) to foster user trust and better understanding of the model’s capabilities and limits.
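Points 4 and 5 are concrete enough to sketch. Below is a minimal, hypothetical Python example of subgroup performance reporting: sensitivity, specificity, PPV, and NPV computed per demographic subgroup so that a performance gap, a possible sign of AI bias, shows up early. The data and function names are illustrative assumptions, not anything prescribed by the draft guidance.

```python
# Hypothetical sketch: per-subgroup sensitivity / specificity / PPV / NPV
# on hold-out predictions, to surface performance gaps between subgroups.
from collections import Counter

def confusion_metrics(y_true, y_pred):
    """Binary classification metrics (1 = disease present)."""
    c = Counter(zip(y_true, y_pred))
    tp, fn = c[(1, 1)], c[(1, 0)]
    tn, fp = c[(0, 0)], c[(0, 1)]
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

def metrics_by_subgroup(records):
    """records: iterable of (subgroup, y_true, y_pred); returns metrics per subgroup."""
    groups = {}
    for subgroup, y_true, y_pred in records:
        groups.setdefault(subgroup, ([], []))
        groups[subgroup][0].append(y_true)
        groups[subgroup][1].append(y_pred)
    return {g: confusion_metrics(t, p) for g, (t, p) in groups.items()}

# Made-up hold-out predictions tagged with a demographic subgroup.
records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
           ("B", 1, 1), ("B", 0, 1), ("B", 1, 1)]
for group, m in metrics_by_subgroup(records).items():
    print(group, {k: round(v, 2) for k, v in m.items()})
```

A report like this, run on data matching the intended clinical conditions, is one way to make the subgroup analysis the guidance asks for routine rather than a one-off exercise.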
-
Poor risk analysis costs you everything. It doesn’t take much to break trust in MedTech. One missed risk. One design flaw. One weak mitigation plan. And suddenly, your product, your credibility, even patient safety are on the line. When risk management fails, it’s not just a technical issue. It’s a leadership gap. The good news? You don’t need to predict every problem. You need systems that detect them early, before they turn into something bigger. These 5 tools help you do just that:
1. ISO 14971 ↳ Covers everything from risk analysis to post-market monitoring.
2. FMEA ↳ Finds weak spots early, before they turn into real failures.
3. Fault Tree Analysis ↳ Helps you trace problems back to the real cause.
4. Ishikawa Diagram ↳ A visual tool to see all possible risk factors at once.
5. HAZOP Study ↳ Well suited to spotting hidden risks in complex processes.
Here’s the bottom line: risk management is everyone’s responsibility. It’s how you build trust with your team, your partners, and your regulators. Because when you manage risk well, you don’t just protect your product. You protect patients. ♻️ Find this valuable? Repost for your network. Follow Bastian Krapinger-Ruether for expert insights on MedTech compliance and QM.
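To make the second tool less abstract, here is a minimal, hypothetical FMEA-style sketch in Python: failure modes are scored for severity, occurrence, and detection, then ranked by their risk priority number so the weakest spots are reviewed first. The scales, the example failure modes, and the action threshold are assumptions for illustration, not values from the post or from ISO 14971.

```python
# Hypothetical FMEA-style sketch: rank failure modes by risk priority number (RPN).
from dataclasses import dataclass

@dataclass
class FailureMode:
    item: str
    failure: str
    severity: int    # 1 (no harm) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: higher means investigate sooner."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Alarm module", "Audible alarm fails silently", 9, 3, 7),
    FailureMode("Dose calculator", "Rounding error in weight-based dose", 8, 4, 3),
    FailureMode("UI", "Unit label truncated on small screens", 5, 6, 2),
]

# Review the worst offenders first; flag anything above an (assumed) action threshold.
ACTION_THRESHOLD = 100
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    flag = "ACT" if fm.rpn >= ACTION_THRESHOLD else "monitor"
    print(f"{fm.rpn:>4}  {flag:<8} {fm.item}: {fm.failure}")
```

The ranking is only as good as the scoring discipline behind it, which is why the post frames risk management as a leadership responsibility rather than a paperwork exercise.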
-
The ugly truth about AI in healthcare: it’s not trustworthy enough. People on LinkedIn tell you to “make your AI trustworthy”. Almost nobody tells you how. If they won’t, I will… So, here’s how to build trust in European medical AI in 8 simple steps:
Step 1 - Implement Interpretable Models
Example ↳ Compare the interpretability of different models, prioritizing those that provide clear explanations for their decisions.
Step 2 - Generate Explanatory Visualizations
Example ↳ Show complex medical data in before-and-after visuals, highlighting key changes and AI insights.
Step 3 - Highlight Relevant Features
Example ↳ Use heatmaps to highlight key decision-making factors in medical imaging analysis.
Step 4 - Create an Explanatory Interface
Example ↳ Design an interface with clear explanations of AI decisions, allowing users to explore the reasoning behind, say, a diagnosis.
Step 5 - Align With Ethics
Example ↳ Follow an ethical guidelines checklist for AI, ensuring compliance with principles like human autonomy and fairness.
Step 6 - Document for Compliance
Example ↳ Present your interpretability approach in a regulatory submission template, detailing how your AI meets (upcoming) EU standards.
Step 7 - Speak the Language of Clarity
Example ↳ Make your AI explanations simple and transparent, using language that both medical professionals and patients can understand.
Step 8 - Ensure Privacy and Data Governance
Example ↳ Implement robust data protection measures and clear governance policies, adhering to GDPR and sector-specific regulations.
Remember: interpretability, ethics, and data protection are the keys to trust. Speak the language of clarity in your AI, and healthcare professionals will embrace it with confidence. What are you doing to make your AI trustworthy?
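Step 1 is the easiest to picture with code. The sketch below, assuming scikit-learn and made-up tabular clinical features, shows one interpretable choice: a logistic regression whose per-feature contributions can be surfaced directly to a clinician alongside the risk score. It illustrates the idea only; it is not a recommended stack and not from the post.

```python
# Minimal sketch: an interpretable model whose per-feature contributions can be
# shown to clinicians. Assumes scikit-learn; feature names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "systolic_bp", "creatinine", "prior_events"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))   # stand-in for standardized clinical features
y = (X @ np.array([0.8, 0.5, 1.2, 0.9]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x_row):
    """Per-feature contribution to the log-odds for one patient (coefficient * value)."""
    contributions = model.coef_[0] * x_row
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], round(float(contributions[i]), 3)) for i in order]

print("risk score:", round(float(model.predict_proba(X[:1])[0, 1]), 3))
print("drivers:", explain(X[0]))
```

A contribution list like this is the raw material for Steps 2 to 4: the same numbers can feed visualizations, highlighted features, and an interface that lets the user drill into the reasoning.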
-
When smart medical devices need to explain themselves 🔬 How do we bridge the gap between the "black box" nature of AI systems and the transparency requirements of European regulations? A new study from researchers at the University of Zurich and University of Namur addresses this tension by developing a systematic methodology for matching explainable AI (XAI) tools with the specific requirements of GDPR, the AI Act, and Medical Device Regulation. Medical AI represents one of the largest investment areas globally, nearly $6 billion according to Stanford's 2023 AI Index. Yet as these systems evolve from simple diagnostic aids to sophisticated closed-loop devices that make autonomous treatment decisions, we're entering uncharted territory where algorithmic opacity meets life-or-death consequences. The researchers created a framework that categorizes smart biomedical devices by their control mechanisms: (i) open-loop systems where humans interpret data; (ii) closed-loop systems that act autonomously; and (iii) semi-closed-loop systems that blend human and machine decision-making. Each category triggers different regulatory requirements for explanation. The study reveals 11 distinct "legal explanatory goals" that EU regulations pursue - from understanding system risks to interpreting specific outputs. A closed-loop epilepsy device that automatically triggers brain stimulation faces the full weight of GDPR's "right to explanation," while semi-closed-loop spinal cord stimulators have different transparency requirements. The research acknowledges a nuanced reality often overlooked in discussions of AI regulation: simply applying an XAI algorithm doesn't guarantee meaningful explanation or regulatory compliance. The effectiveness depends on proper implementation, appropriate audience consideration, and recognition that most existing XAI methods rely on imperfect heuristics. As we embed AI deeper into healthcare, we're asking fundamental questions about trust, autonomy, and the nature of informed consent when the systems making recommendations are too complex for humans to fully comprehend. The methodology provides a practical framework for developers navigating the complex intersection of innovation and regulation. It also reveals the inherent tensions: The most transparent systems aren't always the most accurate, and the drive for explainability might sometimes conflict with clinical effectiveness. This research suggests we need adaptive approaches that can evolve with both technological advancement and regulatory development. The framework they propose is designed to accommodate future XAI methods and emerging legal requirements - recognizing that this intersection of AI and healthcare regulation will continue to evolve. Link to the study in the first comment.
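The study’s actual mapping of device categories to legal explanatory goals is far richer than anything that fits here, but its shape can be sketched: each control-loop category keys to the explanations it triggers. In the hypothetical Python below, the category names follow the post, while the goal lists are illustrative placeholders only, not the paper’s real mapping.

```python
# Hypothetical sketch of the framework's shape: device control categories mapped to
# the kinds of explanatory goals they trigger. Goal lists are illustrative placeholders.
from __future__ import annotations
from enum import Enum

class ControlLoop(Enum):
    OPEN = "open-loop"            # humans interpret the data
    SEMI_CLOSED = "semi-closed"   # blended human/machine decision-making
    CLOSED = "closed-loop"        # device acts autonomously

EXPLANATORY_GOALS = {
    ControlLoop.OPEN: ["understand system risks and limitations"],
    ControlLoop.SEMI_CLOSED: ["understand system risks and limitations",
                              "interpret specific outputs before acting on them"],
    ControlLoop.CLOSED: ["understand system risks and limitations",
                         "interpret specific outputs",
                         "explain automated decisions to the affected patient (GDPR)"],
}

def required_explanations(device_loop: ControlLoop) -> list[str]:
    """Look up the explanatory goals a device category triggers."""
    return EXPLANATORY_GOALS[device_loop]

# e.g. a closed-loop stimulator carries the heaviest explanation burden
print(required_explanations(ControlLoop.CLOSED))
```

The value of making the mapping explicit is the same one the researchers point to: it turns “be transparent” from a slogan into a checklist that can evolve as both XAI methods and the regulations change.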