Are you curious about how to create safe and effective artificial intelligence and machine learning (AI/ML) devices? Let's demystify the essential guiding principles outlined by the U.S. FDA, Health Canada | Santé Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) for Good Machine Learning Practice (GMLP). These principles aim to ensure the development of safe, effective, and high-quality medical devices.

1. Multi-Disciplinary Expertise Drives Success: Throughout the lifecycle of a product, it's crucial to integrate expertise from diverse fields. This ensures a deep understanding of how a model fits into clinical workflows, its benefits, and potential patient risks.

2. Prioritize Good Software Engineering and Security Practices: The foundation of model design lies in solid software engineering practices, coupled with robust data quality assurance, data management, and cybersecurity measures.

3. Representative Data is Key: When collecting clinical study data, it's imperative to ensure it accurately represents the intended patient population. This means capturing relevant characteristics and ensuring an adequate sample size for meaningful insights.

4. Independence of Training and Test Data: To prevent bias, training and test datasets should be independent of one another. While the FDA permits multiple uses of training data, each use should be justified to avoid inadvertently training on test data. (A minimal code sketch follows this post.)

5. Utilize the Best Available Reference Datasets: Developing reference datasets based on accepted methods ensures the collection of clinically relevant and well-characterized data, with their limitations understood.

6. Tailor Model Design to Data and Intended Use: Model design should align with the available data and the device's intended use. Human factors and interpretability should be prioritized, with the focus on the performance of the human-AI team.

7. Test Under Clinically Relevant Conditions: Rigorous testing plans should assess device performance under conditions reflecting real-world usage, independent of the training data.

8. Provide Clear Information to Users: Users should have access to clear, relevant information tailored to their needs, including the product's intended use, performance characteristics, data insights, limitations, and how to interpret the user interface.

9. Monitor Deployed Models for Performance: Deployed models should be continuously monitored in real-world scenarios to ensure ongoing safety and performance. Managing risks such as overfitting, bias, and dataset drift is crucial for sustained efficacy.

These principles provide a robust framework for the development of AI/ML-driven medical devices, emphasizing safety, efficacy, and transparency. For further insights, dive into the full paper from the FDA, MHRA, and Health Canada.

#AI #MachineLearning #HealthTech #MedicalDevices #FDA #MHRA #HealthCanada
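To make principle 4 concrete, here is a minimal sketch of a patient-level train/test split using scikit-learn's GroupShuffleSplit. The arrays, feature shapes, and patient counts are synthetic placeholders for illustration, not drawn from any regulatory example:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Synthetic stand-in data: each row is one study/image, and several
# rows can belong to the same patient.
rng = np.random.default_rng(0)
X = rng.random((1000, 16))                     # feature vectors
y = rng.integers(0, 2, size=1000)              # binary labels
patient_ids = rng.integers(0, 250, size=1000)  # ~250 distinct patients

# Split at the patient level so no patient contributes to both sets.
# A naive row-level split could leak patient-specific signal into the
# test set and overstate real-world performance.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(gss.split(X, y, groups=patient_ids))

# Sanity check: the two patient sets must be disjoint.
assert set(patient_ids[train_idx]).isdisjoint(set(patient_ids[test_idx]))
```

Splitting by patient rather than by row is the usual way to keep test data genuinely independent when one patient contributes multiple records.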
Trust and Safety Strategies in HealthTech
Explore top LinkedIn content from expert professionals.
Summary
Building trust and safety into healthtech, especially when leveraging AI, involves creating strategies that prioritize patient well-being, transparency, and collaborative design tailored to healthcare needs. These strategies aim to ensure that digital technologies in healthcare operate securely, effectively, and ethically, while supporting clinical workflows and enhancing decision-making processes.
- Incorporate multidisciplinary expertise: Engage diverse expertise throughout product development and implementation to ensure AI solutions align with clinical workflows and prioritize patient safety.
- Focus on transparency and accountability: Maintain clear records of data sources, system limitations, and AI processes to build trust among healthcare professionals and patients.
- Monitor and evaluate continually: Implement systems for ongoing evaluation and risk management to ensure AI and health technologies perform reliably and address evolving clinical needs.
-
Superhuman AI agents will undoubtedly transform healthcare, creating entirely new workflows and models of care delivery. In our latest paper from Google DeepMind, Google Research, and Google for Health, "Towards physician-centered oversight of conversational diagnostic AI," we explore how to build this future responsibly. Our approach was motivated by two key ideas in AI safety:

1. AI architecture constraints for safety: Inspired by concepts like 'Constitutional AI,' we believe systems must be built with non-negotiable rules and contracts (disclaimers aren't enough). We implemented this using a multi-agent design where a dedicated 'guardrail agent' enforces strict constraints on our AMIE AI diagnostic dialogue agent, ensuring it cannot provide unvetted medical advice and enabling appropriate human physician oversight. (A simplified sketch of this pattern follows this post.)

2. AI system design for trust and collaboration: For optimal human-AI collaboration, it's not enough for an AI's final output to be correct or superhuman; its entire process must be transparent, traceable, and trustworthy. We implemented this by designing the AI system to generate structured SOAP notes and predictive insights, such as diagnoses and onward care plans, within a 'Clinician Cockpit' interface optimized for human-AI interaction.

In a comprehensive, randomized OSCE study with validated patient actors, these principles and design choices show great promise:

1. 📈 Doctors' time saved for what truly matters: Our study points to a future of greater efficiency, giving valuable time back to doctors. The AI system first handled comprehensive history taking with the patient. Then, after the conversation, it synthesized that information to generate a highly accurate draft SOAP note with diagnosis for the doctor's review: 81.7% top-1 diagnostic accuracy 🎯 and >15% absolute improvement over human clinicians. This high-quality draft meant the doctor oversight step took around 40% less time ⏱️ than a full consultation performed by a PCP in a comparable prior study.

2. 🧑‍⚕️🤝 A framework built on trust: The focus on alignment resulted in a system preferred by everyone. The architecture guardrails proved highly reliable, with the composite system deferring medical advice >90% of the time. Overseeing physicians reported a better experience with the AI ✅ compared to the human control groups, and (actor) patients strongly preferred interacting with AMIE ⭐, citing its empathy and thoroughness.

While this study is an early step, we hope its findings help advance the conversation on building AI that is not only superhuman in capabilities but also deeply aligned with the values of the practice of medicine.

Paper - https://lnkd.in/gTZNwGRx

Huge congrats to David Stutz Elahe Vedadi David Barrett Natalie Harris Ellery Wulczyn Alan Karthikesalingam MD PhD Adam Rodman Roma Ruparel, MPH Shashir Reddy Mike Schäkermann Ryutaro Tanno Nenad Tomašev S. Sara Mahdavi Kavita Kulkarni Dylan Slack for driving this with all our amazing co-authors.
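The paper's guardrail is itself an LLM-based agent; purely as an illustration of the pattern (not the authors' implementation), the toy sketch below shows a composite responder in which a guardrail check can veto a drafted reply and defer to a physician. All function names and the keyword rule set are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Hypothetical stand-in for the diagnostic dialogue agent; in the paper
# this is an LLM (AMIE), not a canned string.
def dialogue_agent(patient_message: str) -> str:
    return ("Thanks for sharing. To be safe, you should take ibuprofen "
            "and increase your dose if the pain persists.")

# Toy constraint list; a real guardrail agent would be a trained model
# enforcing a much richer contract than keyword matching.
ADVICE_MARKERS = (
    "you should take", "start taking", "stop taking",
    "increase your dose", "your diagnosis is",
)

def guardrail_agent(draft: str) -> Verdict:
    """Hard constraint: no individualized medical advice reaches the
    patient without physician sign-off."""
    lowered = draft.lower()
    for marker in ADVICE_MARKERS:
        if marker in lowered:
            return Verdict(False, f"advice marker found: {marker!r}")
    return Verdict(True, "no unvetted advice detected")

def composite_respond(patient_message: str) -> str:
    draft = dialogue_agent(patient_message)
    verdict = guardrail_agent(draft)
    if not verdict.allowed:
        # Defer: the draft is routed to the overseeing physician instead
        # of being shown to the patient.
        return ("I've noted this for your care team; a physician will "
                "review and follow up with any medical advice.")
    return draft

print(composite_respond("My knee has been hurting for a week."))
```

The design point is architectural: the constraint sits outside the dialogue agent, so no single model failure can put unvetted advice in front of a patient.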
-
World Health Organization's latest report on 𝐫𝐞𝐠𝐮𝐥𝐚𝐭𝐢𝐧𝐠 𝐀𝐈 𝐢𝐧 𝐡𝐞𝐚𝐥𝐭𝐡𝐜𝐚𝐫𝐞. Here's my summary of key takeaways for creating a mature AI ecosystem.

𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐚𝐧𝐝 𝐓𝐫𝐚𝐧𝐬𝐩𝐚𝐫𝐞𝐧𝐜𝐲: In the development of health AI systems, developers should maintain detailed records of dataset sources, algorithm parameters, and any deviations from the initial plan to ensure transparency and accountability. (A minimal documentation sketch follows this post.)

𝐑𝐢𝐬𝐤 𝐌𝐚𝐧𝐚𝐠𝐞𝐦𝐞𝐧𝐭: The development of health AI systems should entail continuous monitoring of risks such as cybersecurity threats, algorithmic biases, and model underfitting to guarantee patient safety and effectiveness in real-world settings.

𝐀𝐧𝐚𝐥𝐲𝐭𝐢𝐜𝐚𝐥 𝐚𝐧𝐝 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐕𝐚𝐥𝐢𝐝𝐚𝐭𝐢𝐨𝐧: When validating health AI systems, provide clear information about training data, conduct independent testing with randomized trials for thorough evaluation, and continuously monitor post-deployment for any unforeseen issues.

𝐃𝐚𝐭𝐚 𝐐𝐮𝐚𝐥𝐢𝐭𝐲 𝐚𝐧𝐝 𝐒𝐡𝐚𝐫𝐢𝐧𝐠: Developers of health AI systems should prioritize high-quality data and conduct thorough pre-release assessments to prevent biases or errors, while stakeholders should work to facilitate reliable data sharing in healthcare.

𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐚𝐧𝐝 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧: In the development of health AI systems, developers should be well-versed in HIPAA regulations and implement robust compliance measures to safeguard patient data, ensuring it aligns with legal requirements and protects against potential harms or breaches.

𝐄𝐧𝐠𝐚𝐠𝐞𝐦𝐞𝐧𝐭 𝐚𝐧𝐝 𝐂𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐨𝐧: Establish communication platforms for doctors, researchers, and policymakers to streamline the regulatory oversight process, leading to quicker development, adoption, and refinement of safe and responsible health AI systems.

👉 Finally, note that leaders should implement the recommendations holistically.
👉 A holistic approach is essential for building a robust and sustainable AI ecosystem in healthcare.

(Source in the comments.)
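As one concrete way to act on the documentation and transparency recommendation, here is a minimal sketch of a machine-readable development record that can be versioned alongside the model artifact. The schema, field names, and values are illustrative assumptions, not a WHO-mandated format:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DevelopmentRecord:
    """Illustrative development record for a health AI model."""
    model_name: str
    version: str
    dataset_sources: list[str]
    algorithm_parameters: dict
    deviations_from_plan: list[str] = field(default_factory=list)

record = DevelopmentRecord(
    model_name="sepsis-risk-classifier",   # hypothetical model
    version="1.3.0",
    dataset_sources=[
        "site-A EHR extract, 2019-2022",
        "site-B ICU registry, de-identified",
    ],
    algorithm_parameters={"algorithm": "gradient boosting",
                          "n_estimators": 500, "max_depth": 4},
    deviations_from_plan=[
        "excluded site-C data after a quality audit (2024-02)",
    ],
)

# Serialize so the record travels with the model and survives audits.
print(json.dumps(asdict(record), indent=2))
```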
-
🤖 Exciting advances in AI are transforming clinical decision support, but how do companies in the space make sure their solutions are trustworthy, effective, and truly supportive of clinical needs? Rhett Alden shared a compelling presentation last week at #himss24 outlining 3 key principles that align with my advice to clients on responsible AI deployment: Trust, Content Quality & Provenance, and Validation. 🚀🔍

1️⃣ 𝐓𝐫𝐮𝐬𝐭: 𝐓𝐡𝐞 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 🛡️
Trust is not just a principle; it's the bedrock of effective AI in healthcare. To build this trust, companies should:
👉🏻 Leverage domain-specific LLMs to understand and interpret medical nuances accurately.
👉🏻 Build a foundation rooted in Responsible AI and Quality Management System (QMS) principles to ensure solutions are ethically developed and deployed.
👉🏻 Implement Clinical Safety Frameworks to safeguard against unintended consequences, protecting both patients and practitioners.

2️⃣ 𝐂𝐨𝐧𝐭𝐞𝐧𝐭 𝐐𝐮𝐚𝐥𝐢𝐭𝐲 & 𝐏𝐫𝐨𝐯𝐞𝐧𝐚𝐧𝐜𝐞: 𝐓𝐡𝐞 𝐁𝐚𝐜𝐤𝐛𝐨𝐧𝐞 📚
Data provenance refers to the lineage of a piece of data: its origin and the path it has taken to where it is now. This is crucial for transparency and trustworthiness in clinical decision-making, so:
👉🏻 The information feeding into AI systems must be copyright secure, ensuring all data is ethically sourced and legally compliant.
👉🏻 Utilize Retrieval Augmented Generation for up-to-date, accurate, and contextually relevant information (a minimal sketch follows this post).

3️⃣ 𝐕𝐚𝐥𝐢𝐝𝐚𝐭𝐢𝐨𝐧: 𝐓𝐡𝐞 𝐎𝐧𝐠𝐨𝐢𝐧𝐠 𝐂𝐨𝐦𝐦𝐢𝐭𝐦𝐞𝐧𝐭 🔬
For AI solutions to remain relevant and reliable, continuous validation is key:
👉🏻 Automated and clinician SME (Subject Matter Expert) evaluation ensures the AI's recommendations are clinically sound and practical.
👉🏻 Real-world monitoring and CAPA (Corrective and Preventive Action) mechanisms ensure solutions adapt to new data, evolving standards, and emerging clinical practices.

I'd love to hear your thoughts on these principles or any others you believe are crucial for the successful integration of AI into healthcare. 🌐💡

#AIinHealthcare #ClinicalDecisionSupport #DigitalHealth #HealthcareInnovation #TrustInAI #QualityInHealthcare Sam Pinson Reema Taneja
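To illustrate the Retrieval Augmented Generation point, here is a minimal sketch in which every retrieved passage carries a source identifier, so an answer can be traced back to vetted content. The embed function is a hash-seeded random-projection stand-in for a real embedding model, and the corpus entries are hypothetical:

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedder: deterministic unit vector derived from a hash.
    A real system would use a domain-tuned encoder."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)

# Tiny corpus of vetted, provenance-tracked passages (hypothetical).
corpus = [
    {"id": "guideline-001",
     "source": "hypothetical cardiology guideline, v4 (2024)",
     "text": "First-line management options for stage 1 hypertension."},
    {"id": "review-017",
     "source": "hypothetical systematic review (2024)",
     "text": "Evidence summary on ACE inhibitors in chronic kidney disease."},
]
doc_vectors = np.stack([embed(d["text"]) for d in corpus])

def retrieve(query: str, k: int = 1) -> list[dict]:
    scores = doc_vectors @ embed(query)  # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(
        f"[{d['id']} | {d['source']}] {d['text']}" for d in retrieve(query)
    )
    # Requiring citations keeps every generated claim traceable.
    return (f"Answer using ONLY the sources below, citing their ids.\n"
            f"{context}\n\nQuestion: {query}")

print(build_prompt("How should stage 1 hypertension be managed?"))
```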
-
Jennifer Cooper et al. (2025) explored primary care providers' perspectives on AI-assisted decision-making for patients with multiple long-term conditions (MLTC) through in-depth interviews. The interviews highlighted essential insights for trust, safety, and human-centered digital transformation.

Key Takeaways
- Providers grappled with balancing medical needs, psychosocial factors, polypharmacy, and guideline gaps. This points to the nuanced challenge of MLTC care.
- HCPs saw potential for AI to enhance safety and decision quality, but voiced concerns that over-reliance could erode therapeutic relationships.
- Their top "must-haves" included transparent, explainable AI recommendations; seamless EHR integration; time efficiency; and preservation of clinician and patient autonomy.

Dipu's Take
This study reflects critical lessons we emphasize in clinical education and quality improvement:
- MLTC care isn't linear. AI must support multifaceted decision layers, not simplify them.
- Explainability, system integration, and time savings are essential to clinician buy-in.
- Empathy remains non-negotiable. Technology should augment, not replace, the clinician's human connection.
- Clinicians need training to appraise AI outputs, challenge algorithmic suggestions wisely, and retain decision autonomy.

Let's discuss: How is your organization or program incorporating clinician-centered design into AI tools for complex care? What training and safeguards are you putting in place?

https://lnkd.in/eTE9pwJ8
-
The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) sets out principles for Artificial Intelligence ahead of planned UK regulation:

🤖 The MHRA has published a white paper outlining the need for specific regulation of AI in healthcare, emphasizing the importance of making AI-enabled health technology not only safe but also universally accessible.

🤖 The agency is advocating for robust cybersecurity measures in AI medical devices and plans to release further guidance on this issue by 2025.

🤖 It stresses the importance of international alignment in AI regulation to avoid the UK being at a competitive disadvantage and calls for upgraded classifications for certain AI devices that currently do not require authorization before market entry.

🤖 The MHRA has adopted five key principles of AI usage: safety, security, transparency, fairness, and accountability. These principles aim to ensure AI systems are robust, transparent, fair, and governed by clear accountability mechanisms.

🤖 The MHRA particularly emphasizes transparency and explainability in AI systems, requiring companies to clearly define the intended use of their AI devices and ensure that they operate within these parameters.

🤖 Fairness is also highlighted as a key principle, with a call for AI healthcare technologies to be accessible to all users, regardless of their economic or social status.

🤖 The MHRA recently introduced the "AI Airlock", a regulatory sandbox that allows for the testing and refinement of AI in healthcare, ensuring its integration is both safe and effective.

👇 Link to article and white paper in comments

#digitalhealth #AI
-
We are seeing more frameworks for the safe deployment of genAI. The Institute for Healthcare Improvement's Lucian Leape Institute just released specific recommendations for stakeholders across the healthcare ecosystem.

The report summarizes three use cases that highlight areas where genAI could significantly impact patient safety:

Documentation support – developing patient history summaries, supporting patient record reconciliation (including medication reconciliation), ambient recording of patient-clinician conversations, and drafting documentation.

Clinical decision support – providing diagnostic support and recommendations, offering early detection or warning on changes to patient condition, and developing potential treatment plans.

Patient-facing chatbots – acting as a data collector to support triage, interacting with patients and responding to their questions and concerns, and supporting care navigation.

The report provides a detailed review of mitigation and monitoring strategies, expert panel recommendations, and an appraisal of the implications of genAI for the patient safety field. The expert panel (consisting of leaders from Amazon, Google, Microsoft, Harvard Medical School, The Leapfrog Group, and Kaiser Permanente) recommended:

- Serve and safeguard the patient. Disclose and explain the use of patient-facing AI-based tools to patients.
- Learn with, engage, and listen to clinicians. Equip clinicians with general knowledge on genAI and related ethical issues, as well as specific instruction on how to use available AI-based tools.
- Evaluate and ensure AI efficacy and freedom from bias. Establish an evidence base of rigorously tested and validated AI-based tools, including the results of their use in real-life clinical situations.
- Establish strict AI governance, oversight, and guidance, both for individual health delivery systems and the federal government.
- Be intentional with the design, implementation, and ongoing evaluation of AI tools. Follow human-centered design principles, actively engage end users in all phases of design, and validate models and tools with small-scale tests of real-world clinical uses.
- Engage in collaborative learning across health care systems.

I think this is a great summary. Did they miss anything?

#genAI #healthcareAI #patientsafety
-
A brilliant medical technology sits unused eighteen months after FDA clearance because hospitals don't trust its outcomes data enough to build value-based contracts around it.

This scenario plays out repeatedly across healthcare, where compliance is often treated as a regulatory checkbox rather than the foundation of trust that enables value-based partnerships. The consequences are devastating – innovative solutions that could transform patient care remain stuck in pilot after pilot while companies wonder why their clinical evidence isn't translating to commercial success.

The uncomfortable truth is that in value-based care, governance isn't just about avoiding regulatory trouble. It's about building the confidence that allows partners to stake their financial future on your technology's performance. When a health system's shared savings bonus or a payer's medical loss ratio depends on your solution working as promised, they need more than marketing claims – they need systematic evidence and regulatory approvals validating that your processes are trustworthy.

Cutting-edge MedTech companies have recognized this shift. They're implementing AI governance frameworks that detect performance drift before it impacts outcomes (a minimal drift-check sketch follows this post). They're creating data provenance systems that make patient-generated information trustworthy for clinical decisions. They're building supply chain oversight that ensures security and reliability throughout their technology's lifecycle.

Today's newsletter unpacks Pillar 5 of the Value-Based MedTech framework: a comprehensive approach to governance and compliance that transforms these functions from cost centers to strategic enablers. Read on!

___________________________________________

Sam Basta, MD, MMM is a pioneer of Value-Based Medical Technology and a LinkedIn Top Voice. Over the past two decades, he has advised many healthcare and medical technology startups on translating clinical and technological innovation into business success. From value-based strategy and product development to go-to-market planning and execution, Sam specializes in creating and communicating compelling value propositions to customers, partners and investors. His weekly NewHealthcare Platforms newsletter is read by thousands of executives and professionals in the US and globally.

#healthcareonlinkedin #artificialintelligence #ai #valuebasedcare #healthcare Vivek Natarajan Tom Lawry Subroto Mukherjee Rana el Kaliouby, Ph.D. Rashmi R. Rao Paulius Mui, MD Avi Rosenzweig Mark Miles Deepak Mittal, MBA, MS, FRM Elena Cavallo, ALM, ACC Chris Grasso
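One widely used drift check compares a model's live score distribution to its validation-time baseline via the Population Stability Index (PSI). Below is a minimal sketch with synthetic score data; the alert thresholds are conventional rules of thumb, not regulatory requirements:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and live scores.
    Rule of thumb: <0.1 stable, 0.1-0.25 monitor, >0.25 likely drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges = np.unique(edges)                 # guard against duplicate edges
    # Clip live scores into the baseline range so every value is counted.
    actual = np.clip(actual, edges[0], edges[-1])
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e = np.clip(e, 1e-6, None)               # avoid log(0) and division by 0
    a = np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

# Synthetic example: validation-time scores vs. shifted production scores.
rng = np.random.default_rng(7)
baseline = rng.beta(2.0, 5.0, size=5000)
live = rng.beta(2.6, 4.2, size=5000)

psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"ALERT: score drift (PSI={psi:.2f}) - trigger model review")
```

Run on a schedule against each deployment site, a check like this surfaces dataset drift before it silently degrades outcomes.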
-
The Imperative of #Transparency in #AI: Insights from Dr. Jesse Ehrenfeld and the Boeing 737 Max Tragedy

Jesse Ehrenfeld MD MPH, President of the #AmericanMedicalAssociation, recently highlighted the critical need for transparency in AI deployments at the RAISE Health Symposium 2024. He referenced the tragic Boeing 737 Max crashes, where a lack of transparency around automated flight-control software led to devastating consequences, underscoring the importance of clear communication and human oversight in AI applications.

Key Lessons:

1. **Transparency is Non-Negotiable**: Dr. Ehrenfeld stressed that users must be fully informed about AI functionalities and limitations, using the Boeing 737 Max as a cautionary tale where undisclosed automation led to fatal outcomes.

2. **Expectation of Awareness**: Dr. Ehrenfeld provided a relatable example from healthcare, stating he would expect to know if a ventilator he was using in surgery was being adjusted by AI. This level of awareness is essential to ensure safety and effectiveness in high-stakes environments.

3. **Human Oversight is Essential**: The incidents highlight the need for human intervention and oversight, ensuring that AI complements but does not replace critical human decision-making.

4. **Building Trust in Technology**: Prioritizing transparency, safety, and ethics in AI is crucial for building trust and preventing avoidable disasters.

As AI continues to permeate various sectors, it is imperative to learn from past mistakes and ensure transparency, thereby fostering a future where technology enhances human capabilities responsibly.

**Join the Conversation**: Let's discuss how we can further integrate transparency in AI deployments across all sectors. Share your thoughts and experiences below.

#AIethics #TransparencyInAI #HealthcareInnovation #DigitalHealth #DrGPT
-
Healthcare—a sector where innovation rapidly translates to real-world impact—is undergoing one of the most profound AI-driven transformations. The breakthroughs we help deliver are reshaping patient care, experiences, and outcomes, and underscore the deep purpose and sense of responsibility we bring to our work.

I recently read through a report from the World Economic Forum and Boston Consulting Group (BCG) – "Earning Trust for AI in Health: A Collaborative Path Forward" – which outlines a cross-industry framework for building trust in AI and underlines a stark reality for us: without transparency and responsibility, we cannot capitalize on the promise of AI to improve healthcare.

There are exciting breakthroughs in the industry happening every day. With the potential to improve and streamline patient care, implementing AI tools requires that the data and information these tools provide are credible and reliable. At Pfizer we put responsible AI into action with our Responsible AI program, including a proprietary internal toolkit that allows colleagues to easily and consistently implement best practices for responsible AI in their work. Responsibility also played a crucial role in our recently launched generative AI tool, #HealthAnswersbyPfizer, which utilizes trusted, independent third-party sources so that consumers can access relevant health and wellness information that is up to date.

As we apply AI in the real world, these conversations around trust and ethics are paramount. It is our responsibility to not only lead the advancements that will improve the industry, but also to lead the movement in responsible, ethical AI that advances and protects us, not hinders or harms us. This will encourage the adoption of tools that can lead to healthier lives, lower costs, and a brighter future.

To read more about the WEF/BCG report: https://bit.ly/406b0AS