Multimodal Biomedical AI Models

Explore top LinkedIn content from expert professionals.

Summary

Multimodal biomedical AI models are advanced AI systems that analyze and combine multiple types of medical data—such as images, text, and biological markers—to improve healthcare tasks like disease detection, diagnosis, and treatment planning.

  • Use diverse datasets: Incorporate large, well-annotated datasets from multiple sources like imaging studies and lab results to build more accurate and versatile AI models.
  • Focus on integration: Design systems that can combine visual, textual, and biological data seamlessly for comprehensive medical analysis.
  • Validate rigorously: Ensure models undergo extensive clinical validation to confirm reliability before deployment in real-world healthcare settings.
Summarized by AI based on LinkedIn member posts

  • View profile for Zain Khalpey, MD, PhD, FACS

    Director of Artificial Heart & Robotic Cardiac Surgery Programs | Network Director Of Artificial Intelligence | Course Director- Advanced Robotic Cardiac Course 2025 (AF In The Desert) | #AIinHealthcare

    71,619 followers

    Today, on World Cancer Day, we recognize the profound impact cancer has on individuals and families worldwide. My father had stage IIIB adenocarcinoma of the lung, with his left upper lobe removed, and my uncle succumbed to small cell lung cancer. Both were non-smokers. These stories underscore the urgency of advancing our detection methods. It's a personal mission for many, driven by the hope that through technology, particularly the fusion of Knowledge AI and Big Data AI, we can unveil these silent killers early enough to make a difference.

    Here's a proposed 10-step protocol for deploying an algorithm capable of early detection of solitary lung nodule cancer, leveraging blood biomarkers, radiology, and other modalities:

    1. Data Collection and Integration: Gather extensive datasets covering various patient demographics and stages of lung cancer.
    2. Big Data Infrastructure: Develop efficient data handling for structured and unstructured data.
    3. Knowledge AI Models: Utilize medical knowledge to enhance AI models.
    4. Machine Learning and Deep Learning: Apply AI techniques for identifying early-stage cancer patterns.
    5. Radiology Image Analysis: Train AI for advanced image recognition of lung scans.
    6. Blood Biomarker Detection: Develop algorithms for non-invasive blood test analysis.
    7. Predictive Modeling: Personalize risk assessments using predictive models.
    8. Clinical Validation: Ensure model accuracy through extensive clinical trials.
    9. Integration into Clinical Workflows: Collaborate with healthcare providers to incorporate AI into existing processes.
    10. Continuous Learning and Improvement: Establish a system for regular AI model updates based on new data and discoveries.

    By following these steps, we can harness AI's power to transform early lung cancer detection, potentially saving countless lives. The fusion of Knowledge AI and Big Data AI offers hope, turning silent stories into beacons of progress. Through early detection, we aspire to beat cancer.
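
    To make the fusion steps (radiology image analysis, blood biomarker detection, predictive modeling) concrete, here is a minimal late-fusion sketch. Everything in it is a synthetic stand-in assumed for illustration; the imaging score, the biomarker panel, and the labels do not come from the post or any real cohort.

    ```python
    # Hedged sketch: late fusion of an imaging-branch risk score with blood
    # biomarkers for nodule risk prediction. All data here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500

    # Stand-ins: a CNN-derived nodule malignancy score (step 5) and three
    # blood biomarkers (step 6), e.g. CEA, CYFRA 21-1, proGRP in practice.
    image_score = rng.uniform(0, 1, size=(n, 1))
    biomarkers = rng.normal(size=(n, 3))
    y = (0.6 * image_score[:, 0] + 0.3 * biomarkers[:, 0]
         + rng.normal(scale=0.3, size=n)) > 0.5          # synthetic labels

    X = np.hstack([image_score, biomarkers])             # late fusion (step 7)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_tr, y_tr)
    print("held-out AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```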

  • View profile for Pranav Rajpurkar

    Co-founder of a2z Radiology AI. Harvard Associate Professor.

    13,133 followers

    Excited to share our latest research on generalist AI for medical image interpretation! 🩺🖥️ In collaboration with an incredible team, we developed MedVersa - the first multimodal AI system that learns from both visual and linguistic supervision to excel at a wide variety of medical imaging tasks. By leveraging a large language model as a learnable orchestrator, MedVersa achieves state-of-the-art performance in 9 tasks, sometimes outperforming top specialist models by over 10%.

    To train and validate MedVersa, we curated MedInterp, one of the largest multimodal datasets for medical image interpretation to date, consisting of over 13 million annotated instances spanning 11 tasks across 3 modalities. This diverse dataset allowed us to create a truly versatile and robust AI assistant.

    MedVersa's unique architecture enables it to handle multimodal inputs and outputs, adapt to real-time task specifications, and dynamically utilize visual modules when needed. This flexibility and efficiency highlight its potential to streamline clinical workflows and support comprehensive medical image analysis.

    We believe this work represents a significant milestone in the development of generalist AI for healthcare. By demonstrating the viability of multimodal generative medical AI, we hope to pave the way for more adaptable and efficient AI-assisted clinical decision-making. We're excited to engage in discussions about how generalist models like MedVersa could shape the future of healthcare! 🏥🔮

    Hong-Yu Zhou, Subathra Adithan, Julián Nicolás Acosta, Eric Topol, MD

    Read our paper: https://lnkd.in/d2cEKh6Q
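
    The "learnable orchestrator" is the interesting architectural piece. The toy sketch below shows the dispatch pattern only; in MedVersa the routing is learned by the language model, whereas here a keyword lookup stands in for it, and the module functions are hypothetical placeholders.

    ```python
    # Illustrative sketch (not MedVersa's code): route a task specification to
    # a task-specific visual module, falling back to free-text generation.
    from typing import Callable, Dict

    def segment_lungs(image) -> str:
        return "lung mask computed"        # placeholder for a segmentation model

    def classify_cxr(image) -> str:
        return "no acute findings"         # placeholder for a classifier head

    VISION_MODULES: Dict[str, Callable] = {
        "segmentation": segment_lungs,
        "classification": classify_cxr,
    }

    def orchestrate(task: str, image) -> str:
        """Dispatch to a visual module, or answer directly in text.
        MedVersa learns this decision; a keyword match stands in here."""
        for name, module in VISION_MODULES.items():
            if name in task:
                return module(image)
        return f"LLM free-text answer for: {task!r}"

    print(orchestrate("run segmentation on this chest CT", image=None))
    print(orchestrate("summarize the prior report", image=None))
    ```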

  • View profile for Ahsen Khaliq

    ML @ Hugging Face

    35,774 followers

    Google presents "Capabilities of Gemini Models in Medicine":

    Excellence in a wide variety of medical applications poses considerable challenges for AI, requiring advanced reasoning, access to up-to-date medical knowledge and understanding of complex multimodal data. Gemini models, with strong general capabilities in multimodal and long-context reasoning, offer exciting possibilities in medicine. Building on these core strengths of Gemini, we introduce Med-Gemini, a family of highly capable multimodal models that are specialized in medicine with the ability to seamlessly use web search, and that can be efficiently tailored to novel modalities using custom encoders.

    We evaluate Med-Gemini on 14 medical benchmarks, establishing new state-of-the-art (SoTA) performance on 10 of them, and surpass the GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin. On the popular MedQA (USMLE) benchmark, our best-performing Med-Gemini model achieves SoTA performance of 91.1% accuracy, using a novel uncertainty-guided search strategy. On 7 multimodal benchmarks including NEJM Image Challenges and MMMU (health & medicine), Med-Gemini improves over GPT-4V by an average relative margin of 44.5%.

    We demonstrate the effectiveness of Med-Gemini's long-context capabilities through SoTA performance on a needle-in-a-haystack retrieval task from long de-identified health records and medical video question answering, surpassing prior bespoke methods using only in-context learning. Finally, Med-Gemini's performance suggests real-world utility by surpassing human experts on tasks such as medical text summarization, alongside demonstrations of promising potential for multimodal medical dialogue, medical research and education. Taken together, our results offer compelling evidence for Med-Gemini's potential, although further rigorous evaluation will be crucial before real-world deployment in this safety-critical domain.
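
    The "uncertainty-guided search strategy" is worth unpacking. Below is a hedged sketch of the general idea only (the paper's exact procedure differs and is not reproduced here): sample several answers, estimate uncertainty from their disagreement, and invoke retrieval only when the model is unsure. `fake_llm` and `fake_search` are hypothetical stand-ins.

    ```python
    # Sketch of uncertainty-guided search: high answer-vote entropy triggers
    # a retrieval step before re-answering. Model and search are simulated.
    import math
    import random
    from collections import Counter

    def vote_entropy(answers):
        """Shannon entropy (nats) of the empirical answer distribution."""
        counts = Counter(answers)
        total = len(answers)
        return -sum((c / total) * math.log(c / total) for c in counts.values())

    def answer_with_search(sample_fn, search_fn, question, k=5, threshold=0.5):
        samples = [sample_fn(question) for _ in range(k)]
        if vote_entropy(samples) <= threshold:       # confident: majority vote
            return Counter(samples).most_common(1)[0][0]
        evidence = search_fn(question)               # unsure: ground with search
        return sample_fn(f"{question}\nEvidence: {evidence}")

    fake_llm = lambda q: "B" if "Evidence" in q else random.choice("ABBBC")
    fake_search = lambda q: "retrieved guideline text"
    print(answer_with_search(fake_llm, fake_search, "Which drug is first-line?"))
    ```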

  • Contemporary AI platforms such as LLMs and traditional statistical ML tools are not mutually exclusive, despite a common (mis)perception. In our recent paper, we show that modern "blackbox" transformer-based generative AI models and classical "interpretable" probabilistic graphical models can be combined to analyze complex phenomena, such as patient-specific tumor sub-types, based on multi-modal biomedical data, offering both intrinsic insights (inferred from a PGM) and extrinsic contexts (captured by the encoders). https://lnkd.in/dBp7niUt https://contextualized.ml/ Caleb Ellington Ben Lengerich Manolis Kellis
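
    For readers unfamiliar with the contextualized-modeling idea behind this work, here is a minimal sketch assuming synthetic data (this is not the paper's code or the contextualized.ml API): a neural encoder maps each patient's context to the parameters of a simple, interpretable per-patient linear model, pairing a "blackbox" encoder with an interpretable model class.

    ```python
    # Sketch: learn context -> per-patient linear-model coefficients.
    import torch
    import torch.nn as nn

    n, d_context, d_features = 256, 4, 3
    ctx = torch.randn(n, d_context)          # e.g., demographics, tumor stage
    x = torch.randn(n, d_features)           # e.g., expression of 3 genes
    true_beta = ctx @ torch.randn(d_context, d_features)  # synthetic truth
    y = (x * true_beta).sum(dim=1, keepdim=True)

    encoder = nn.Sequential(                 # context -> coefficients
        nn.Linear(d_context, 32), nn.ReLU(), nn.Linear(32, d_features)
    )
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
    for step in range(500):
        beta = encoder(ctx)                  # (n, d_features): one model each
        loss = ((x * beta).sum(dim=1, keepdim=True) - y).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Each row of encoder(ctx) is an interpretable linear model whose
    # coefficients vary with patient context.
    print("final MSE:", loss.item())
    ```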

  • View profile for Sadashiva Pai, PhD, MBA

    Founder & CEO at Science Mission LLC

    24,674 followers

    AI-driven drug development against autoimmune diseases

    Multiomic molecular profiling data from large cohorts of patients with autoimmune and autoinflammatory disorders (AIIDs), combined with artificial intelligence (AI) and machine learning, allow disease models to be created that support drug development. AI-based predictive models are used to (i) stratify patients into homogeneous clusters, (ii) represent the pathophysiology as a perturbed biological system with inferences of causality, (iii) design and optimize drug candidates or combination therapies, and (iv) evaluate the efficacy and safety of drug candidates in virtual patient models. AI fosters the evolution of computational precision medicine, aiming to relate individual patient characteristics to predicted properties of drug candidates so as to offer more personalized treatments for AIIDs.

    #sciencenewshighlights #ScienceMission https://lnkd.in/gZVBJftq
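
    As a concrete illustration of step (i), patient stratification, here is a hedged clustering sketch on synthetic "multiomic" features; the feature blocks and cluster count are assumptions for illustration, not taken from the paper.

    ```python
    # Sketch: stratify patients into homogeneous clusters from omic features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Stand-in multiomic matrix: rows = patients, columns = concatenated
    # transcriptomic/proteomic/metabolomic features (synthetic, 3 subgroups).
    omics = np.vstack([rng.normal(loc=m, size=(100, 20)) for m in (-1, 0, 1)])

    X = StandardScaler().fit_transform(omics)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("silhouette:", round(silhouette_score(X, labels), 3))
    ```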

  • View profile for Andrew Sellergren

    Software Engineer at Google

    7,616 followers

    I am very excited to announce the latest additions to the Med-Gemini family of models! In our latest research, we bring generative #AI to multimodal #medicine, including reporting 3D #radiology scans for the first time. We explore potential across 2D #radiology / #pathology / #dermatology / #retina images and outcomes prediction from #genomics using our 3 latest Med-Gemini models. Some highlights:

    🧪 state-of-the-art (SOTA) AI-based chest X-ray (CXR) report generation based on expert evaluation, exceeding previous best results across two separate datasets by an absolute margin of 1% and 12%, where 57% and 96% of AI reports on normal cases, and 43% and 65% on abnormal cases, are evaluated as "equivalent or better" than the original radiologists' reports

    🧪 first ever large multimodal model-based report generation for 3D computed tomography (CT) volumes using Med-Gemini-3D, with 53% of AI reports considered clinically acceptable, although additional research is needed to meet expert radiologist reporting quality

    🧪 surpasses the previous best performance in CXR visual question answering (VQA) and performs well in CXR classification and radiology VQA, exceeding SOTA or baselines on 17 of 20 tasks

    🧪 in histopathology, ophthalmology, and dermatology image classification, Med-Gemini-2D surpasses baselines across 18 out of 20 tasks and approaches task-specific model performance

    🧪 beyond imaging, Med-Gemini-Polygenic outperforms the standard linear polygenic risk score-based approach for disease risk prediction and generalizes to genetically correlated diseases for which it has never been trained

    Check out our paper for more details: https://lnkd.in/gg3xqzPi

    Med-Gemini is the result of a months-long sprint by a large group of incredibly talented and hard-working folks: Lin Yang Shawn Xu Timo Kohlberger Yuchen Zhou Ira Ktena, PhD Atilla K. Faruk Ahmed Farhad Hormozdiari Tiam Jaroensri Eric Wang Ellery Wulczyn Fayaz Jamil Theo Guidroz Charles Lau, MD, MBA Siyuan Qiao Yun Liu Akshay Goel, M.D. Kendall Park Arnav Agharwal Nicholas George Yang Wang Ryutaro Tanno David Barrett Wei-Hung Weng S. Sara Mahdavi Khaled Saab Tao Tu Dr Sreenivasa Raju Kalidindi Mozzi Etemadi Jorge Cuadros OD PhD Greg Sorensen Yossi Matias Katherine Chou Greg Corrado Joëlle Barral Shravya Shetty David Fleet S. M. Ali Eslami Daniel Tse Shruthi Prabhakara Cory McLean David Steiner Rory Pilgrim Christopher Kelly Shekoofeh Azizi Daniel Golden Google Health #GoogleHealth #AI #machinelearning #medicine #radiology #medicalai
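
    For context on the last highlight, the "standard linear polygenic risk score" baseline that Med-Gemini-Polygenic is compared against is just a weighted sum of risk-allele dosages. A minimal sketch with synthetic placeholder data:

    ```python
    # Sketch of a standard linear polygenic risk score (PRS): the dot product
    # of per-variant allele dosages with GWAS effect sizes. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    n_people, n_variants = 1000, 5000

    dosages = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 alleles
    effect_sizes = rng.normal(scale=0.01, size=n_variants)     # log-odds betas

    prs = dosages @ effect_sizes                               # linear PRS
    percentile = 100 * (prs < prs[0]).mean()                   # rank person 0
    print(f"person 0 PRS = {prs[0]:.3f} ({percentile:.0f}th percentile)")
    ```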

  • View profile for Donna Morelli

    Data Analyst, Science | Technology | Health Care

    3,540 followers

    AI model enables earlier detection of diabetes through chest x-rays. Emory University. Published: August 2, 2023.

    Excerpt: A new artificial intelligence model finds x-ray images collected during routine medical care can provide warning signs for diabetes, even in patients who don’t meet guidelines for elevated risk. The model could help physicians detect disease earlier and prevent complications, said a multi-institutional team which published the findings in Nature Communications. Applying a computational method known as deep learning to images and electronic health record data, researchers developed a model that successfully flagged elevated diabetes risk in a retrospective analysis, often years before patients were diagnosed with the disease. That’s significant, given that the prevalence of diabetes in the U.S. has more than doubled over the past 35 years.

    Current guidelines suggest screening patients for type 2 diabetes if they are between 35 and 70 years old and have a body mass index (BMI) in the overweight to obese range. Many studies have found the strategy misses a significant number of cases, particularly in #racial #ethnic #minorities for whom BMI is a less effective predictor of diabetes risk. Patients with undiagnosed diabetes are at much higher risk for complications from the disease, including irreversible organ damage and even death.

    Each year, millions of Americans receive chest x-rays for chest pain, difficulty breathing, injury, or before surgeries. Emory completes on average about 200,000 radiographs annually. While radiologists are not looking for diabetes when they assess x-rays, the images become part of a patient’s medical record and could be analyzed later for diabetes or other conditions. “Chest x-rays provide an ‘opportunistic’ alternative to universal diabetes testing,” says Judy Wawira Gichoya, MD, assistant professor of radiology and imaging sciences, and the lead researcher from Emory. “This is an exciting potential application of AI to pull out data from tests used for other reasons and positively impact patient care.”

    The AI model was trained on more than 270,000 x-ray images from 160,000 patients, with deep learning determining the image features that best predicted a later diagnosis of diabetes. Because chest x-rays are not a common way to detect diabetes, the researchers also used explainable AI techniques to determine how and why the model made its determinations. The methods pointed to the #location of #fatty #tissue as important for determining risk, a logic that aligns with recent medical findings that #visceral #fat in the #upper #body and #abdomen is associated with #type2diabetes, #insulin #resistance, #hypertension and other conditions.

    Publication: Opportunistic detection of type 2 diabetes using deep learning from frontal chest radiographs. Nature Communications, August 2, 2023. https://lnkd.in/enjqpXfg https://lnkd.in/eA27RUHX
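
    The general recipe described here, training an image classifier on routine radiographs and then interrogating it with explainability methods, can be sketched briefly. The model below is a tiny stand-in, not the paper's architecture, and gradient saliency is just one simple explainable-AI technique assumed for illustration.

    ```python
    # Sketch: score diabetes risk from a chest X-ray, then compute a gradient
    # saliency map to see which pixels drive the prediction. All synthetic.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                  # tiny stand-in for a CNN backbone
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(8, 1)
    )

    xray = torch.randn(1, 1, 224, 224, requires_grad=True)  # placeholder image
    risk_logit = model(xray)
    risk_logit.backward()

    saliency = xray.grad.abs().squeeze()    # |d logit / d pixel|
    print("predicted risk:", torch.sigmoid(risk_logit).item())
    print("most salient pixel (row, col):", divmod(int(saliency.argmax()), 224))
    ```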

  • View profile for Bill Russell

    Transforming Healthcare, One Connection at a Time.

    14,598 followers

    The difference between useful AI and expensive noise in healthcare? Context.

    While most organizations wait for vendor roadmaps, small teams at CHOP and Stanford are solving AI's fundamental challenge: giving LLMs the clinical context they need to actually help patients.

    CHOP's CHIPPER - A single informaticist used Model Context Protocol to orchestrate 17 clinical tools, creating an AI assistant that understands patient history, current medications, lab trends, and clinical guidelines simultaneously. Development time? Months, not years.

    Stanford's ChatEHR - Embedded directly in Epic, reducing emergency physician chart review time by 40% during critical handoffs. Built by a small multidisciplinary team focused on workflow integration over feature lists.

    What makes this significant:
    → Open frameworks (MCP, SMART-on-FHIR) enable rapid innovation
    → Small teams with hybrid expertise move faster than large vendor projects
    → Context matters more than AI model capabilities
    → Workflow integration beats standalone AI applications

    The organizations building clinical context infrastructure today will have significant advantages as AI capabilities mature.

    #HealthcareIT #ArtificialIntelligence #ClinicalInformatics #HealthTech

    This non-AI-generated image is a real scene from my life. Visited with family last week and welcomed our first grandchild. Not the dog, a real grandchild, but I'm not at liberty to share pictures just yet.
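
    The pattern behind CHIPPER, exposing clinical data tools behind a uniform schema so an LLM can pull context on demand, can be sketched without any particular SDK. Tool names and payloads below are hypothetical, not CHOP's actual tools, and a real deployment would use the Model Context Protocol rather than this hand-rolled registry.

    ```python
    # Library-free sketch of LLM tool orchestration over clinical data.
    import json
    from typing import Callable, Dict

    TOOLS: Dict[str, Callable[[str], dict]] = {}

    def tool(name: str):
        """Register a function as an LLM-callable tool."""
        def wrap(fn):
            TOOLS[name] = fn
            return fn
        return wrap

    @tool("medication_list")
    def medication_list(patient_id: str) -> dict:
        return {"patient": patient_id, "meds": ["metformin 500 mg BID"]}  # stub

    @tool("lab_trend")
    def lab_trend(patient_id: str) -> dict:
        return {"patient": patient_id, "hba1c": [7.9, 7.4, 6.8]}          # stub

    def handle_tool_call(call_json: str) -> str:
        """Dispatch a model-issued call: {"tool": ..., "patient_id": ...}."""
        call = json.loads(call_json)
        return json.dumps(TOOLS[call["tool"]](call["patient_id"]))

    print(handle_tool_call('{"tool": "lab_trend", "patient_id": "pt-001"}'))
    ```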

  • View profile for Tao Tu

    Staff Research Scientist at Google DeepMind

    2,844 followers

    Excited to push the frontier of multimodal LLMs for medicine! We previewed an ambitious generalist approach with Med-PaLM M last week as the first demonstration of a generalist biomedical AI system that flexibly encodes and integrates multimodal biomedical data. Joint work across many teams at Google Research Google Health Google DeepMind

    Med-PaLM M: https://lnkd.in/eebQmPwE
    Med-PaLM: https://lnkd.in/eXnr8k7V

    This blog from Greg Corrado Yossi Matias contextualizes the work with other relevant approaches such as model grafting and tool use. https://lnkd.in/eaHpCNxn

    In addition to care delivery applications, Med-PaLM M opens up exciting opportunities for AI-accelerated scientific discovery, enabling us to tap into a broader range of disciplines in bioscience, such as neuroscience and omics sciences. We are open to collaborations!
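
    The core mechanic of a generalist system that "flexibly encodes and integrates multimodal biomedical data" can be sketched schematically: modality-specific adapters project images, text, and omics into one shared token space that a single sequence model attends over. This is an assumption-laden toy, not Med-PaLM M's actual architecture or dimensions.

    ```python
    # Sketch: project three modalities into a shared token space.
    import torch
    import torch.nn as nn

    d_model = 64
    image_adapter = nn.Linear(2048, d_model)      # e.g., pooled ViT features
    text_adapter = nn.Embedding(1000, d_model)    # toy 1000-token vocabulary
    omics_adapter = nn.Linear(128, d_model)       # e.g., expression vector

    img_tokens = image_adapter(torch.randn(1, 16, 2048))        # 16 tokens
    txt_tokens = text_adapter(torch.randint(0, 1000, (1, 32)))  # 32 tokens
    omic_tokens = omics_adapter(torch.randn(1, 1, 128))         # 1 token

    sequence = torch.cat([img_tokens, txt_tokens, omic_tokens], dim=1)
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    )
    print(backbone(sequence).shape)  # one backbone over all modalities
    ```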

  • View profile for Srikanth Bhakthan

    Data & AI Leader | Driving AI Business Innovation

    11,161 followers

    Views are personal, not a reflection of my employer.

    Can GPT-4V serve medical applications? A comprehensive observation and systematic evaluation of medical image modalities and anatomy using GPT-4V for multimodal medical diagnosis. It looks into the possibilities of supporting real-world medical applications and clinical decision-making. A question of paramount importance, not only for the AI community, but also for clinicians, patients, and healthcare administrators.

    Limitations of this report: qualitative evaluation only, and sample bias.

    Paper: https://lnkd.in/gw3yQUxr
    Eval set: https://lnkd.in/gEpC2Ysv
    Dataset: https://radiopaedia.org/ & other sources
    GPT-4V system card: https://lnkd.in/gJWixuav
    GPT-4V contributors: https://lnkd.in/gZXjfBZS

    Red team effort (from the GPT-4V system card): "Red teamers found that there were inconsistencies in interpretation in medical imaging—while the model would occasionally give accurate responses, it could sometimes give wrong responses for the same question."

    17 human body systems covered: Central Nervous System, Head and Neck, Cardiac, Chest, Hematology, Hepatobiliary, Gastrointestinal, Urogenital, Gynecology, Obstetrics, Breast, Musculoskeletal, Spine, Vascular, Oncology, Trauma, and Pediatrics.

    8 image modalities used in daily clinical routine: X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Digital Subtraction Angiography (DSA), Mammography, Ultrasound, and Pathology.
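
    The inconsistency the red teamers flagged suggests a simple evaluation harness: query the model repeatedly with the identical image and question and measure answer agreement. The sketch below simulates the model with a random stub (`query_model` is hypothetical, not a real API client).

    ```python
    # Sketch: measure answer consistency of a vision-language model.
    import random
    from collections import Counter

    def query_model(image_id: str, question: str) -> str:
        # Placeholder simulating an inconsistent model; swap in a real call.
        return random.choice(["pneumothorax", "pneumothorax", "normal"])

    def consistency(image_id: str, question: str, k: int = 10) -> float:
        answers = [query_model(image_id, question) for _ in range(k)]
        top_answer, top_count = Counter(answers).most_common(1)[0]
        return top_count / k      # fraction agreeing with the modal answer

    print("agreement rate:", consistency("case-123", "Any acute findings?"))
    ```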

