The Value of Multi-Modal Data in Genomics

Summary

Multi-modal data in genomics refers to integrating various types of biological information, such as DNA, RNA, protein, and imaging data, to provide a comprehensive understanding of complex biological systems. This approach is revolutionizing genomics research and precision medicine by uncovering previously hidden insights into disease mechanisms and treatment strategies.

  • Combine complementary data: Use genomic, proteomic, and imaging data together to uncover patterns and relationships that single data types cannot reveal alone.
  • Adopt advanced tools: Leverage AI models and computational approaches to process multi-modal data for more accurate disease predictions and personalized treatment plans.
  • Focus on integration: Employ methods like spatial alignment or graph-based frameworks to synthesize diverse datasets, enabling more precise biological insights and clinical applications.
  • Joseph Steward

    Medical, Technical & Marketing Writer | Biotech, Genomics, Oncology & Regulatory | Python Data Science, Medical AI & LLM Applications | Content Development & Management

    Spatial multi-omics technologies are transforming our understanding of tissue and cell biology by allowing simultaneous analysis of multiple data modalities, including the transcriptome, epigenome, proteome, and metabolome, on the same tissue section. A recent review by @Paul Kiessling & Christoph Kuppe analyzed how multi-omics methods are being used to improve our understanding of the molecular mechanisms underlying cardiovascular diseases, and how these methods could support personalized approaches to cardiovascular therapies. Spatial multi-omics: novel tools to study the complexity of cardiovascular diseases. https://lnkd.in/e-DFdbtx

    Background on Cardiovascular Cell Biology: The researchers highlighted the complex cellular composition of the heart, noting that while cardiomyocytes make up most of the heart by volume, they are outnumbered by a diverse mix of other cell types. They discussed how this organization is disturbed in various cardiovascular diseases, often in similar patterns across different conditions, and emphasized the need for methods that provide unbiased insights into spatial molecular changes in localized disease processes.

    Spatial multi-omics technologies at cellular resolution: The authors described the development of high-throughput, transcriptome-wide assays using arrayed oligonucleotide-barcoded spots, which formed the basis for commercial products like Visium by 10x Genomics. They then detailed their own spatial multi-omic study of human myocardial infarction, which combined single-cell gene expression sequencing, chromatin accessibility sequencing, and spatial transcriptomics to build a molecular map of cardiac remodeling. They also discussed other studies that have used spatial transcriptomics to investigate cardiac regeneration, remodeling after myocardial infarction, and inflammatory processes in the heart.

    Non-NGS-based spatial multi-omics: The researchers described recent progress in generating multiplex proteomics datasets from tissues, including antibody-based multiplexed imaging technologies like IMC (CyTOF-based imaging mass cytometry) and MIBI-TOF, as well as fluorescence-based technologies such as 4i, IBEX, and Immuno-SABER, which have made spatial proteomics more accessible. They also highlighted the development of highly sensitive LC-MS-based proteomics approaches, such as deep visual proteomics (DVP).

    Computational approaches for spatial multi-omics: The researchers described various deep-learning-based segmentation algorithms and their limitations, particularly for tissues composed of cells with vastly different morphologies. They explained how well-integrated datasets can improve cell typing, and discussed tools tailored to identify patterns in feature expression across regions of interest and to analyze the position of RNA inside cells.
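
To ground the workflow the review describes, here is a minimal sketch of a standard spatial transcriptomics preprocessing and clustering pass with scanpy; the file path and all parameter values are illustrative assumptions, not taken from the review.

```python
# A minimal sketch of a standard spatial transcriptomics preprocessing and
# clustering pass with scanpy, assuming a 10x Genomics Visium dataset.
# The path and all parameter values below are illustrative assumptions.
import scanpy as sc

adata = sc.read_visium("path/to/visium_dataset")  # hypothetical local path
adata.var_names_make_unique()

# Standard normalization before any downstream integration step
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Expression-only graph and clustering; a spatial multi-omic workflow would
# build a joint neighborhood graph across modalities at this point
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)
sc.tl.leiden(adata, key_added="expression_cluster")
```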

  • David A. Moser

    𝐷𝑖𝑔𝑖𝑡𝑎𝑙 𝐻𝑒𝑎𝑙𝑡ℎ | 𝐷𝑖𝑎𝑔𝑛𝑜𝑠𝑡𝑖𝑐𝑠 | 𝘎𝘦𝘯𝘦𝘵𝘪𝘤𝘴 | 𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 𝑀𝑒𝑑𝑖𝑐𝑖𝑛𝑒

    #PrecisionMedicine promises the right treatment at the right time. But what if something’s missing?

    I spent years working in Genomics, where the focus is on understanding what mutations exist in a patient’s DNA. Working in Digital Pathology now, I see that understanding how those mutations manifest in living tissue is just as crucial. What I didn’t expect was how these two worlds are coming together.

    🧬 Genomics tells us what mutations exist.
    🔬 Pathology (imaging) reveals how those mutations shape disease.

    And what’s even more exciting? These data streams no longer have to be separate.

    💡 Combining Genomics, Digital Pathology, and AI changes everything.
    - AI-powered tissue analysis can detect genetic mutations directly, making genomic testing faster and more accessible.
    - Combining Digital Pathology with genomic data improves biomarker discovery, helping match patients to the right therapies.
    - Multimodal AI models predict treatment response with greater accuracy, leading to more personalized, effective care.

    After spending years in the industry, it’s easy to see it now:
    🔹 Imaging alone is essential.
    🔹 Genomics alone is powerful.
    🔹 Together, they create something groundbreaking.

    Multimodal data ties it all together: DNA, tissue, and clinical insights, unlocking knowledge that no single data stream can provide. This is how we move towards truly personalized medicine. The puzzle is coming together!
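
The "combining genomics and pathology" idea usually lands as some form of fusion model. Below is a minimal late-fusion sketch in PyTorch, with one encoder per modality and a shared prediction head; all dimensions and layer sizes are illustrative assumptions, not any published architecture.

```python
# A minimal late-fusion sketch in PyTorch: one encoder per modality, fused at
# the embedding level. Feature dimensions and layer sizes are illustrative
# assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, img_dim=512, gene_dim=2000, hidden=128, n_classes=2):
        super().__init__()
        # Separate encoders let each modality learn its own representation
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.gene_encoder = nn.Sequential(nn.Linear(gene_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)  # sees both embeddings

    def forward(self, img_feats, gene_expr):
        z = torch.cat(
            [self.img_encoder(img_feats), self.gene_encoder(gene_expr)], dim=-1
        )
        return self.head(z)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 2000))  # batch of 4 patients
```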

  • Olivier Elemento

    Director, Englander Institute for Precision Medicine & Associate Director, Institute for Computational Biomedicine

    I am excited to highlight the great work by my colleagues Bishoy Morris Faltas and Fei Wang and their teams in their newly published paper in npj Digital Medicine! While everyone talks about multimodal biomarkers in healthcare AI, this team actually delivered: they created a Graph-based Multimodal Late Fusion (GMLF) deep learning framework that combines histopathology images with gene expression data to predict response to neoadjuvant chemotherapy in muscle-invasive bladder cancer, using data from an actual clinical trial.

    What makes this work stand out:
    ✅ True Multimodality: Integrates standard H&E images with gene expression profiles in a way that outperforms any single data modality
    ✅ Interpretable by Design: Unlike most "black box" AI, this model reveals the biological drivers behind its predictions, such as alterations of certain genes (TP63, CCL5, and DCN)
    ✅ Technically Sophisticated: Uses graph neural networks to capture spatial relationships in tumor architecture

    This kind of approach could transform how we combine routine clinical data with molecular profiling for treatment selection - in this case, helping identify patients most likely to benefit from chemotherapy while sparing others from unnecessary treatment.

    Congrats to the entire team on this outstanding work! This is the kind of translational AI that bridges computational innovation with real potential clinical impact. Link: https://lnkd.in/gZDhS2DC
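
The GMLF code itself is not reproduced here. As rough intuition for the graph component, the sketch below is a toy one-layer graph convolution in plain PyTorch that averages each tissue patch's features over its spatial neighbors before pooling; this shows the general mechanism by which graph neural networks capture spatial relationships in tumor architecture, not the authors' implementation.

```python
# Not the authors' GMLF implementation: a toy one-layer graph convolution in
# plain PyTorch, showing how tissue-patch features can be averaged over
# spatial neighbors before pooling into a slide-level embedding.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalize the adjacency so each patch averages its neighbors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear((adj @ x) / deg))

# 6 tissue patches with 64-dim histology features and a random neighbor graph
x = torch.randn(6, 64)
adj = (torch.rand(6, 6) > 0.5).float()
adj = (torch.max(adj, adj.t()) + torch.eye(6)).clamp(max=1)  # symmetric + self-loops
patch_emb = SimpleGraphLayer(64, 32)(x, adj)
slide_emb = patch_emb.mean(dim=0)  # pooled embedding, ready for late fusion
```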

  • Kevin Matthew Byrd

    Founder & CEO | Scientist | Inventor | Lecturer | Advisor and Consultant for Biotech and Biopharma

    To unlock the full potential of spatial omics in understanding disease, we need to integrate multiple molecular layers such as RNA, protein, and metabolites across tissue sections that rarely align perfectly. Our team, led by @Aditya Pratapa, Rohit Singh, and Purushothama Rao Tata, developed SAME, or Spatial Alignment of Multimodal Expression, a method that flexibly aligns spatial data even when sections are distorted by tears, folds, or biological variation. This enables more accurate and interpretable maps of how cells behave and interact in complex environments like tumors and mucosa. Read the preprint: https://lnkd.in/eUQGAXFM Code available at: https://lnkd.in/e9H9njCd

    Most existing alignment tools assume tissue sections are structurally preserved, which often is not the case. SAME introduces a new concept called space-tearing transforms, a controlled mathematical approach that accommodates local disruptions while preserving overall tissue layout. This makes it possible to integrate diverse spatial modalities with high fidelity, revealing biologically meaningful patterns that are often missed by single-modality or rigid alignment methods.

    Key innovations:
    -- Controlled topological flexibility for real tissue architectures
    -- Phenotype-based cross-modal matching, independent of raw feature correlation
    -- Support for diverse modalities including spatial RNA (Vizgen MERSCOPE, 10x Genomics Xenium), protein (Akoya Biosciences, Inc. PhenoCycler), and metabolite (MALDI-MSI/Bruker Spatial Biology) data

    What this enables:
    -- Identification of cryptic immune niches in healthy and diseased tissues (e.g., functionally distinct T cell programs in oral mucosa and lung adenocarcinoma)
    -- Discovery of spatially localized metabolic crosstalk, such as mevalonic acid enrichment within tumor-macrophage microenvironments
    -- Integration-ready experimental designs, allowing each tissue section to be optimized for its specific assay

    Grateful to contribute work with an outstanding team from Duke University, Virginia Commonwealth University/VCU School of Dentistry, the @Max Planck Institute, @German Center for Lung Research, Thoraxklinik am Universitätsklinikum Heidelberg, and Justus Liebig University Giessen.

    #spatialbiology #multiomics #computationalbiology #bioinformatics #cancerresearch #tissuearchitecture #StraticaBiosciences
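
SAME's space-tearing transforms go beyond what a short sketch can show. For contrast, here is the classic rigid (Kabsch/Procrustes) alignment of two matched spot coordinate sets in NumPy, i.e. the kind of rigid baseline that SAME relaxes for torn or folded sections; the coordinates below are synthetic.

```python
# Rigid (Kabsch) alignment of two matched spot coordinate sets with NumPy:
# the rigid baseline that SAME's space-tearing transforms relax. Synthetic data.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t minimizing ||R @ src.T + t - dst.T||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
section_a = rng.normal(size=(100, 2))   # spot coordinates on section A
theta = np.deg2rad(30)                  # section B: rotated and shifted copy
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
section_b = section_a @ R_true.T + np.array([5.0, -2.0])

R, t = rigid_align(section_a, section_b)
aligned = section_a @ R.T + t           # maps section A into B's frame
print(np.allclose(aligned, section_b))  # True for this noise-free example
```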

  • Heather Couture, PhD

    Making vision AI work in the real world • Consultant, Applied Scientist, Writer & Host of Impact AI Podcast

    🔬 𝐄𝐧𝐡𝐚𝐧𝐜𝐢𝐧𝐠 𝐇&𝐄-𝐛𝐚𝐬𝐞𝐝 𝐀𝐈 𝐌𝐨𝐝𝐞𝐥𝐬 𝐰𝐢𝐭𝐡 𝐌𝐮𝐥𝐭𝐢𝐦𝐨𝐝𝐚𝐥 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠

    A question from my recent webinar on foundation models for pathology: can AI models trained on H&E images plus additional data modalities outperform H&E-only models? Recent research suggests the answer is often yes! Here's why:

    1. 𝐂𝐨𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐫𝐲 𝐈𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧: Additional modalities like gene expression data can provide rich molecular context to supplement visual features from H&E images.
    2. 𝐈𝐦𝐩𝐫𝐨𝐯𝐞𝐝 𝐑𝐞𝐩𝐫𝐞𝐬𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠: Multimodal training can help models develop more robust and informative slide representations.
    3. 𝐂𝐚𝐬𝐞 𝐒𝐭𝐮𝐝𝐢𝐞𝐬: 𝐓𝐀𝐍𝐆𝐋𝐄 𝐚𝐧𝐝 𝐌𝐀𝐃𝐄𝐋𝐄𝐈𝐍𝐄. The TANGLE model, trained on both H&E slides and gene expression data, performed better on slide-level tasks than H&E-only models. Similar benefits were found with the MADELEINE model, trained on adjacent H&E and IHC slides.
    4. 𝐓𝐚𝐬𝐤-𝐃𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬: The effectiveness of multimodal training can vary based on the specific task and dataset.
    5. 𝐄𝐦𝐞𝐫𝐠𝐢𝐧𝐠 𝐓𝐫𝐞𝐧𝐝: Several recent studies explore combining H&E with spatial transcriptomics or other modalities for enhanced performance.

    💡 𝐊𝐞𝐲 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲: While results can vary, incorporating additional modalities during training often leads to AI models that perform better on H&E-only tasks at inference time.

    𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐜𝐨𝐦𝐦𝐮𝐧𝐢𝐭𝐲: Have you experimented with multimodal training in pathology AI? What challenges and benefits have you encountered? Share your experiences below! 👇

    #DigitalPathology #AIinHealthcare #MultimodalLearning #MachineLearning #ComputationalPathology
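
Models like TANGLE align slide and expression embeddings with a contrastive objective. Here is a hedged sketch of a symmetric InfoNCE-style loss in PyTorch; the actual training recipes of TANGLE and MADELEINE differ in detail, and all sizes below are illustrative assumptions.

```python
# Hedged sketch of a symmetric InfoNCE-style contrastive loss, the general
# mechanism models like TANGLE use to align slide and expression embeddings.
# Batch size, dimensions, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(slide_emb, expr_emb, temperature=0.07):
    # Cosine similarity between every slide/expression pair in the batch
    slide_emb = F.normalize(slide_emb, dim=-1)
    expr_emb = F.normalize(expr_emb, dim=-1)
    logits = slide_emb @ expr_emb.t() / temperature
    # Matched pairs sit on the diagonal: treat alignment as classification
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```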

  • Ryan Fukushima

    COO at Tempus AI | Cofounder of Pathos AI

    Explainable AI is essential for precision medicine - but here's what many are missing.

    My latest blog post unpacks a fascinating Nature Cancer paper showing multimodal AI outperforming traditional clinical tools by up to 34% in predicting outcomes.

    What surprised me most? Elevated C-reactive protein - typically a concerning marker - actually indicates LOWER risk when combined with high platelet counts. Some physicians may do this in their heads, but they simply cannot do this same analysis across thousands of variables systematically. With the right multimodal data and AI systems, we can create a fundamental shift in how we develop therapies and treat patients.

    Here's the twist: many argue we need randomized trials before implementing these AI tools. But that’s the wrong framework entirely. Google Maps doesn't drive your car - it gives you better navigation. Similarly, clinical AI doesn't treat patients - it reveals biological patterns that already exist.

    The real question: can we afford to ignore these multimodal patterns and connections in precision medicine? Or should we use AI as a tool to uncover them and help inform our decision making?

    Read my full analysis here: https://lnkd.in/gGA4KTip

    I'd love to hear from others working at this intersection: how is your organization approaching multimodal data integration in precision medicine?

    #PrecisionMedicine #HealthCareAI #CancerCare
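
The CRP/platelet observation is an interaction effect, and a toy example makes it concrete: with an explicit interaction feature, even a plain logistic regression can express "elevated CRP means something different at high platelet counts." The data below are synthetic and purely illustrative.

```python
# Synthetic illustration: an explicit interaction feature lets even a linear
# model express "elevated CRP means different things at different platelet
# counts". All data here are made up for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
crp = rng.exponential(5.0, size=1000)        # synthetic C-reactive protein
platelets = rng.normal(250, 60, size=1000)   # synthetic platelet counts
# Outcome built so that CRP's effect on risk flips with platelet level
risk = 0.4 * crp - 0.002 * crp * platelets + rng.normal(0, 1, 1000)
y = (risk > np.median(risk)).astype(int)

X = np.column_stack([crp, platelets, crp * platelets])  # interaction term
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(model.named_steps["logisticregression"].coef_)  # 3rd weight: interaction
```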

  • Ron Alfa, MD, PhD

    Co-Founder/CEO NOETIK.ai • Former Recursion (RXRX) • Stanford MD•PHD • Build the future

    People are seeing very limited performance gains from H&E-only models, even when adding a very significant quantity of additional data. Early signs suggest that scaling up these types of data is going to be less fruitful. By comparison, paired multimodal data produces more significant gains and enables foundation models to access new aspects of biology (e.g. cellular, molecular). But these types of data are expensive to generate and not publicly available - they cannot be cobbled together - reinforcing the importance of generating fit-for-purpose data to support frontier model development.

  • Vidith Phillips MD, MS

    Imaging AI Researcher, St Jude Children’s Research Hospital | Healthcare AI Strategist | Committee Member, American Neurological Association

    Genentech’s SpatialAgent might be the most capable LLM-based research agent to date 🧬 🧫 🎯

    ❇️ What if an AI agent could independently design spatial experiments, analyze complex multimodal datasets, and generate novel biological hypotheses without human prompting? SpatialAgent is an autonomous AI system built on large language models with integrated memory, planning, and tool execution. It represents a significant advancement in the automation of spatial biology research.

    👉 Key takeaways
    1️⃣ Expert-level performance: SpatialAgent outperformed leading computational tools and 90-95% of human experts in gene panel design and spatial prediction across datasets of the human brain, heart, and mouse colon.
    2️⃣ Multimodal annotation: It accurately annotated cell types and tissue niches using a combination of expression data, histology, and spatial references, generalizing across species and data modalities.
    3️⃣ Autonomous insight generation: In a blinded analysis of mouse colitis data, SpatialAgent independently recovered published findings and identified additional pathways such as TGF-β and IL-11 signaling that were not emphasized in the original study.
    4️⃣ Human-AI collaboration: When used in co-pilot mode, SpatialAgent enhanced human-designed gene panels, improving outcomes in 80-90% of cases and demonstrating the value of integrating AI into expert workflows.
    5️⃣ Real-world applicability: In an ongoing prostate cancer study, SpatialAgent refined a 5k pan-tissue panel by selecting 100 additional genes, improving resolution of stromal and immune compartments and refining cell-cell interaction mapping.

    🎯 This work sets a new standard for how autonomous agents can assist and accelerate biological discovery, bridging AI reasoning with scientific intuition.

    #research #ai #machinelearning #biology #data #artificialintelligence
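
SpatialAgent's internals are not public API, so nothing here is drawn from its code; the following is a generic sketch of the plan-act-observe loop that LLM agents with memory and tool execution run, with call_llm and the TOOLS registry as hypothetical placeholders.

```python
# Generic plan-act-observe agent loop, NOT SpatialAgent's implementation.
# call_llm and the TOOLS registry are hypothetical placeholders.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; always finishes immediately here."""
    return "FINISH: replace call_llm with a real model to do useful work"

TOOLS: dict[str, Callable[[str], str]] = {
    # e.g. "design_gene_panel": ..., "annotate_cell_types": ... (hypothetical)
}

def agent_loop(task: str, max_steps: int = 10) -> str:
    memory = [f"TASK: {task}"]  # accumulated context across steps
    for _ in range(max_steps):
        # Ask the model to choose the next tool given everything seen so far
        decision = call_llm("\n".join(memory) + "\nNext action (tool: input)?")
        tool_name, _, tool_input = decision.partition(":")
        if tool_name.strip() == "FINISH":
            return tool_input.strip()
        result = TOOLS[tool_name.strip()](tool_input.strip())
        memory.append(f"OBSERVATION from {tool_name.strip()}: {result}")
    return "Stopped: step budget exhausted."

print(agent_loop("annotate cell types in a spatial transcriptomics section"))
```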
