We are at a pivotal moment where our traditional expertise meets the infinite possibilities of #AI - not just to maintain our leadership, but to reinvent it. I often say that #insurance is first and foremost about People, Tech and Data. Today, AI is reshaping our industry by amplifying the power of this equation.

The greatest challenge we face? I believe it is not technological 🤖 but profoundly human. AI is not here to replace the insurer. It is here to empower us to focus on what is important: protecting what matters with more precision and empathy than ever before.

Thanks to a thoughtful balance between innovation and ethics, automation and human contact, we are shaping a new future for insurance. AI should not be a simple race for productivity; it is also about enriching our experiences and relationships as human beings.

The results we are starting to see speak for themselves: more relevant solutions, enriched customer journeys, strengthened distribution networks. The insurance of tomorrow is taking shape before our eyes: hyper-personalized policies, advanced geospatial analysis, claims processed in minutes thanks to computable contracts... This vision is not a distant dream 💭 - we are building it today.

The pace and quality of this evolution will depend on our ability to use our data responsibly and, consequently, to train our teams to ensure conscious decision-making. The road ahead will be one of continuous adaptation and collaboration, bringing our deep industry knowledge and AI's potential together.

With the right balance of innovation and responsibility, we will open new possibilities for the future of insurance, creating solutions that genuinely serve our customers and partners. I'm optimistic and excited about what we can achieve together.
Innovation and Data Analytics
Explore top LinkedIn content from expert professionals.
-
Talk to Data: "the future of conversations with data"

Imagine asking your data a question and getting a clear, contextual answer that cites sources, shows the key numbers and explains itself in plain language. That future is closer than you think. Conversational access to data will change how organisations decide, act and who gets to make informed choices.

Key trends to watch:
• Voice and multimodal interfaces: Talking to data will include voice and visual inputs so people can point at charts, refine results naturally and get answers without a steep learning curve.
• Domain-tuned models and retrieval-augmented generation: General models will be combined with domain-specific retrieval so answers are both fluent and grounded in verified sources (a rough sketch follows below).
• Real-time contextual analytics: Conversations will keep context and work with streaming data, surfacing changes and anomalies as they happen.
• Knowledge graphs and semantic layers: Semantic models will resolve ambiguity, link concepts and deliver richer, explainable responses across datasets.
• Agents and workflow integration: Conversations will not just inform; they will trigger actions, schedule reports and integrate with business workflows.
• Governance, provenance and explainability: Provenance, audit trails and clear explanations will be essential to build trust and meet regulatory requirements.

Practical tip: Start with a focused pilot, pair it with strong governance and measure time to decision. Small wins build trust and scope for bigger change.

How is your organisation preparing to talk to data?

#TalkToData #Data #AI #Analytics #KnowledgeGraphs
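To make the retrieval-augmented idea concrete, here is a minimal sketch, assuming scikit-learn is available: retrieve the source snippets most relevant to a question and assemble a prompt that cites them. The file names, snippets, question, and scoring choices are hypothetical placeholders, not any product's actual pipeline.

```python
# A minimal sketch of grounding a data question in verified source snippets
# before it ever reaches a language model. All names and texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = {
    "q3_sales.md": "Q3 revenue was 4.2M EUR, up 8% quarter over quarter.",
    "churn_notes.md": "Churn fell to 2.1% after the onboarding redesign.",
    "ops_review.md": "Fulfilment delays peaked in July at 5.4 days on average.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k source snippets most similar to the question."""
    names, texts = list(sources), list(sources.values())
    vectorizer = TfidfVectorizer().fit(texts + [question])
    doc_vecs = vectorizer.transform(texts)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, names, texts), reverse=True)[:k]
    return [(name, text) for _, name, text in ranked]

question = "How did revenue change last quarter?"
context = retrieve(question)

# The grounded prompt cites its sources explicitly, so the eventual answer
# can point back to where each number came from.
prompt = "Answer using only the sources below and cite them.\n"
prompt += "\n".join(f"[{name}] {text}" for name, text in context)
prompt += f"\nQuestion: {question}"
print(prompt)
```

Swapping the TF-IDF retriever for domain-tuned embeddings and adding provenance logging is where the governance and explainability trends above come in.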
-
Organizations today don't struggle with a lack of data. The real challenge is turning that data into 𝘁𝗿𝘂𝘀𝘁𝗲𝗱, 𝗮𝗰𝘁𝗶𝗼𝗻𝗮𝗯𝗹𝗲 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲.

What's becoming clear is that traditional dashboards are no longer enough. Leaders need:
• 𝗚𝗼𝘃𝗲𝗿𝗻𝗲𝗱 𝗱𝗮𝘁𝗮 they can rely on
• 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗔𝗜 that makes insights transparent
• 𝗥𝗲𝗮𝗹-𝘁𝗶𝗺𝗲 𝗲𝘅𝗽𝗹𝗼𝗿𝗮𝘁𝗶𝗼𝗻 to answer business questions instantly
• 𝗠𝗖𝗣 𝘀𝗲𝗿𝘃𝗲𝗿 integration to build data-driven agents and applications that are context-aware through the Jedify MCP server

The shift is toward platforms that combine these elements, simplifying analysis while ensuring confidence in outcomes. Instead of static reports, businesses gain a continuous flow of intelligence to guide decisions.

One example of this direction is Jedify's approach, which emphasizes explainability and trust while enabling powerful, context-aware analytics: https://bit.ly/415Z8PS

As data becomes the foundation of competitive advantage, the question isn't just how much data you have, but 𝗵𝗼𝘄 𝗰𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝘁𝗹𝘆 𝘆𝗼𝘂 𝗰𝗮𝗻 𝗮𝗰𝘁 𝗼𝗻 𝗶𝘁.
-
GenAI's black box problem is becoming a real business problem. Large language models are racing ahead of our ability to explain them. That gap (the "representational gap" for the cool kids) is no longer just academic; it is now a #compliance and risk management issue.

Why it matters:
• Reliability: If you can't trace how a model reached its conclusion, you can't validate accuracy.
• Resilience: Without interpretability, you can't fix failures or confirm fixes.
• Regulation: From the EU AI Act to sector regulators in finance and health care, transparency is quickly becoming non-negotiable.

Signals from the frontier:
• Banks are stress-testing GenAI the same way they test credit models, using surrogate testing, statistical analysis, and guardrails (a small illustration of surrogate testing follows below).
• Researchers at firms like #Anthropic are mapping millions of features inside LLMs, creating "control knobs" to adjust behavior and probes that flag risky outputs before they surface.

As AI shifts from answering prompts to running workflows and making autonomous decisions, traceability will move from optional to mandatory.

The takeaway: Interpretability is no longer a nice-to-have. It is a license to operate. Companies that lean in will not only satisfy regulators but also build the trust of customers, partners, and employees.

Tip of the hat to Alison Hu, Sanmitra Bhattacharya, PhD, Gina Schaefer, Rich O'Connell and Beena Ammanath's whole team for this great read.
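As a rough illustration of surrogate testing (not any bank's actual method), the sketch below fits a small, readable decision tree to mimic a black-box classifier and reports how faithfully it reproduces the black box's decisions. The synthetic data, model choices, and tree depth are arbitrary assumptions.

```python
# Surrogate testing sketch: approximate an opaque model with an interpretable
# one trained on the opaque model's predictions, then measure fidelity.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# Stand-in for the opaque production model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# Surrogate: a shallow tree trained to mimic the black box, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_preds)
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))

print(f"Surrogate fidelity vs. black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

High fidelity means the printed tree is a fair (if simplified) explanation of the black box's behavior; low fidelity is itself a useful risk signal.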
-
Just got back from a great week in New Orleans for #AMS25. Three observations about where things are headed with weather data in 2025.

1. The private-sector presence in atmospheric science will continue to grow. On the "data provider" side, there is an incredible amount of private-sector innovation happening around AI-driven weather forecasting, from big tech companies (Google DeepMind, NVIDIA, Microsoft) to startups (Brightband, Excarta, Silurian AI, Salient, Zeus AI, Jua.ai, etc.) to more established companies breaking into this field (e.g. Spire, Tomorrow.io). On the "end user" side, we're seeing growing sophistication by commodities and energy traders in their use of weather data, with many companies building significant in-house capacity for analytics and modeling by building data infrastructure and hiring meteorologists.

2. AI is changing the requirements for data systems. Whereas the main infrastructure requirement for weather forecasting used to be a hard-core HPC system for compute-bound workloads, AI requires smaller, GPU-based clusters, plus a massive trove of clean, AI-ready data upon which to train. I/O bottlenecks are surpassing compute bottlenecks for many teams. Ingestion, curation, and optimization of training and evaluation datasets is becoming a major priority. In this new world, modeling is starting to look a lot more like data analytics and visualization in terms of infrastructure requirements.

3. The Pangeo Community stack, built on the foundation of Xarray and Zarr, continues to expand its impact in this new AI-centric world. Nearly every team training AI weather models is doing so from Zarr data. However, practices vary widely in terms of on-disk data layout and data loader architecture. The coming year will likely see some convergence. European Centre for Medium-Range Weather Forecasts - ECMWF's Anemoi framework caught my eye as an interesting new approach. A talk by Alfonso Ladino-Rincon about analysis-ready Zarr-based radar data was another highlight. Meanwhile, the centrality of GRIB files in operational meteorology systems will continue to create friction for practitioners by requiring slow and costly data transformations. Will 2025 be the year we see operational forecasts in Zarr from a public-sector data provider? Or will the VirtualiZarr approach make this question moot?

Overall, this conference was extremely fun and stimulating for me. We had a ton of traffic at the Earthmover booth. I'm more convinced than ever that solving "boring" data infrastructure problems can help accelerate work across the weather and climate enterprise. Feeling fired up about the opportunities ahead!
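As a toy version of the Zarr pattern described in point 3, the sketch below writes a small gridded field to a chunked Zarr store and lazily reopens it. It assumes xarray, dask, and zarr are installed; the variable name, grid, chunking, and store path are invented for illustration.

```python
# Minimal analysis-ready Zarr workflow: write chunked, read lazily.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2025-01-01", periods=24, freq="h")
lat = np.linspace(-90, 90, 181)
lon = np.linspace(0, 359, 360)

ds = xr.Dataset(
    {"t2m": (("time", "lat", "lon"),
             15 + 5 * np.random.rand(len(times), len(lat), len(lon)))},
    coords={"time": times, "lat": lat, "lon": lon},
)

# Chunking along time is one common layout for training pipelines; the right
# on-disk layout depends on how the data loader actually reads samples.
ds.chunk({"time": 6}).to_zarr("toy_t2m.zarr", mode="w")

reopened = xr.open_zarr("toy_t2m.zarr")            # lazy, chunk-aware read
print(reopened["t2m"].isel(time=0).mean().values)  # compute touches only needed chunks
```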
-
AI is transforming insurance! Here's how:

Generative AI is revolutionizing predictions. 34% of insurers find it most effective in predictive analytics, where it enables better demand analysis and ensures companies are prepared for market changes.

Automated customer advice is another game-changer. Personalized experiences are now possible, enhancing customer satisfaction and loyalty. Natural language processing (NLP) and voice recognition improve underwriting processes, making them faster and more accurate.

Fraud detection has seen significant advancements with AI-driven image recognition. This technology helps identify suspicious activities quickly, reducing financial losses and enhancing security.

Productivity has notably increased in countries like Germany, Spain, and Austria. A 0.5% boost in productivity can lead to a 1% decrease in labor costs. This is crucial as the EU-27 workforce is expected to shrink by 20% by 2050 due to an aging population.

Contrary to popular belief, AI is not a job killer. Allianz Research shows AI is more likely to boost productivity and skills than to cause mass job losses. AI can help address labor shortages and aging-workforce challenges.

AI in insurance is about balancing innovation with regulation: leveraging AI's benefits while addressing concerns. The goal is to enhance efficiency, improve customer experiences, and maintain robust security.

If you're in the insurance sector and want to harness the power of AI, let's talk. Our team at CellStrat is here to help you navigate this transformation and solve your unique challenges. Reach out to us today for a consultation!
-
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering

That has been our position from the beginning, when we started researching how knowledge graphs increase the accuracy of LLM-powered question answering systems over 2 years ago! The intersection of knowledge graphs and large language models (LLMs) isn't theoretical anymore. It's been a game-changer for enterprise question answering, and now everyone is talking about it and many are doing it. 🚀

This new paper summarizes the lessons we learned implementing this technology at data.world and working with customers, and outlines the opportunities for future research contributions and where the industry needs to go (guess where the data.world AI Lab is focusing). Sneak peek and link in the comments.

Lessons Learned
✅ Knowledge engineering is essential but underutilized: Across organizations, it's often sporadic and inconsistent, leading to assumptions and misalignment. It's time to systematize this critical work.
✅ Explainability builds trust: Showing users exactly how an answer is derived, including auto-corrections, increases transparency and confidence.
✅ Governance matters: Aligning answers with an organization's business glossary ensures consistency and clarity (a toy sketch of this grounding step follows below).
✅ Avoid "boiling the ocean": Don't tackle too many questions at once. A pay-as-you-go approach ensures meaningful progress without overwhelm.
✅ Testing matters: Non-deterministic systems like LLMs require new frameworks to test ambiguity and validate responses effectively.

Where the Industry Needs to Go
🌟 Simplified knowledge engineering: Tools and methodologies must make this foundational work easier for everyone.
🌟 User-centric explainability: Different users have different needs, so we need to focus on "explainable to whom?"
🌟 Testing non-deterministic systems: The deterministic models of yesterday won't cut it. We need innovative frameworks to ensure quality in LLM-powered software applications.
🌟 Small semantics vs. large semantics: The concept of semantics is being increasingly referenced in industry in the context of "semantic layers" for BI and analytics. Let's close the gap between the small semantics (fact/dimension modeling) and the large semantics (ontologies, taxonomies).
🌟 Multi-agent systems: Break down the problem into smaller, more manageable components. Should one agent handle the core task of answering questions and managing ambiguity, or should these be split into separate agents?

This research reflects our commitment to co-innovate with customers to solve real-world challenges in enterprise AI.

💬 What do you think? How are knowledge graphs shaping your AI strategies?
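Here is a toy sketch of the grounding step referenced under "Governance matters": resolve a user's term against a governed glossary held in a small RDF knowledge graph before any LLM sees the question. It assumes rdflib is installed; the namespace, triples, and metric are hypothetical and not data.world's implementation.

```python
# Resolve a business term against a tiny glossary graph with SPARQL, then use
# the governed definition as grounded context for an LLM's answer.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Tiny business glossary: "Churn Rate" is a governed metric definition.
g.add((EX.ChurnRate, RDF.type, EX.Metric))
g.add((EX.ChurnRate, RDFS.label, Literal("Churn Rate")))
g.add((EX.ChurnRate, EX.definition,
       Literal("Customers lost in a period divided by customers at period start")))
g.add((EX.ChurnRate, EX.owner, Literal("Customer Analytics team")))

query = """
    SELECT ?label ?definition ?owner WHERE {
        ?metric a ex:Metric ;
                rdfs:label ?label ;
                ex:definition ?definition ;
                ex:owner ?owner .
        FILTER (LCASE(STR(?label)) = "churn rate")
    }
"""
for label, definition, owner in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    context = f"{label}: {definition} (owned by {owner})"
    print("Grounded context for the LLM:", context)
```

If the term does not resolve, the system can say so instead of guessing, which is exactly the explainability and governance behavior described above.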
-
Hello Friends, continuing our AI & ML Journey in Atmospheric Sciences.

Previously, we introduced the core methods of AI and machine learning for weather and climate prediction, laying the groundwork for transforming raw atmospheric data into actionable insights. Today, let's dive deeper into the data preprocessing stage, a crucial step that bridges raw data and robust ML models.

In weather and climate research, raw data from stations and satellites is noisy, incomplete, and heterogeneous. That's why building a robust machine learning model starts with solid data preprocessing.

Starting Simple (a short worked example follows below):
• Data Cleaning: Remove duplicates and flag outliers using methods like Z-score analysis (e.g., flag values if |(x - mean)/std| > 3).
• Missing Data Imputation: Replace missing values with the mean or median.
• Scaling & Encoding: Use basic methods like min-max scaling to map data into [0,1] and one-hot encoding for categorical variables.

While these simple methods are easy to implement, they have limitations. They often fail when data is highly skewed, when outliers distort averages, or when relationships between variables matter.

Advancing Your Approach:
• Robust Outlier Detection: Use median absolute deviation (MAD) or Isolation Forests for more resilient outlier handling.
• Model-Based Imputation: Go beyond simple averages with kNN, MICE, or even autoencoder-based methods to capture complex relationships.
• Advanced Scaling/Transformation: Apply robust scaling, Box–Cox or Yeo–Johnson transformations to better stabilize variance.
• Sophisticated Feature Extraction: Use PCA, ICA, t-SNE, or UMAP to reduce dimensionality while preserving key patterns.
• Enhanced Categorical Encoding & Imbalance Handling: Employ target encoding or frequency encoding, and use techniques like SMOTE or ADASYN to balance rare events.
• Time Series & Data Fusion: For time series, decompose data seasonally or use differencing; combine multiple data sources using Kalman filters.

How do you handle complex atmospheric data? Let's discuss! Please check out the PDF for more detailed discussion and classification.

#MachineLearning #DataPreprocessing #ClimateScience #Meteorology #DataScience
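Here is the short worked example referenced under "Starting Simple", assuming pandas and NumPy: z-score outlier flagging, median imputation, and min-max scaling on a made-up station temperature series. The injected faults and the |z| > 3 threshold are illustrative choices.

```python
# "Starting simple" preprocessing on a synthetic station temperature series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
temps = pd.Series(rng.normal(loc=12.0, scale=4.0, size=500))
temps.iloc[[10, 200]] = [85.0, -60.0]   # injected sensor spikes
temps.iloc[[50, 300, 301]] = np.nan     # injected gaps

# 1) Flag outliers with |z| > 3 and treat them as missing.
z = (temps - temps.mean()) / temps.std()
cleaned = temps.mask(z.abs() > 3)

# 2) Impute the gaps with the median (robust to any remaining skew).
imputed = cleaned.fillna(cleaned.median())

# 3) Min-max scale into [0, 1] for model input.
scaled = (imputed - imputed.min()) / (imputed.max() - imputed.min())

print(f"flagged outliers: {int((z.abs() > 3).sum())}, "
      f"scaled range: [{scaled.min():.2f}, {scaled.max():.2f}]")
```

Note the limitation called out in the post: the spikes inflate the mean and standard deviation used for the z-scores, which is exactly why MAD or Isolation Forests are the next step.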
-
Quantum-Inspired Algorithm Revolutionizes Weather Forecasting and Turbulence Simulations

Overview
• Researchers at the University of Oxford have developed a quantum-inspired algorithm that dramatically speeds up weather forecasting and fluid turbulence simulations.
• This algorithm can reduce computation times from several days on a supercomputer to just hours on a regular laptop.
• Unlike traditional quantum computers, this method runs on classical machines while borrowing key quantum principles.

Why It Matters
• Improved Weather Forecasting: More efficient simulations could lead to faster and more accurate weather predictions, helping governments and businesses prepare for extreme weather events.
• Advancements in Industrial Efficiency: Turbulence simulations are critical in aerospace, automotive, and energy industries to optimize fluid dynamics, reduce fuel consumption, and enhance design processes.
• Bridging Classical and Quantum Computing: The approach mimics quantum computing advantages without requiring fully developed quantum hardware. It simplifies turbulence modeling by using tensor networks, an advanced mathematical framework used in quantum mechanics.

The Bigger Picture
• Accelerating Computational Science: This breakthrough aligns with the broader trend of quantum-inspired computing driving innovation in science, engineering, and meteorology.
• Future of Quantum Algorithms: While true quantum computing is still developing, quantum-inspired techniques are already proving useful for real-world applications.
• AI and Quantum Synergy: Combining this algorithm with AI-driven climate models could further enhance weather prediction and environmental modeling.

Bottom Line
This quantum-inspired algorithm marks a major leap in computational efficiency, enabling faster weather forecasts and fluid simulations on classical computers. By applying quantum principles to classical computing, researchers are bridging the gap between today's technology and future quantum breakthroughs.
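As a loose intuition for the tensor-network idea mentioned above (this is not the Oxford algorithm), the sketch below compresses a smooth 2-D field with a truncated SVD, the simplest low-rank factorization, and reports the compression ratio and reconstruction error. Tensor networks extend this kind of factorization to many dimensions, which is where the large savings come from.

```python
# Toy low-rank compression of a smooth, structured field with a truncated SVD.
import numpy as np

x = np.linspace(0, 2 * np.pi, 256)
y = np.linspace(0, 2 * np.pi, 256)
X, Y = np.meshgrid(x, y)
field = np.sin(X) * np.cos(Y) + 0.3 * np.sin(3 * X + Y)   # smooth synthetic field

U, s, Vt = np.linalg.svd(field, full_matrices=False)
rank = 8                                                   # keep only 8 modes
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

stored = rank * (U.shape[0] + Vt.shape[1] + 1)             # factor storage cost
error = np.linalg.norm(field - approx) / np.linalg.norm(field)
print(f"compression: {field.size / stored:.1f}x, relative error: {error:.2e}")
```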
-
📌 The Future of Agentic Analytics in BI

There's a growing misconception right now... that layering AI into your dashboards will magically transform your analytics.

There's a lot of hype around AI agents in analytics:
⤷ Natural language interfaces.
⤷ Auto-generated insights.
⤷ Chat-based dashboards.

You might've even heard of the term Agentic Analytics. The promise is that business users will be able to "ask anything" and get instant answers from data.

But here's the problem no one's talking about: most organizations aren't ready for AI agents yet. Not because the tech isn't mature, but because their data context is broken.

→ If your KPIs are misaligned across teams…
→ If your semantic layer is missing or incomplete…
→ If no one trusts how metrics are calculated…

Then all an AI agent will do is generate faster wrong answers. You'll get output but not outcomes.

Before you invest in Agentic Analytics, ask yourself:
1) Do we have a single source of truth for our KPIs?
2) Is our semantic layer well-structured and governed?
3) Are stakeholders confident in the meaning behind the metrics?
4) Can business users explore data on their own?

If not, the priority isn't AI. It's trust, structure, and shared understanding.

That's why the recent Salesforce acquisition of Informatica makes perfect sense. While the market chases the next flashy analytics tool, Salesforce is investing in the fundamentals:
→ Data integration
→ Metadata
→ Governance

Because they understand this: AI is only as effective as the context it runs on.

Here's what I've seen work in the real world:
1️⃣ 𝐒𝐭𝐚𝐫𝐭 𝐰𝐢𝐭𝐡 𝐲𝐨𝐮𝐫 𝐬𝐞𝐦𝐚𝐧𝐭𝐢𝐜 𝐥𝐚𝐲𝐞𝐫: Define your KPIs, dimensions, and filters like you're building a product (a rough sketch follows below).
2️⃣ 𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐥𝐨𝐠𝐢𝐜: Explain what each metric means and where it comes from.
3️⃣ 𝐀𝐥𝐢𝐠𝐧 𝐚𝐜𝐫𝐨𝐬𝐬 𝐝𝐞𝐩𝐚𝐫𝐭𝐦𝐞𝐧𝐭𝐬: Marketing, sales, and ops should all speak the same data language.
4️⃣ 𝐁𝐮𝐢𝐥𝐝 𝐝𝐚𝐭𝐚 𝐭𝐫𝐮𝐬𝐭: Through consistency, transparency, and usage-based feedback.
5️⃣ 𝐃𝐞𝐩𝐥𝐨𝐲 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬: Then, and only then, explore AI as a layer on top of a solid foundation.

BI without context is just noise. And AI without structure is just risk at scale.

If you're serious about improving decision-making in your business, fix your foundations first. The tools will come and go. Context is what makes them useful.

#DataStrategy #BusinessIntelligence #DataGovernance
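As a rough sketch of step 1, treating the semantic layer like a product: the snippet below defines governed metric objects that an agent must resolve before answering. The metric, SQL, dimensions, and owner are invented for illustration; real semantic layers (dbt MetricFlow, LookML, Cube, and similar) use their own formats.

```python
# A toy "governed semantic layer": one agreed definition per metric,
# documented business logic, and an explicit refusal when a term is ungoverned.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    description: str          # business logic in plain language
    sql: str                  # the single agreed-upon calculation
    dimensions: list[str]     # how the metric may be sliced
    owner: str                # who is accountable for the definition

SEMANTIC_LAYER = {
    "monthly_churn_rate": Metric(
        name="monthly_churn_rate",
        description="Customers lost in a month divided by customers at month start.",
        sql="SELECT 1.0 * lost_customers / starting_customers FROM churn_monthly",
        dimensions=["month", "segment", "region"],
        owner="Customer Analytics",
    ),
}

def resolve(term: str) -> Metric:
    """What an agent should do first: resolve terms against governed definitions."""
    metric = SEMANTIC_LAYER.get(term)
    if metric is None:
        raise KeyError(f"'{term}' is not a governed metric; refuse to guess.")
    return metric

print(resolve("monthly_churn_rate").description)
```

The point is the contract, not the container: an agent that can only answer through definitions like these produces slower wrong answers far less often.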