🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections — but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but come at a very high computational cost, often requiring supercomputers to run for months.
➡️ A new paper introduces EnScale — a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.
What makes EnScale stand out?
✅ It uses a generative ML model trained with a statistically principled loss (the energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
✅ It is multivariate – it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
✅ It is computationally lightweight – training and inference are up to 10–20× faster than state-of-the-art generative approaches
✅ It includes an extension (EnScale-t) for generating temporally consistent time series – a must for studying events like heatwaves or prolonged droughts
This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation — especially where computational resources are limited.
📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)
👏 Congrats to the authors — a strong step forward for ML-based climate modeling!
#climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
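For readers who want to see what a proper-scoring-rule objective looks like in practice, here is a minimal sketch of the multivariate energy score used as a training loss for an ensemble-generating network. The shapes, the sample count, and the generator call are illustrative assumptions, not the EnScale implementation.

```python
# Minimal sketch of an energy-score training loss for a multivariate generative
# downscaling model, in the spirit of a proper-scoring-rule objective.
# Shapes, sample count m, and the generator interface are assumptions.
import torch

def energy_score(samples: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
    """samples: (batch, m, d) ensemble drawn from the generator, where d is the
    flattened high-resolution fields of all variables; obs: (batch, d) target.
    Returns the mean energy score (lower is better)."""
    m = samples.shape[1]
    # E||X - y||: distance of each ensemble member to the observation
    term1 = torch.norm(samples - obs.unsqueeze(1), dim=-1).mean(dim=1)
    # 0.5 * E||X - X'||: average pairwise distance between members (unbiased form)
    pairwise = torch.cdist(samples, samples)              # (batch, m, m)
    term2 = pairwise.sum(dim=(1, 2)) / (2 * m * (m - 1))
    return (term1 - term2).mean()

# Usage sketch: draw m stochastic samples per GCM input, then minimize the
# energy score against the paired high-resolution target.
# samples = generator(gcm_input, noise)   # (batch, m, d), hypothetical call
# loss = energy_score(samples, rcm_target)
# loss.backward()
```

Because the energy score is a proper scoring rule, minimizing it rewards ensembles that are both sharp and correctly spread, which is what allows the probabilistic, multivariate outputs described above.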
Next steps for climate model development
Summary
Next steps for climate model development refer to ongoing advancements aimed at making climate models more accurate, detailed, and accessible, using new technologies such as artificial intelligence and improved data resolution. This work helps scientists and policymakers better predict, understand, and adapt to regional and global climate change.
- Embrace AI innovation: Integrating machine learning and new neural network architectures can dramatically speed up climate modeling and improve accuracy, especially for predicting rainfall and extreme events.
- Invest in fine detail: Supporting the creation of high-resolution climate models allows for more reliable local and regional forecasts, which are crucial for risk assessment and planning in areas most affected by climate change.
- Share data and tools: Making climate modeling frameworks and datasets open and accessible empowers more researchers, governments, and organizations to build and apply new solutions for climate adaptation and mitigation.
-
The future of climate modelling? Global climate modelling, the type of modelling that the Coupled Model Intercomparison Project (CMIP) undertakes and the type that underpins regional climate projections, uses grid resolutions of roughly 100 x 100 km pixels. While you can downscale those in a variety of ways, there are always uncertainties, and these include the detail of how our large-scale climate responds to global warming. If we get that wrong, the information fed into regional models is wrong, and that is a problem to say the least.
A summit held in Berlin recently explored ways forward, and there is a very nice report from that summit that proposes a solution costing on the order of 3 billion euros (about 5 billion Australian dollars) a year for each of 3-5 global modelling centres. This sounds like a lot, but relative to the costs of climate change it is an investment with potentially large returns. The goal would be 3-5 modelling systems at kilometre resolution, built to full-scale software engineering standards, providing high-quality projections for all countries. This is not to replace the many existing modelling centres; rather, it recognises that the requirements for kilometre-resolution models are beyond the capability of most countries.
There is a nice report on this at: https://lnkd.in/gmhjQZTb and the actual statement from the summit is available here: https://eve4climate.org/
As a footnote: if you are using kilometre-resolution data sourced from climate models, ask why the community that builds our existing global models is arguing for large investment in creating tools to produce kilometre-resolution data. This rather opens up the question of how robust global modellers believe products purporting to provide kilometre-resolution climate projections might be.
-
Further progress in AI+climate modeling: "Applying the ACE2 Emulator to SST Green's Functions for the E3SMv3 Global Atmosphere Model". Building on the ACE2 model, which uses our spherical Fourier neural operator (SFNO) architecture, this work shows that ACE2 can replicate climate model responses to sea surface temperature perturbations with high fidelity at a fraction of the cost. This accelerates climate sensitivity research and helps us better understand radiative feedbacks in the Earth system.
Background: The SFNO architecture was first used to train the FourCastNet weather model, whose latest version (v3) has state-of-the-art probabilistic calibration. AI+Science is not just about blindly applying the standard transformer/CNN "hammer". It is about carefully designing neural architectures that incorporate domain constraints like geometry and multiple scales while remaining expressive and easy to train. SFNO accomplishes both: it incorporates multiple scales and it respects the spherical geometry, and this is critical for success in climate modeling.
Unlike short-term weather, which requires only a few autoregressive steps for rollout, climate modeling requires long rollouts with thousands of time steps or more. Other AI-based models that ignore the spherical geometry, including Pangu and GraphCast, fail at long-term climate modeling: distortions build up at the poles because the models assume the domain is a rectangle, and these lead to catastrophic failures. Structure matters in AI+Science!
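To make the architectural point concrete, here is a heavily simplified sketch of a spectral filter on the sphere, the core ingredient of SFNO-style layers: transform the field into spherical-harmonic space, rescale each harmonic degree with a learnable gain, and transform back. The use of the torch-harmonics package, its constructor arguments, and the truncation choice are assumptions made for illustration; this is not the published SFNO block.

```python
# Conceptual sketch of a spherical spectral layer (SFNO-style idea).
# torch_harmonics usage below is an assumption about that library's API.
import torch
import torch.nn as nn
import torch_harmonics as th  # assumed dependency: NVIDIA's torch-harmonics package

class SphericalSpectralFilter(nn.Module):
    """Toy spectral layer on the sphere: SHT -> learnable per-degree gain -> inverse SHT."""
    def __init__(self, nlat: int, nlon: int, channels: int):
        super().__init__()
        lmax = nlat  # truncation; a common choice for equiangular grids (assumption)
        self.sht = th.RealSHT(nlat, nlon, lmax=lmax, mmax=lmax, grid="equiangular")
        self.isht = th.InverseRealSHT(nlat, nlon, lmax=lmax, mmax=lmax, grid="equiangular")
        # One gain per channel and harmonic degree l, shared across order m,
        # so the filter acts consistently across scales on the sphere.
        self.weight = nn.Parameter(torch.ones(channels, lmax))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, nlat, nlon) on a global lat-lon grid
        coeffs = self.sht(x)                         # complex coefficients (batch, channels, lmax, mmax)
        coeffs = coeffs * self.weight.unsqueeze(-1)  # rescale each degree l
        return self.isht(coeffs)                     # back to grid space, without pole artefacts
```

The point of working in spherical-harmonic space rather than on a flat rectangle is exactly the one made in the post: the operation respects the geometry of the globe, so errors do not accumulate at the poles during long rollouts.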
-
You might have seen news from our Google DeepMind colleagues lately on GenCast, which is changing the game of weather forecasting by building state-of-the-art weather models using AI. Some of our teams started to wonder: can we apply similar techniques to the notoriously compute-intensive challenge of climate modeling?
General circulation models (GCMs) are a critical part of climate modeling, focused on the physical aspects of the climate system, such as temperature, pressure, wind, and ocean currents. Traditional GCMs, while powerful, can struggle with precipitation, and our teams wanted to see if AI could help. Our team released a paper and data on our AI-based GCM, building on our Nature paper from last year, now predicting precipitation with greater accuracy than the prior state of the art. The new paper on NeuralGCM introduces 𝗺𝗼𝗱𝗲𝗹𝘀 𝘁𝗵𝗮𝘁 𝗹𝗲𝗮𝗿𝗻 𝗳𝗿𝗼𝗺 𝘀𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲 𝗱𝗮𝘁𝗮 𝘁𝗼 𝗽𝗿𝗼𝗱𝘂𝗰𝗲 𝗺𝗼𝗿𝗲 𝗿𝗲𝗮𝗹𝗶𝘀𝘁𝗶𝗰 𝗿𝗮𝗶𝗻 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗶𝗼𝗻𝘀. Kudos to Janni Yuval, Ian Langmore, Dmitrii Kochkov, and Stephan Hoyer! Here's why this is a big deal:
𝗟𝗲𝘀𝘀 𝗕𝗶𝗮𝘀, 𝗠𝗼𝗿𝗲 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆: These new models have less bias, meaning they align more closely with actual observations, both for forecasts up to 15 days and for 20-year projections (in which sea surface temperatures and sea ice were fixed at historical values, since we don’t yet have an ocean model). NeuralGCM forecasts are particularly strong around extremes, which are especially important for understanding climate anomalies, and can predict rain patterns throughout the day with better precision.
𝗖𝗼𝗺𝗯𝗶𝗻𝗶𝗻𝗴 𝗔𝗜, 𝗦𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲 𝗜𝗺𝗮𝗴𝗲𝗿𝘆, 𝗮𝗻𝗱 𝗣𝗵𝘆𝘀𝗶𝗰𝘀: The model combines a learned physics component with a differentiable dynamical core to leverage both physics and AI methods, and it is trained directly on satellite-based precipitation observations.
𝗢𝗽𝗲𝗻 𝗔𝗰𝗰𝗲𝘀𝘀 𝗳𝗼𝗿 𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲! This is perhaps the most exciting news! The team has made their pre-trained NeuralGCM model checkpoints (including their awesome new precipitation models) available under a CC BY-SA 4.0 license. Anyone can use and build upon this cutting-edge technology! https://lnkd.in/gfmAx_Ju
𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀: Accurate predictions of precipitation are crucial for everything from water resource management and flood mitigation to understanding the impacts of climate change on agriculture and ecosystems. Check out the paper to learn more: https://lnkd.in/geqaNTRP
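As a rough illustration of the hybrid idea described above (a differentiable dynamical core plus a learned correction, trained end to end through an autoregressive rollout), here is a toy sketch. The core, the network, the time step, and the loss are placeholders, not NeuralGCM's code or API.

```python
# Schematic of the "differentiable dynamical core + learned physics" idea,
# reduced to a toy 1-D state. Everything here is an illustrative stand-in.
import torch
import torch.nn as nn

class HybridStep(nn.Module):
    """One model time step: resolved dynamics from a differentiable core
    plus a learned correction for unresolved processes."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.GELU(), nn.Linear(hidden, state_dim)
        )

    def dynamical_core(self, state: torch.Tensor) -> torch.Tensor:
        # Toy stand-in for the resolved-physics tendency (e.g. advection/diffusion);
        # here simple linear damping so the example runs end to end.
        return -0.1 * state

    def forward(self, state: torch.Tensor, dt: float = 0.1) -> torch.Tensor:
        tendency = self.dynamical_core(state) + self.correction(state)
        return state + dt * tendency  # explicit Euler step, for the sketch only

# Because every step is differentiable, a whole rollout can be trained end to end
# against observations (e.g. satellite-based precipitation):
step = HybridStep(state_dim=8)
state = torch.randn(4, 8)
for _ in range(5):                  # short autoregressive rollout
    state = step(state)
loss = state.pow(2).mean()          # placeholder loss against a target
loss.backward()
```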
-
This recent pre-print from DeepMind shows not only impressive skill at forecasting tropical cyclones, but also demonstrates the strength of scoring-rule-based losses for training stochastic weather models, an approach we also use for training stochastic versions of NeuralGCM, and that ECMWF (in AIFS ENS) and Nvidia (in FourCastNet3) have recently adopted as well to build SOTA weather models. Scoring rule minimization is simpler to set up than diffusion models (all you need are stochastic inputs or weights, together with a proper scoring rule like CRPS), more flexible (e.g., it supports training on autoregressive roll-outs), and more performant out of the box (no need for iterative denoising). I think scoring rules may be a big part of the future of AI weather and climate models! https://lnkd.in/gAcaBuHT
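Below is a minimal sketch of that recipe: a network with stochastic noise inputs produces an ensemble, and a fair (unbiased) CRPS estimate is minimized, summed over a short autoregressive rollout. The toy model, data, and rollout scheme are illustrative assumptions, not the setup used by any of the groups mentioned above.

```python
# Sketch of scoring-rule training: stochastic inputs -> ensemble -> fair CRPS,
# accumulated over an autoregressive rollout. All components are toy placeholders.
import torch
import torch.nn as nn

def fair_crps(ens: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """ens: (batch, m) ensemble, y: (batch,) observation. Fair ensemble CRPS."""
    m = ens.shape[1]
    term1 = (ens - y.unsqueeze(1)).abs().mean(dim=1)
    term2 = (ens.unsqueeze(2) - ens.unsqueeze(1)).abs().sum(dim=(1, 2)) / (2 * m * (m - 1))
    return (term1 - term2).mean()

class StochasticStep(nn.Module):
    def __init__(self, dim: int = 1, noise_dim: int = 4):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, 32), nn.SiLU(), nn.Linear(32, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.randn(x.shape[0], self.noise_dim)  # stochastic input
        return self.net(torch.cat([x, z], dim=-1))

model = StochasticStep()
state = torch.randn(16, 1)                          # current state
targets = [torch.randn(16) for _ in range(3)]       # placeholder future observations

loss, members = 0.0, 8
for y in targets:                                   # train directly on a roll-out
    ens = torch.stack([model(state) for _ in range(members)], dim=1).squeeze(-1)  # (batch, m)
    loss = loss + fair_crps(ens, y)
    state = ens.mean(dim=1, keepdim=True)           # crude rollout: feed ensemble mean forward
loss.backward()
```

Note how little machinery is needed compared with a diffusion model: no noise schedule and no iterative denoising, just repeated stochastic forward passes and a proper score.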
-
Coupled chemistry-climate models (CCMs) are essential tools for understanding chemical variability in the climate system, but they are extraordinarily expensive to run. Eric Mei's recent paper shows that linear inverse models (LIMs) can be used to emulate CCMs at a fraction of the computational cost (a laptop versus an HPC system). This opens up new opportunities for strongly coupled chemistry-climate data assimilation, large ensembles, hypothesis testing, and cost/benefit analysis for nonlinear machine learning emulators of CCMs. In contrast to ML emulators, LIMs offer transparent explainability, illustrated by the accompanying figure showing the coupled, time-evolving relationship between sea-surface temperature, ozone, and hydroxyl radical for the El Niño mode in the model. Link to the paper: https://lnkd.in/d4DcJHVZ Supported by Schmidt Futures
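For readers unfamiliar with LIMs, here is a minimal sketch of the standard construction: estimate a linear propagator from lagged covariances of a reduced state, and take its matrix logarithm as the dynamics operator. The variable names and toy data are illustrative; this is not the code from the paper.

```python
# Minimal linear inverse model (LIM) sketch: fit dx/dt ≈ L x + noise from
# lagged covariances and forecast with the matrix exponential.
import numpy as np
from scipy.linalg import logm, expm

def fit_lim(X: np.ndarray, lag: int) -> np.ndarray:
    """X: (time, n_modes) anomaly time series, e.g. leading principal components
    of SST, ozone, and OH fields from a CCM. Returns the dynamics matrix L."""
    X0, X1 = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)        # contemporaneous covariance
    Ct = X1.T @ X0 / len(X0)        # lag-'lag' covariance
    G = Ct @ np.linalg.inv(C0)      # propagator over 'lag' time steps
    return np.real(logm(G)) / lag   # continuous-time operator

def lim_forecast(L: np.ndarray, x0: np.ndarray, tau: float) -> np.ndarray:
    """Deterministic part of the LIM forecast at lead time tau."""
    return expm(L * tau) @ x0

# Toy usage with synthetic AR(1) data standing in for reduced CCM output
rng = np.random.default_rng(0)
X = np.zeros((2000, 5))
for t in range(1, len(X)):
    X[t] = 0.9 * X[t - 1] + rng.standard_normal(5)
X -= X.mean(axis=0)

L = fit_lim(X, lag=3)
x_next = lim_forecast(L, X[-1], tau=3.0)
```

The explainability mentioned in the post comes directly from L: its eigenmodes and their time scales can be inspected, which is not something a black-box ML emulator offers by default.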
-
Scientists Combine Climate Models For More Accurate Projections -- https://lnkd.in/ga-82Kaw <-- shared technical article -- https://lnkd.in/gHFTDAYj <-- shared paper --
“Researchers... have created a new method for statistically analyzing climate models that projects future conditions with more fidelity. The method provides a way to adjust for models with high temperature sensitivities—a known problem in the community. By assigning different weights to models and combining them, the researchers estimate that the global temperature will increase between 2 and 5° Celsius by the end of the century. This projection, published in Nature Communications Earth & Environment [link above], aligns with previous projections, although this novel framework is more inclusive, avoiding the rejection of models that was common practice in previous methods...
A key parameter for these models—known as equilibrium climate sensitivity, or ECS—describes the relationship between a change in carbon dioxide and the corresponding warming. Although the Earth system has a true ECS, it is not a measurable quantity. Different lines of evidence can provide a plausible picture of the Earth's true ECS, which can alleviate the uncertainty of simulation models. However, many models assume a high ECS and predict higher temperatures in response to more atmospheric carbon dioxide than occurs in the real Earth system. Because these models provide estimates about future conditions to scientists and policymakers, it is important to ensure that they represent the conditions of the Earth as faithfully as possible. Previous methods mitigated this issue by eliminating models with a high ECS value.
"That was a heavy-handed approach," said Massoud. "The models that were thrown out might have good information that we need, especially for understanding the extreme ends of things."
"Instead, we adopted a tool called Bayesian Model Averaging, which is a way to combine models with varying influence when estimating their distribution," said Massoud. "We used this to constrain the ECS on these models, which enabled us to project future conditions without the 'hot model problem.'"...
This new method provides a framework for how to best understand a collection of climate models. The model weights included in this research informed the Fifth National Climate Assessment, a report released on Nov. 14 that gauges the impacts of climate change in the United States. This project also supports the Earth System Grid Federation, an international collaboration led in the U.S. by DOE that manages and provides access to climate models and observed data…”
#GIS #spatial #mapping #climatechange #spatialanalysis #spatiotemporal #model #modeling #numericmodeling #global #statistics #weighting #bayesian #modelaveraging #climatesensitivity #climatemodels #projection #ECS #earthsystem #ORNL
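As a simplified illustration of the weighting idea described in the article (not the published Bayesian Model Averaging implementation), the snippet below gives each model a weight based on how consistent its ECS is with an assessed ECS distribution, then forms a weighted projection. All numbers and the Gaussian evidence model are placeholders.

```python
# Simplified illustration: weight models by agreement of their ECS with an
# assessed ECS distribution instead of discarding high-ECS models outright.
import numpy as np

model_ecs = np.array([2.5, 3.1, 3.7, 4.8, 5.6])           # ECS (°C) per model, illustrative
model_warming_2100 = np.array([2.1, 2.7, 3.2, 4.3, 5.0])  # end-of-century warming (°C), illustrative

# Assessed ECS evidence summarized as a Gaussian (a stand-in assumption)
ecs_mean, ecs_sd = 3.25, 0.75
log_evidence = -0.5 * ((model_ecs - ecs_mean) / ecs_sd) ** 2
weights = np.exp(log_evidence - log_evidence.max())
weights /= weights.sum()

projection = np.sum(weights * model_warming_2100)
spread = np.sqrt(np.sum(weights * (model_warming_2100 - projection) ** 2))
print(f"weights: {np.round(weights, 2)}")
print(f"weighted projection: {projection:.2f} ± {spread:.2f} °C")
```

The key design point is the same one Massoud makes: hot models are down-weighted rather than thrown out, so information in their tails still contributes to the combined projection.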
-
My IPCC journal - Storms... could hold important advances in AR7, thanks to the development of coordinated high-resolution (1-5 km) regional climate simulations and projections. This is long awaited, as we know that many impactful hazards are due to small-scale phenomena (e.g. violent windstorms, extreme rainfalls, hailstorms, tropical cyclones, derechos, ...), which are usually not resolved by climate models. Such models are also called "Convection-permitting Models" (CPMs). Five years ago, capacities (computing, model development, coordination) did not allow multi-model experiments with CPMs. Since then, coordinated initiatives have taken place in several regions. The AR7 report should be a home for assessing those experiments and what we are learning about climate change impacts on storms, which will, for sure, be of high interest to society. Some papers: https://lnkd.in/eqWKUQKT https://lnkd.in/exdVq6y9 https://lnkd.in/eaY2p3vy
🌾 High-resolution simulations differ from previous ones in that they now explicitly resolve large convective clouds, which lead to thunderstorms and other severe weather conditions. Convective clouds involve complex interactions between dynamical (winds) and thermodynamical (temperature, humidity, microphysics) processes that can only be represented empirically in lower-resolution models.
🌾 Beyond processes, such models are expected to tell us more about the effect of climate change in complex terrain (mountains, coasts, urban areas). For urban areas, this will also bring great material to our Special Report on Climate Change and Cities.
🌾 As I stopped running regional climate simulations myself a few years ago and no longer participate in ensemble regional climate experiments, I am personally eager to see the advances brought by my colleagues in this area. Please continue the good work and papers!
🌾 IPCC, outreach:
- TSUs continue developing a strategy for pre-scoping activities, meant to inform the AR7 scoping meeting, feeding it with expectations and ideas from stakeholders and representatives of research groups and organizations. A survey will soon be sent to research organizations.
- For me, this week was also filled with outreach activities. I met with a number of representatives of companies engaged in transition, and was happy to present the main results of AR6 and the challenges for AR7, and to exchange views. This, in turn, triggered new requests for presentations (I will need to be careful about available time 😅). Through these discussions, I could measure the engagement of many in transition and in designing plans and strategies for decarbonization and value sharing. Transition cannot work if equity is left out of the scheme, internationally and within companies as well.
My presentations are always available from this link: 👉 https://lnkd.in/entCAewQ