𝗝𝘂𝘀𝘁 𝗥𝗲𝗹𝗲𝗮𝘀𝗲𝗱: 𝗨𝗡𝗘𝗣 𝗙𝗜 & 𝗚𝗹𝗼𝗯𝗮𝗹 𝗖𝗿𝗲𝗱𝗶𝘁 𝗗𝗮𝘁𝗮 𝗦𝘂𝗿𝘃𝗲𝘆 𝘀𝗵𝗼𝘄𝘀 𝗰𝗹𝗶𝗺𝗮𝘁𝗲 𝗿𝗶𝘀𝗸 𝗶𝘀 𝗰𝗿𝗲𝗱𝗶𝘁 𝗿𝗶𝘀𝗸. 𝗕𝗮𝗻𝗸𝘀 𝗮𝗿𝗲 𝘀𝘁𝗶𝗹𝗹 𝗻𝗼𝘁 𝗱𝗼𝗶𝗻𝗴 𝗲𝗻𝗼𝘂𝗴𝗵.

Regulators have raised the bar for climate disclosure, but banks are still a long way from embedding climate risk in their business.

🔸 Collateral value adjustment remains low. Just 12% of banks adjust collateral values for physical risk, and only 4% for transition risk.

🔸 ESG integration is fragmented. Over half of banks have internal ESG scoring, but there is no consensus approach: few tie ESG directly into credit decisions, methods vary, and full integration into ratings is rare.

🔸 Scenario analysis is widespread, but validation is not. 85% of banks use NGFS climate scenarios, yet fewer than 5% regularly backtest climate impacts in their credit models.

🔸 Incorporating climate into key credit metrics is lagging. Probability of Default (PD), Loss Given Default (LGD) and internal ratings-based (IRB) models reflect climate risk considerations only partially and inconsistently.

🔸 Adjustments to ECL (Expected Credit Loss), RWA (Risk-Weighted Assets) and economic capital remain low and are still at an early, exploratory stage. Most banks report a financial impact from climate risks of 0-2.5%; for transition risks this rises to 5-10%, but neither figure is yet reflected in key metrics (a minimal calculation sketch follows this post). A significant gap remains in quantifying and adopting the capital impact.

🔸 Many banks still rely on expert judgement over data-driven models. While climate risk is assessed across major portfolios, most banks depend on judgement due to data and methodological constraints.

🔸 Data quality and granularity are key obstacles. Gaps in robust, forward-looking climate data (especially Scope 3) push banks toward proxies and general averages.

𝗠𝘆 𝗧𝗮𝗸𝗲

The UNEP report shows the banking sector still struggles to consistently quantify and integrate climate risk across credit portfolios, capital models, and client processes. Most banks remain reliant on expert judgement and qualitative overlays, mainly for lack of granular, forward-looking data and practical scenario analytics. Scenario analysis exists but is rarely embedded deeply in major decisions, and backtesting is the exception.

This is where data-driven platforms are critical. Granular scenario analysis, data harmonisation, and dynamic simulation let banks move beyond overlays to defensible, auditable climate risk insights. The leaders will be the banks that industrialise scenario analytics and turn regulatory pressure into real competitive advantage.

#ClimateRisk #CreditRisk #Banking #ESG #RiskManagement #SustainableFinance

Source: https://lnkd.in/eC4S8mRN

___________

𝘛𝘩𝘦𝘴𝘦 𝘷𝘪𝘦𝘸𝘴 𝘢𝘳𝘦 𝘮𝘺 𝘰𝘸𝘯. 𝘍𝘰𝘭𝘭𝘰𝘸 𝘮𝘦 𝘰𝘯 𝘓𝘪𝘯𝘬𝘦𝘥𝘐𝘯: Scott Kelly
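The ECL point above lends itself to a concrete illustration. The sketch below applies a climate multiplier to a baseline PD and propagates it into a one-period expected credit loss (ECL = PD × LGD × EAD). All inputs, scenario names, and multiplier values are illustrative assumptions, not figures from the survey.

```python
# Minimal sketch of a climate scenario overlay on expected credit loss.
# Every number here is an illustrative assumption, not a survey figure.

pd_baseline = 0.02      # through-the-cycle probability of default (PD)
lgd = 0.45              # loss given default (LGD)
ead = 1_000_000         # exposure at default (EAD)

# Hypothetical scalar PD multipliers per climate scenario.
scenario_pd_multiplier = {
    "baseline": 1.00,
    "orderly_transition": 1.05,
    "disorderly_transition": 1.40,
    "hot_house_world": 1.25,
}

for scenario, mult in scenario_pd_multiplier.items():
    pd_adj = min(pd_baseline * mult, 1.0)   # cap adjusted PD at 100%
    ecl = pd_adj * lgd * ead                # one-period ECL = PD x LGD x EAD
    print(f"{scenario:>22}: PD={pd_adj:.2%}, ECL={ecl:,.0f}")
```

In practice a bank would differentiate multipliers by sector, geography and tenor, and derive them from scenario expansion tooling rather than hard-coding them; the point here is only how an overlay propagates into ECL.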
Expert Judgment vs Data-Driven Climate Risk Models
Summary
Expert judgment vs data-driven climate risk models refers to the choice between relying on human expertise and experience and relying on statistical analysis of large datasets when assessing climate-related risks, especially in banking and finance. While data-driven models offer measurable insights, expert judgment remains vital when data is limited or scenarios are highly complex.
- Balance approaches: Combine expert insights with data analysis to improve the accuracy of climate risk assessments, especially when dealing with rare or unprecedented events (see the sketch after this list).
- Address data gaps: Use expert judgment to fill in areas where data is insufficient or unreliable, but continually seek ways to improve data quality and granularity.
- Refine decision-making: Recognize that both methods involve subjective decisions and regularly review model assumptions and interpretations to avoid biases.
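As an illustration of the first point, here is a minimal sketch of one common way to blend the two approaches: encode an expert's view of a segment's default rate as a Beta prior and update it with sparse observed defaults. The prior parameters and default counts below are hypothetical.

```python
from scipy import stats

# Expert prior on the annual default rate for a thinly-populated segment,
# encoded as a Beta distribution (hypothetical prior: mean 3%, fairly diffuse).
prior_alpha, prior_beta = 3, 97

# Sparse observed data: 2 defaults out of 150 obligor-years (illustrative).
defaults, obligor_years = 2, 150

# Conjugate update: the posterior blends expert judgment with the data.
post_alpha = prior_alpha + defaults
post_beta = prior_beta + (obligor_years - defaults)
posterior = stats.beta(post_alpha, post_beta)

print(f"Prior mean PD:     {prior_alpha / (prior_alpha + prior_beta):.2%}")
print(f"Posterior mean PD: {posterior.mean():.2%}")
print(f"95% credible interval: "
      f"({posterior.ppf(0.025):.2%}, {posterior.ppf(0.975):.2%})")
```

As more data arrives, the posterior is pulled away from the expert prior toward the observed default rate, which is exactly the graceful degradation of expert judgment the bullet describes.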
The UNEP FI "Bridging Climate and Credit Risk" report delves into how 32 global banks are incorporating climate-related risks into their credit risk frameworks. These banks evaluate physical and transition risks across sectors such as real estate, energy, and transport, focusing on exposure classes such as large corporates and SMEs. While expert judgment remains crucial, there is a notable shift towards data-driven approaches. The outcomes of climate risk assessments feed into regulatory reporting, credit decisions, and client interactions, influencing activities such as loan repricing and risk ratings.

Despite progress in integrating climate risks into Probability of Default (PD) and Loss Given Default (LGD) models, their integration into internal models such as IRB or rank-ordering remains limited. Scenario analysis, including NGFS scenarios, is prevalent, but challenges persist with Scope 3 emissions data. More than half of the banks surveyed employ ESG scoring frameworks, yet integration methods vary owing to data quality issues, methodological constraints, and resource limitations.

The report advocates refining climate-credit risk models, strengthening data governance, and promoting closer engagement with regulators. It emphasizes the need for banks to embrace proactive measures such as stress testing, margins of conservatism, and broader sustainability integration to navigate long-term climate-related credit risks effectively.
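Since validation is one of the weak spots the report flags, a minimal sketch of what a PD backtest can look like may help: a one-sided binomial test of realized defaults against the PD a model assigned to a rating grade. The counts below are hypothetical.

```python
from scipy import stats

# Hypothetical one-year backtest for a rating grade: the model assigned
# a PD of 1.5% to 2,000 obligors, and 45 of them defaulted.
model_pd = 0.015
n_obligors = 2_000
realized_defaults = 45

# One-sided binomial test: are realized defaults significantly above
# what the assigned PD implies?
p_value = stats.binomtest(realized_defaults, n_obligors, model_pd,
                          alternative="greater").pvalue
realized_rate = realized_defaults / n_obligors
print(f"Model PD: {model_pd:.2%}, realized rate: {realized_rate:.2%}, "
      f"p-value: {p_value:.4f}")
# A small p-value suggests the grade's PD understates risk and the
# climate overlay (or the rating itself) needs recalibration.
```

The same test run grade by grade, year by year, is one simple way to make climate-adjusted PDs auditable rather than purely judgmental.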
-
The “subjectivity beast” in risk analysis: Are statistical models better than expert opinions?

This post is a matter close to my heart. I have heard and read so many (misleading) statements about the superiority of “objective” (i.e., statistical) over “subjective” (i.e., expert opinion-based) risk analysis. It is a complex topic that deserves much more than a simple post, and I can't cover all its complexity and nuance here. See this as a starting point for a hopefully great discussion.

It is true that, for some risks, data-driven risk analysis and even simple quantitative algorithms regularly outperform experts, as the evidence clearly shows. There are many reasons for this: cognitive biases at play, environments where experience does not translate into learning, too little experience with certain risks, and more.

It is not true that statistical models mean “objective risk analysis.” Many decisions remain highly subjective, such as the choice of the statistical model, the choice of the sample, and assumptions about the causality embedded in the model.

It is tempting to equate objectivity with “quantitative” risk analysis and subjectivity with “qualitative” risk analysis. I'm afraid that's not right. Here is why:

- Purely quantitative statistical models can rest entirely on subjective probability and impact distributions assessed by experts. For example, I can run a Monte Carlo simulation on a triangular distribution in which experts estimate the worst, best, and most likely scenarios (a minimal sketch follows this post).
- Statistical results require human interpretation, which may itself be biased.
- A statistical model cannot ensure the analysis problem is correctly framed (e.g., risk scenarios that cover only short-term impacts). Statistical analysis starts and ends with subjective decisions.

Specifically for rare risks, expert opinion may outperform statistical analysis simply because no data exists. Remember that probability theory cannot be applied to assessing single-event risks that have yet to occur.

- Experts may flag wrong model assumptions, or combine limited data with an educated opinion (the combination may be better than relying on data alone).
- Experts may use scenario analysis to reveal wrongly framed risks.
- Experts may decompose complex risks using event tree analysis.
- Experts may adjust the results of data-driven analysis.

So what does that mean? Two things. First, there is no such thing as objective risk analysis, even if your risk management is fully “quantitative.” It may even lead to the paradox that quantitative risk analysis is more biased precisely because it is believed to be objective. Second, for some risks, the dominant strategy is to rely on expert opinion, and for good reason: experts may outperform statistical analysis in assessing rare (but detrimental) risks.

Institut für Finanzdienstleistungen Zug IFZ Lucerne University of Applied Sciences and Arts #ifzriskmanagement
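To make the Monte Carlo example in the post concrete, here is a minimal sketch: a simulation driven entirely by a triangular distribution parameterised from expert-elicited best, most likely, and worst-case losses. The loss figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Expert-elicited loss estimates for a single rare risk (illustrative):
# best case, most likely, and worst case, in EUR.
best, most_likely, worst = 100_000, 500_000, 5_000_000

# Monte Carlo simulation driven entirely by the subjective triangular
# distribution -- fully "quantitative", yet built on expert judgment.
losses = rng.triangular(left=best, mode=most_likely, right=worst,
                        size=100_000)

print(f"Mean loss:        {losses.mean():,.0f}")
print(f"95th percentile:  {np.percentile(losses, 95):,.0f}")
print(f"99th percentile:  {np.percentile(losses, 99):,.0f}")
```

Every output here inherits the subjectivity of the three elicited parameters, which is precisely the post's point: the quantitative machinery does not make the analysis objective.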