From the course: Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions
Using surrogate models for global explanations - KNIME Tutorial
- [Instructor] A surrogate model is simply a substitute. When you feel that your project is best served by a black box model, you can build a second model that explains the first. Here's the key difference: your target variable changes. Your new target variable is the output of the first model. In his excellent book, "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable," which I highly recommend by the way, Christoph Molnar outlines the generic steps in more detail than I will here. We will do the bare minimum at first. A bit later, we'll learn that KNIME can help us with surrogate models. So the first step is to get the predictions from the black box model, then train an interpretable model on those predictions, meaning that the predictions of the first model become the dependent variable of the second model, then measure how well it replicates. When we do the demonstration in KNIME,…
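The same three steps can be sketched outside of KNIME. The snippet below is a minimal Python illustration using scikit-learn, not the course's KNIME workflow; the dataset, the choice of a random forest as the black box, a shallow decision tree as the surrogate, and the hyperparameters are all assumptions made for demonstration only.

```python
# Minimal sketch of a global surrogate model (assumed setup, not the course's KNIME demo).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Step 1: train a black box model and collect its predictions.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# Step 2: train an interpretable model on those predictions --
# the black box's output becomes the surrogate's target variable.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_preds)

# Step 3: measure how well the surrogate replicates the black box (fidelity).
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity vs. black box: {fidelity:.3f}")

# The shallow tree's rules then serve as a global explanation of the black box.
print(export_text(surrogate, feature_names=list(X.columns)))
```

Note that fidelity is measured against the black box's predictions, not the original labels: the surrogate explains the model, not the data.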
Contents
- Providing global explanations with partial dependence plots (4m 58s)
- Using surrogate models for global explanations (1m 52s)
- Developing and interpreting a surrogate model with KNIME (4m 52s)
- Permutation feature importance (1m 11s)
- Global feature importance demo (6m 54s)