From the course: Data-Centric AI: Best Practices, Responsible AI, and More
Code example: Model validation
- [Instructor] Now let's continue with the maternal health risk data example that we started with and see how we can apply explainability and interpretability to the model we built. In this section, I'm starting with a LIME explainer, so let's run this function. As you can see, I'm taking one particular example from my validation dataset and trying to explain the output the model produced for it. Because we're using LIME, keep in mind that this is a local explanation, not a global one. So this is an example of explainability, not interpretability. I had mentioned that explainability and interpretability are often used interchangeably, but there are some subtle nuances between them, and I wanted to use this opportunity to show you, through an example, what those differences are. Explaining a particular output of the model, as you see here, is an example of explainability. This is a local explanation, which is…
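The maternal health risk data and the exact code from the video aren't reproduced here, but the idea of a LIME-style local explanation can be sketched without the `lime` library itself: perturb the data around one instance, query the black-box model on those perturbations, weight them by proximity, and fit a simple linear surrogate whose coefficients serve as local feature attributions. The synthetic dataset, model choice, and all variable names below are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical stand-in for the maternal health risk data:
# a small synthetic binary-classification dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Pick one validation-style instance to explain locally.
instance = X[0]

# LIME-style local surrogate: sample perturbations around the instance,
# collect the black-box model's outputs, and weight each sample by how
# close it is to the instance being explained.
rng = np.random.default_rng(0)
perturbed = instance + rng.normal(scale=0.5, size=(1000, X.shape[1]))
preds = model.predict_proba(perturbed)[:, 1]           # black-box outputs
dists = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(dists ** 2) / 2.0)                  # proximity kernel

# Fit a weighted linear model; its coefficients are the local attributions.
surrogate = Ridge(alpha=1.0)
surrogate.fit(perturbed, preds, sample_weight=weights)

for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: {coef:+.3f}")
```

Because the surrogate is fit only on points near the chosen instance, its coefficients describe the model's behavior locally, not globally — which is exactly the explainability-versus-interpretability distinction discussed above. In practice you would use the `lime` package's `LimeTabularExplainer` rather than hand-rolling this.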