From the course: Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes (2022)

Distributional shifts and incomplete testing
- [Narrator] Understanding context is something people do remarkably well compared to machines. For example, if you purchase a television or other electrical appliance, you probably look for the UL (Underwriters Laboratories) certification to determine whether the product has been safety tested. But you also know that electrical devices shouldn't be dropped in water while they're plugged in. ML and AI systems need to be designed with context in mind and used in the same environment they were trained in. Otherwise, they are vulnerable to unintentional failures caused by distributional shifts and incomplete testing. Distributional shifts occur when there are mismatches between the data the system was trained on and the data it encounters during deployment. When distributional shifts become too wide, the performance and accuracy of the ML system go down. A cleaning robot that was trained in a carpeted office environment that is…
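To make the idea concrete, here is a minimal sketch, not from the course itself, of one common way to check for a distributional shift: comparing a feature's training-time distribution against what the model sees in deployment using a two-sample Kolmogorov-Smirnov test. The feature values, names, and threshold below are illustrative assumptions only.

# Minimal sketch: detecting a possible distributional shift with a
# two-sample Kolmogorov-Smirnov test. All data here is synthetic and
# the variable names are hypothetical, chosen to mirror the cleaning-robot example.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical sensor feature observed during training (e.g., carpeted office)
training_feature = rng.normal(loc=0.6, scale=0.05, size=1000)

# Hypothetical values for the same feature in deployment (e.g., hardwood floors)
deployment_feature = rng.normal(loc=0.2, scale=0.05, size=1000)

statistic, p_value = ks_2samp(training_feature, deployment_feature)

# A very small p-value suggests the deployment data no longer matches the
# training distribution, so the model's accuracy may be degrading.
if p_value < 0.01:
    print(f"Possible distributional shift (KS={statistic:.3f}, p={p_value:.3g})")
else:
    print("No significant shift detected")

In practice a monitoring job would run a check like this (or a more robust drift metric) on incoming features continuously, flagging the system for retraining or review when the deployment distribution drifts too far from the training data.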