From the course: Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes (2022)
System manipulation
- If a model is looking for unusual patterns over time, one way to manipulate the system is for an attacker to train it that something abnormal is actually normal behavior. Then, when the attacker goes to download large amounts of data, rather than flagging or alerting on this as abnormal behavior, the system has been trained by the attacker to think that this is normal. That is why it is so important to monitor systems for drift and other system manipulation, and then to retool or retrain them as needed to ensure ongoing, reliable operation. While the vision of ML and AI for many encompasses a super smart system that gets smarter and better over time, the reality isn't quite so futuristic. Many machine learning systems are trained in highly controlled settings and don't actually do a lot of learning in production. For systems that do learn continuously, system or model manipulation is a concern. Over time, machine…
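The drift attack described above can be sketched in a few lines. This is a hypothetical toy, not the course's actual system: a streaming anomaly detector that flags values far above a running baseline, but keeps learning from every unflagged observation. A patient attacker who escalates download volume a little each day pulls the baseline along with them, so a later large exfiltration no longer stands out. The class name, thresholds, and learning rate are all illustrative assumptions.

```python
class DriftingDetector:
    """Toy detector: flags values more than `k` standard deviations
    above an exponentially-weighted running mean. Illustrative only."""

    def __init__(self, k=3.0, alpha=0.05, floor=2.0):
        self.k = k          # sensitivity: how many std-devs counts as abnormal
        self.alpha = alpha  # learning rate for the running statistics
        self.floor = floor  # minimum std-dev, so early variance of 0 isn't brittle
        self.mean = None
        self.var = 0.0

    def observe(self, x):
        """Return True if x is flagged as abnormal, then learn from x."""
        if self.mean is None:
            self.mean = x
            return False
        std = self.var ** 0.5
        flagged = x > self.mean + self.k * max(std, self.floor)
        # The flaw: the baseline updates even on values it didn't flag,
        # so slow, unflagged creep shifts what counts as "normal".
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
        return flagged

# Honest history: a steady ~10 MB/day of downloads.
clean = DriftingDetector()
for _ in range(60):
    clean.observe(10.0)
print("sudden 520 MB flagged:", clean.observe(520.0))

# Patient attacker: creep upward by 0.5 MB/day for many days.
poisoned = DriftingDetector()
for day in range(1000):
    poisoned.observe(10.0 + 0.5 * day)  # each step stays under the threshold
print("520 MB after poisoning flagged:", poisoned.observe(520.0))
```

Run against a clean history, the 520 MB spike is flagged; after the slow poisoning campaign, the same spike passes as normal. Monitoring the baseline itself for drift, rather than only individual observations, is one way to catch this.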
Contents
- Perturbation attacks and AUPs (3m 31s)
- Poisoning attacks (3m 11s)
- Reprogramming neural nets (1m 39s)
- Physical domain (3D adversarial objects) (2m 34s)
- Supply chain attacks (2m 42s)
- Model inversion (3m 12s)
- System manipulation (3m 2s)
- Membership inference and model stealing (2m 3s)
- Backdoors and existing exploits (2m 19s)