Reprogramming neural nets
From the course: Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes (2022)
- [Instructor] Neural nets and deep learning systems are subsets of machine learning. Another way to attack these systems is to use a perturbation to reprogram the system to perform a task it wasn't intended to perform. This is not to be confused with transfer learning, which is non-adversarial and refers to when a machine learning system transfers knowledge to a different problem space. When attackers attempt to reprogram neural nets, they send unintended queries to the model, inducing it to solve new or unintended tasks. That may not sound too bad, but researchers warn that this attack could be used to steal resources or fool systems. Consider the CAPTCHA. CAPTCHAs are 2D images that humans look at and identify as things like a crosswalk, a traffic light, or a mountain. But if an image classifier were able to accurately identify these 2D images as well as, or even better than, a human, that bot would…
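To make the attack concrete, below is a minimal sketch of adversarial reprogramming in the style demonstrated in the research literature (Elsayed et al., "Adversarial Reprogramming of Neural Networks," 2018), written in PyTorch. It is not from the course itself: the frozen ResNet-18 victim, the 28x28 task inputs, the centered embedding, and the hard-coded label mapping are all illustrative assumptions. The key idea is that the victim model's weights never change; the attacker only learns a single perturbation "program" that frames every input and hijacks the model for a new task.

```python
# Sketch of adversarial reprogramming, assuming PyTorch and torchvision.
# A frozen ImageNet classifier is repurposed for a new 10-class task by
# learning one perturbation that is added around every embedded input.
import torch
import torch.nn.functional as F
from torchvision import models

# Victim model: weights stay frozen; the attacker only queries it.
victim = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in victim.parameters():
    p.requires_grad_(False)

IMG, PATCH = 224, 28  # victim input size; embedded task-input size (assumed)
program = torch.zeros(1, 3, IMG, IMG, requires_grad=True)  # the learned perturbation

# Mask: zero where the small task image is pasted, one elsewhere,
# so the perturbation only occupies the border region.
mask = torch.ones(1, 3, IMG, IMG)
top = (IMG - PATCH) // 2
mask[:, :, top:top + PATCH, top:top + PATCH] = 0

def reprogram(x):
    """Embed a batch of 28x28 grayscale inputs into the adversarial program."""
    canvas = torch.zeros(x.size(0), 3, IMG, IMG)
    canvas[:, :, top:top + PATCH, top:top + PATCH] = x.repeat(1, 3, 1, 1)
    # tanh keeps the perturbation bounded, as in the original paper
    return canvas + mask * torch.tanh(program)

def new_task_logits(logits):
    # Illustrative label mapping: ImageNet classes 0-9 stand in for
    # the new task's 10 classes.
    return logits[:, :10]

opt = torch.optim.Adam([program], lr=0.05)

def train_step(x, y):
    """One attacker optimization step; gradients flow only into `program`."""
    opt.zero_grad()
    logits = new_task_logits(victim(reprogram(x)))
    loss = F.cross_entropy(logits, y)
    loss.backward()
    opt.step()
    return loss.item()
```

Run over enough labeled examples of the new task, the learned program steals the victim's compute and accuracy for a task its owners never intended, which is exactly the CAPTCHA-solving scenario the transcript describes.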
Contents

Perturbation attacks and AUPs (3m 31s)
Poisoning attacks (3m 11s)
Reprogramming neural nets (1m 39s)
Physical domain (3D adversarial objects) (2m 34s)
Supply chain attacks (2m 42s)
Model inversion (3m 12s)
System manipulation (3m 2s)
Membership inference and model stealing (2m 3s)
Backdoors and existing exploits (2m 19s)