The Dark Sides of AI
Where AI went wrong and how to handle it.
Alexander Pospiech
Data Engineer/Scientist @ inovex
Security and Privacy Apologist
Father of One
Dinghy-Sailor
Nerd
2
What is trust? 1
1
What will it take for us to trust AI? by Alan Finkel
3
trust
noun
the belief that you can trust someone or something
trust
verb
to believe that someone is good and honest and will not harm you, or that
something is safe and reliable 2
2
https://dictionary.cambridge.org/dictionary/english/trust
4
People are comfortable with algorithmic
decisions, sometimes more than human
advice. 3
3
Do People Trust Algorithms More Than Companies Realize?
5
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
6
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
7
Sugar bananas by Maksym Kozlenko licensed as CC-BY-SA-4.0
The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI
by Peter Eckersley
Chinese facial recognition system
confuses bus ad with a jaywalker
by Jon Fingas
Turkish is a gender-neutral language by Alex Shams (28.11.2017)
11
Google is fixing gender bias in its Translate service by Ivan Mehta
12
Google’s technology will not suggest
gender-based pronouns because the risk is
too high that its “Smart Compose”
technology might predict someone’s sex or
gender identity incorrectly and offend
users, product leaders ... 4
4
Fearful of bias, Google blocks gender-based pronouns from new AI tool by Paresh Dave
13
The Machine Fired Me - No human could do
a thing about it!
Necessary orders are sent automatically
and each order completion triggers another
order.
4
idiallo.com/blog/when-a-machine-fired-me by Ibrahim Diallo
14
Responses to Critiques on Machine Learning of Criminality Perceptions by Xiaolin Wu, Xi Zhang
15
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
16
Racing Stripes Car Top View by qubodup
original art: Autonomous Trap 001 by James Bridle (2017)
17
The Moral Machine experiment
by Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François
Bonnefon & Iyad Rahwan
18
https://twitter.com/TayandYou (2016)
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.
by Nguyen A, Yosinski J, Clune J. at Evolving AI Lab, University of Wyoming
20
Explaining and harnessing adversarial examples.
by Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. at OpenAI
21
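The fast gradient sign method (FGSM) from the Goodfellow et al. paper above is easy to sketch. A minimal PyTorch version, assuming a differentiable classifier `model` and an input batch `x` in [0, 1] with labels `y` (all names hypothetical):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Perturb x in the direction that maximally increases the loss:
    x_adv = x + eps * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

Even a single gradient step like this is often enough to flip a high-confidence prediction.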
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects
by Michael A. Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, Anh Nguyen
22
Fooling Neural Networks in the Physical World with 3D Adversarial Objects
by Anish Athalye, Logan Engstrom,
Andrew Ilyas & Kevin Kwok at LabSix
Adversarial Reprogramming of Neural Networks.
by Gamaleldin F. Elsayed, Ian Goodfellow and Jascha Sohl-Dickstein at Google Brain
24
Robust Physical-World Attacks on Deep
Learning Models
by Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati,
Chaowei Xiao, Atul Prakash, Tadayoshi
Kohno, Dawn Song
Accessorize to a Crime: Real and Stealthy Attacks on
State-of-the-Art Face Recognition
by Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael
K. Reiter
Juggalo by twitter.com/tahkion
27
Malware:
DeepLocker - Concealing Targeted Attacks with AI Locksmithing
DEF CON 25 - Hyrum Anderson - Evading next gen AV using AI
Intrusion Detection:
Here’s The Truth, GANs Easily Can Fool Intrusion Detection Systems
Anti-Virus:
G Data uses the neural-network-based DeepRay for detecting malware 5
5
Antiviren-Software mit KI-Technik: G Data setzt auf DeepRay
28
Fake fingerprints can imitate real ones
in biometric systems – research
by Philip Bontrager, Aditi Roy, Julian
Togelius, Nasir Memon, Arun Ross
Giphy
30
America’s biggest body-camera company
says facial recognition isn’t accurate
enough for policing decisions
5
America’s biggest body-camera company says facial recognition isn’t accurate enough for policing
decisions by Dave Gershgorn
31
Face Recognition Field Test at Südkreuz 12
by C.Suthorn
under CC-BY-SA-4.0
While the authors applaud the force for
attempting to develop an ethically sound
and legally compliant approach to
predictive policing, they warn that the
ethical principles in the proposal are not
developed enough to deal with the broad
challenges this kind of technology could
throw up, and that “frequently the details
are insufficiently fleshed out and important
issues are not fully recognized.” 6
6
Britain Is Developing an AI-Powered Predictive Policing System
33
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
34
Gender Shades
by Joy Buolamwini (2018) and her
MIT group
Google Photos, y’all fucked up by Jacky Alcine (28.06.2015)
36
Amazon’s Disturbing Plan to Add Face Surveillance to Your Front Door
37
Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
38
Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
39
I tested 14 sentences for "perceived toxicity" using Perspectives. jessamyn west (25.08.2017)
40
Design Justice, A.I., and Escape from the Matrix of Domination by Sasha Costanza-Chock
41
Amazon.com Inc’s machine-learning
specialists uncovered a big problem: their
new recruiting engine did not like women.
6
Reuters - RPT-INSIGHT-Amazon scraps secret AI recruiting tool that showed bias against women
42
Predictive Policing
43
... predictive models reinforce existing police
practices ...
... patterns of police records, not patterns of
crime ...
... cannot predict patterns of crime that are
different ...
7
7
USA by Human Rights Data Analysis Group
44
... the differences in arrest rates by ethnic
group between predictive policing and
standard patrol practices were not
statistically significant, ...
8
8
Field-data Study Finds No Evidence of Racial Bias in Predictive Policing (2018)
by Forensic Magazine
45
Predictive Judgement
46
Positive predictions: equal chance
False Positives/Negatives: unfair 9
9
How to Fight Bias with Predictive Policing (2018)
by Eric Siegel in Scientific American
47
... COMPAS is no more accurate or fair than
predictions made by people with little or
no criminal justice expertise. 10
10
The accuracy, fairness, and limits of predicting recidivism (2018)
by Julia Dressel and Hany Farid in Science Advances
48
... despite COMPAS’s collection of 137
features, the same accuracy can be
achieved with a simple linear classifier with
only two features. 11
11
The accuracy, fairness, and limits of predicting recidivism (2018)
by Julia Dressel and Hany Farid in Science Advances
49
The “bias” comes from base rates. 12
12
AI Ethics, Impossibility Theorems and Tradeoffs
50
Faception
...recognizing “High IQ”, “White-Collar Offender”, “Pedophile”, and “Terrorist” ...
According to Social and Life Science research, personalities are affected by genes.
Our face is a reflection of our DNA. 13
13
Faception
51
Predictive Staatsschutz
GES-3D - Multi-Biometrische Gesichtserkennung
MisPel-Projekt - Multi-Biometrisierte Forensische Personensuche in Lichtbild-
und Videomassendaten
INTEGER - Visuelle Entscheidungsunterstützung bei der Auswertung von
Daten aus sozialen Netzwerken
PANDORA - Propaganda, Mobilisierung und Radikalisierung zur Gewalt in der
virtuellen und realen Welt
X-SONAR - Extremistische Bestrebungen in Social Media Netzwerken
RADIG-Z - Radikalisierung im digitalen Zeitalter
RISKANT - Risikoanalyse bei islamistisch motivierten Tatgeneigten
Survey: Predictive Staatsschutz on Telepolis by Matthias Becker
52
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
53
Everybody has a deep learning paper
Thousands of papers (~20,000/year)
a large share is of low quality
faked results or simply a lack of knowledge
54
Agenda
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
55
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
56
Quadrants
                          Intended               Unintended
Inside                    Black Hat AI           Bias
(a.k.a. Vendor/User)      killer robots          bad validation
                          Cheating 14            wrong use of model
Outside                   Adversarial Attacks    Bias
(a.k.a. Attacker/User)    data poisoning
14
US border agents hacked their “risk assessment” system to recommend detention 100% of the time
57
Roles & Accountability
Researchers
Developers
Vendors
Operators
Users
Regulators
Attackers
58
Accountability - A scenario
build an AI for fraud detection for a bank
AI blocks a transaction of company A
A goes bankrupt
A sues the bank
the developer (you) is questioned as a witness
Can you provide answers?
15
15
Hype-Tech by Felix von Leitner (translated from German)
59
Accountability - and how?
reporting process?
transparency to the public?
power to decide on necessary changes?
provability of work afterwards?
60
Cost of Misbehaving AI
Why should the vendor change something?
legal consequences
ethical/moral behaviour
loss of reputation
loss of opportunities
loss of money
61
Cost of Misbehaving AI
Why should the users demand change?
loss of money
loss of security
loss of freedom
62
Defining and Researching "The Game"
Defending is always harder than attacking 16
AI for/against:
Detection
Prediction
Prevention
Response
16
Is attacking machine learning easier than defending it?
63
Regulation - GDPR
"Right to be forgotten/"Right to erasure"
Älgorithmic Fairnessänd "The Right to Explanation"
64
Regulation - Understanding
White House report: Preparing for the future of Artificial Intelligence
House of Lords report: AI in the UK: ready, willing and able?
Bundestag then: some talk and a list of experts
Bundestag now: KI-Eckpunktepapier
65
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
66
Specification
Design - bugs, ambiguities, side effects, high level language, preference
learning, design protocols
Emergent - wireheading, delusions, meta-Learning and sub-agents, detecting
emerging behavior, reward hacking
16
Building safe artificial intelligence: specification, robustness, and assurance by DeepMind Safety
Research - Pedro A. Ortega, Vishal Maini, and the DeepMind safety team
67
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
68
Robustness
Risk and prevention - limitation, risk sensitivity, uncertainty estimates, safety
margins, safe exploration, adversaries, cautious generalisation, verification
Recovery and stability - availability, instability, error correction, fail-safe,
distributional shift, graceful degradation
16
Building safe artificial intelligence: specification, robustness, and assurance by DeepMind Safety
Research - Pedro A. Ortega, Vishal Maini, and the DeepMind safety team
69
Robustness - Adversaries
image, video, malware, health records, ...
Find, investigate, train on and robustify against attack vectors.
defenses tend to be broken easily
learn the adversaries as counterexamples (sketched after this slide)
use regularization
cryptography 17
17
Defense against adversarial attacks using machine learning and cryptography by Ingrid Fadelli
70
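One of the defenses above, learning the adversaries as counterexamples, is plain adversarial training. A minimal sketch reusing the hypothetical `fgsm` helper from the earlier slide, with assumed `model`, `optimizer` and batch names:

```python
import torch.nn.functional as F

def train_step(model, optimizer, x, y, eps=0.03):
    # Craft counterexamples for the current batch and train on both.
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

As noted above, such defenses tend to be broken by stronger or adaptive attacks, so this raises the bar rather than settling the game.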
Robustness - Availability
Availability of the processing?
Can I DoS a neural network?
Availability of predictions or decisions?
Is a poisoning attack a DoS?
71
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
72
Ethics - discussion-based approach
include other groups than engineers
use checklists, like deon - An ethics checklist for data scientists
there is no, I repeat NO, programmable ethics!
ethics is a discussion and therefore an ongoing process
73
Ethics - technical approach
enforce consent, clarity, consistency, control, consequences 18
18
Care about AI ethics? What you can do, starting today by Steven Adler
74
Chris Anderson: “with enough data, the
numbers speak for themselves.”
18
The Hidden Biases in Big Data by Kate Crawford
75
Kate Crawford: “Sadly, they can’t. Data and
data sets are not objective; they are
creations of human design.”
18
The Hidden Biases in Big Data by Kate Crawford
76
Fairness - Definition
Anti-classification
Classification parity
Calibration: If an algorithm produces a “score,” that “score” should mean the
same thing for different groups.
77
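The calibration definition above can be checked directly. A minimal NumPy sketch, assuming arrays `scores` (in [0, 1]), binary `outcomes` and a `groups` label per person (all hypothetical):

```python
import numpy as np

def calibration_by_group(scores, outcomes, groups, bins=10):
    """Per group: observed positive rate inside each score bucket.
    Calibrated scores give similar rates across groups per bucket."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    for g in np.unique(groups):
        m = groups == g
        idx = np.digitize(scores[m], edges[1:-1])
        rates = {b: float(outcomes[m][idx == b].mean())
                 for b in range(bins) if np.any(idx == b)}
        print(g, rates)
```

If a bucket’s observed rate differs strongly between groups, the same “score” does not mean the same thing for them.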
Fairness - Tools
Aequitas - open source bias audit toolkit
IBM - AI Fairness 360
Facebook creates internal tool - Fairness Flow
"Microsoft is creating an oracle for catching biased AI algorithms"
Tool: What-If tool
deon - An ethics checklist for data scientists
Fix bias in AI with more tech (probably AI), sigh
78
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
79
Assurance
Monitoring - interpretability, behavioural screening, traces, estimates of causal
influence, tripwires & honeypots
Enforcement - testability, interruptibility, boxing, physical security, encryption,
signatures, authorisation, human override
18
Building safe artificial intelligence: specification, robustness, and assurance by DeepMind Safety
Research - Pedro A. Ortega, Vishal Maini, and the DeepMind safety team
80
Assurance - Testing
why not currently?
what to build?
what to test?
continuous regression tests? (see the sketch after this slide)
special items?
biases?
81
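As one answer to the “continuous regression tests?” bullet above, model behaviour can be pinned like any other software. A minimal pytest-style sketch under assumed artifact paths (`model.joblib`, `holdout.joblib` and `baseline.json` are hypothetical):

```python
import json
import joblib
from sklearn.metrics import accuracy_score

def test_no_accuracy_regression():
    model = joblib.load("model.joblib")        # candidate model artifact
    X, y = joblib.load("holdout.joblib")       # frozen evaluation set
    baseline = json.load(open("baseline.json"))["accuracy"]
    acc = accuracy_score(y, model.predict(X))
    # Fail CI when the new model is meaningfully worse than the last release.
    assert acc >= baseline - 0.01, f"accuracy regressed: {acc:.3f} < {baseline:.3f}"
```

The same pattern works for per-group metrics, which turns the “biases?” bullet into a testable assertion.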
Assurance - Physical Security
A neural network is some files on hardware.
Can be copied, stolen, modified, ...
82
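Because a model is just files, the basic physical-security control is the same as for any file: record a digest at release time and verify it before loading. A minimal sketch with a hypothetical path; the expected digest is whatever was recorded at sign-off:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

EXPECTED = "..."  # digest recorded when the model was released (placeholder)
assert sha256_of("model.joblib") == EXPECTED, "model file was modified"
```

Proper signatures (listed above under Enforcement) add authenticity on top of integrity.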
Assurance - Interpretability
The Mythos of Model Interpretability 19
we expect superhuman standards, while humans themselves tend to offer only post-rationalization
but needed for Debugging, Verification, Fairness, Transparency
19
The Mythos of Model Interpretability by Zachary C. Lipton
83
Assurance - Interpretability
intrinsic vs. post-hoc
outcome: statistics, visualizations, weights, data points, an intrinsically interpretable model
model-specific vs. model-agnostic
local vs. global
keep in mind the needs and the knowledge of the different roles
84
Introduction to Local Interpretable Model-Agnostic Explanations (LIME) (2016)
by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin in O’Reilly
Question: I have an unexplainable AI that
detects 1000 tumors and an explainable one
that detects 100. Which one do I prefer?
87
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
88
Development
ETL - biases, fairness, privacy, data poisoning, ...
Reproducibility - model management, data management, result management, ...
89
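Much of the reproducibility problem on the next slides is avoidable bookkeeping. A minimal sketch of a run log written next to every trained model (the helper name and fields are assumptions, not a standard):

```python
import hashlib
import json
import subprocess
import time

def log_run(data_path, params, metrics, out="run.json"):
    """Persist what is needed to re-run training: data hash, code
    revision, hyperparameters and the results they produced."""
    record = {
        "data_sha256": hashlib.sha256(open(data_path, "rb").read()).hexdigest(),
        "git_rev": subprocess.run(["git", "rev-parse", "HEAD"],
                                  capture_output=True, text=True).stdout.strip(),
        "params": params,
        "metrics": metrics,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with open(out, "w") as f:
        json.dump(record, f, indent=2)
```

Tools like MLflow or DVC do this properly; the point is that “she won’t be able to reproduce the model” is a tooling gap, not a law of nature.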
In many real-world cases, the researcher
won’t have made notes or remember exactly
what she did, so even she won’t be able to
reproduce the model. 20
20
The Machine Learning Reproducibility Crisis (2018) by Pete Warden
90
Yet AI researchers say the incentives are still
not aligned with reproducibility.
21
21
Missing data hinder replication of artificial intelligence studies (2018) by Matthew Hutson in
Science
91
Privacy - Confidentiality
Privacy
Encryption
Possible tools:
Differential Privacy
Homomorphic Encryption
Federated learning 22
22
Federated Learning: Collaborative Machine Learning without Centralized Training Data
92
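Of the tools listed above, differential privacy is the easiest to sketch. A minimal Laplace-mechanism example for releasing a count (sensitivity 1), with hypothetical inputs:

```python
import numpy as np

def private_count(values, predicate, eps=0.5):
    """Release a noisy count that is eps-differentially private: the
    noise scale is sensitivity / eps, and a count changes by at most 1
    when one person's record is added or removed."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / eps)
```

Smaller eps means stronger privacy and noisier answers; homomorphic encryption and federated learning instead protect the computation and the raw data.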
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
93
I would say: It’s not the TÜV!
94
KI Bundesverband?
95
How it already has gone wrong - some Examples
Specification
Robustness
Ethics & Fairness
Reproducibility
What now?
Frame
Specification
Robustness
Ethics & Fairness
Assurance
Development
Certificates/Verification
Conclusion
96
Education
Educate AI basics as part of data literacy in school and college.
97
What can you do?
Educate
Warn
Support
Research
Develop
98
Without our trust,
AI will grow regardless.
99
With the advancements outlined above,
AI will have our trust
and may work as expected.
100
Thank you
Alexander Pospiech
Data Engineer
inovex GmbH
Lindberghstraße 3
80939 München
alexander.pospiech@inovex.de
+49 153 - 31 81 051
Conferences and Meetings I
Specific on the Dark Sides:
Conference on Fairness, Accountability, and Transparency
FATML - Fairness, Accountability, and Transparency in Machine Learning
Interpretable ML Symposium @NIPS
NIPS 2017 Tutorial - Fairness in Machine Learning
Reproducibility in ML Workshop, ICML’18
IEEE 1st Deep Learning and Security Workshop
Data Ethics workshop, KDD 2014
MAKE-Explainable AI
Advances on Explainable Artificial Intelligence
102
Conferences and Meetings II
Nemesis ’18 - 1st Workshop on Recent Advances in Adversarial Machine
Learning
NIPS 2018 Workshop on Security in Machine Learning
Generic on AI:
AI for Good Global Summit
103
Conferences and Meetings I
General on Security:
CCC
DefCon
SHA
BlackHat
104
Research Groups and Organizations I
AI specific:
AINow - A research institute examining the social implications of artificial
intelligence
Evolving AI Lab, University of Wyoming
OpenAI
LabSix
EFF on Artificial Intelligence & Machine Learning
EFF - AI Progress Measurement
EvalAI - Evaluating state of the art in AI
EvadeML - Machine Learning in the Presence of Adversaries
Adversarial Machine Learning, Università degli Studi di Cagliari
105
Research Groups and Organizations II
SunBlaze at UCB
Diskriminierung durch KI (Künstliche Intelligenz) (DiKI)
Algorithmische Gegenmacht
Center for Human-Compatible AI
Technische und rechtliche Betrachtungen algorithmischer
Entscheidungsverfahren by GI
Algorithmenethik
106
Research Groups and Organizations I
General:
Trusted AI - IBM Research is building and enabling AI solutions people can trust
Ethics/Fairness:
SIENNA: Technology, ethics and human rights
Human Rights Data Analysis Group
AlgorithmWatch
Netzpolitik on Predictive Policing
107
Projects I
Interpretability:
Project: DARPA's Explainable Artificial Intelligence
Project: Explain AI
Project: DALEX: Descriptive mAchine Learning EXplanations
108
Communities I
OpenMined
109
Classes I
CS 294: Fairness in Machine Learning, UC Berkeley
18739 Security and Fairness of Deep Learning, Carnegie Mellon
Adversarial and Secure Machine Learning
IEEE’s Artificial Intelligence and Ethics in Design
The Ethics and Governance of Artificial Intelligence
Attacking Networks with Adversarial Examples
CS 294-149: Safety and Control for Artificial General Intelligence (Fall 2018)
Machine Learning Crash Course - Fairness
110
Topic Collection I
Netzpolitik on Predictive Policing
EFF on Artificial Intelligence & Machine Learning
EFF - AI Progress Measurement
EvalAI - Evaluating state of the art in AI
AI safety resources
Ethics:
AI Ethics Resources
111
Github with Lists I
General:
Awful AI
Ethics:
Data Science and Ethics Resources
Global Data Ethics Pledge (GDEP)
ML and Security
Machine Learning for Cyber Security Awesome
Awesome AI Security
Adversarial Attacks:
Awesome Adversarial Examples for Deep Learning
Awesome Adversarial Machine Learning
112
Github with Lists II
Awesome Machine Learning for Cyber Security
Privacy:
awesome-ai-privacy
Private-Ai-Resources
Machine Learning Ethics References
Fairness:
Toward ethical, transparent and fair AI/ML: a critical reading list for engineers,
designers, and policy makers
Interpretability:
Awesome Interpretable Machine Learning
awesome-machine-learning-interpretability
113
Github with Code I
Ethics:
deon - An ethics checklist for data scientists
An Open Standard for Ethical Enterprise-Grade AI
Interpretability:
H2O.ai: Machine Learning Interpretability (MLI)
Explanation Explorer
Interpretable Machine Learning with Python
iml: interpretable machine learning
ML Insights
Fairness:
Comparing fairness-aware machine learning techniques.
114
Github with Code II
Themis ML - Fairness-aware Machine Learning
Adversarial Attacks:
Introduction to Adversarial Machine Learning
DeepFool
CleverHans
EvadeML-Zoo
AdvFlow
Evaluating and Understanding the Robustness of Adversarial Logit Pairing
IBM adversarial-robustness-toolbox
deep-pwning
FoolBox
115
Papers I
LIME, Local Interpretable Model-Agnostic Explanations (2016)
116
Studies I
European expert group seeks feedback on draft ethics guidelines for
trustworthy artificial intelligence
117
Blogs I
a blog about security and privacy in machine learning
MLSec
covert.io security + big data + machine learning
Data Driven Security
Automating OSINT
BigSnarf
Security of Machine Learning
118
Slides I
General:
Hype-Tech by Felix von Leitner
Adversarial Attacks:
How CLEVER is your neural network? Robustness evaluation against
adversarial examples by Pin-Yu Chen at IBM Research AI
Privacy:
Machine Learning & Privacy: It’s Complicated
Fairness:
AI Ethics, Impossibility Theorems and Tradeoffs
119
Blog Posts I
General:
6 core falsehoods about the digital sphere
Nein, Ethik kann man nicht programmieren
You created a machine learning application. Now make sure it’s secure.
Tooling:
Deep automation in machine learning
Fairness:
Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias
(December 2018)
Security:
Attacks against machine learning — an overview
120
Blog Posts II
Machine Learning for Cybersecurity 101
Interpretability:
Testing machine learning explanation techniques
Explainable AI and the Legality of Autonomous Weapon Systems
Adversarial Attacks:
The Definitive Security Data Science and Machine Learning Guide
Discussion and Survey of Adversarial Examples and Robustness in Deep
Learning
A Brief Introduction to Adversarial Examples
Adversarial Sample Generation: Making Machine Learning Systems Robust for
Security
Survey on Security and Privacy of Machine Learning
Attacking Machine Learning Detectors: the state of the art review
121
Podcasts I
Fairness:
O’Reilly Data Show Podcast - Why it’s hard to design fair machine learning
models
Privacy:
How privacy-preserving techniques can lead to more robust machine learning
models
122
Videos - general I
General:
Youtube: Stephen Fry describing our future with artificial intelligence and
robots
34c3 - Beeinflussung durch Künstliche Intelligenz
34c3 - Deep Learning Blindspots
SHA2017 - The Security and Privacy Implications of AI and Machine Learning
Youtube - DEF CON 24 - Clarence Chio - Machine Duping 101: Pwning Deep
Learning Systems
Youtube: Do You Trust This Computer?
TED - The era of blind faith in big data must end
Managing Risk in Machine Learning - Ben Lorica (O’Reilly Media)
123
Videos - general II
Ethics:
Ethik der Algorithmen - Tom Hillenbrand zum "Schreckensszenario KI"
Interpretability:
[HUML16] 06: Zachary C. Lipton, The mythos of model interpretability
"Why Should I Trust you?" Explaining the Predictions of Any Classifier, KDD
2016
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD),
Data Scientist, Aviva
TWiML - Trust in Prediction of Machine Learning Models - EMEA Meetup #4 -
December 2018
Explaining Complex Machine Learning Models With LIME - Shirin Glander
(codecentric AG)
124
Videos - general III
Fairness:
Tutorial: 21 fairness definitions and their politics
Adversarial Examples:
Is "Adversarial Examples" an Adversarial Example?
How to Successfully Harness AI to Combat Fraud and Abuse - RSA 2018
Adversarial Robustness, Theory and Practice
Accountability:
Lack of Accountability in Data Science: Why We Should All Be Building
Inclusive AI
125
Adversarial Examples - Examples I
Audio Adversarial Examples
126
Adversarial Attack Competitions I
MNIST Adversarial Examples Challenge
NIPS 2017 Competition: Non-targeted Adversarial Attack
Introducing the Unrestricted Adversarial Examples Challenge
127
Lists I
Reward Hacking
Specification gaming examples in AI
128
News I
Autonomous Driving
Uber self-driving car kills a pedestrian
Incidents with Tesla Autopilot
Even the cute Waymo
129
Online Books I
Interpretable Machine Learning
Fairness and machine learning
130
