AI Colloquium by AAISI (online), 30.3.2021
Bias in AI systems: A multi-step approach
Eirini Ntoutsi
Free University Berlin
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Successful applications
 Recommendations
 Navigation
 Severe weather alerts
 Automation
Questionable uses / failures
 Google Flu Trends failure
 Microsoft’s bot Tay taken offline after racist tweets
 IBM’s Watson for Oncology cancelled
 Facial recognition works better for white males
Why might AI projects fail?
 Back to basics: How machines learn
 Machine Learning gives computers the ability to learn without being
explicitly programmed (Arthur Samuel, 1959)
 We don’t codify the solution, we don’t even know it!
 DATA & the learning algorithms are the keys
[Figure: Data + Algorithms → Models]
Mind the (hidden) assumptions
 Assumptions include: stationarity, independent & identically distributed
(iid) data, balanced class representation, ...
 In this talk, I will focus on the assumption/myth of algorithmic objectivity:
 the common misconception that humans are subjective but data and algorithms are not, and that therefore they cannot discriminate.
Reality check: Can algorithms discriminate?
 Bloomberg analysts compared Amazon same-day delivery areas with U.S.
Census Bureau data
 They found that in 6 major same-day delivery cities, the service area
excludes predominantly black ZIP codes to varying degrees.
 Shouldn’t this service be based on customers’ spending rather than race?
 Amazon claimed that race was not used in their models.
Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
Reality check (cont’d): Can algorithms discriminate?
 There have already been plenty of cases of algorithmic discrimination
 State-of-the-art vision systems (used, e.g., in autonomous driving) recognize white males better than black women (racial and gender bias)
 Experiments with the AdFisher tool found that Google’s personalized ad system served significantly fewer ads for high-paying jobs to women than to men (gender bias)
 The COMPAS tool (US), which predicts a defendant’s risk of committing another crime, predicted higher risks of recidivism for black defendants (and lower for white defendants) than their actual risk (racial bias)
Don’t blame (only) the AI
 “Bias is as old as human civilization” and “it is human nature for members
of the dominant majority to be oblivious to the experiences of other
groups”
 Human bias: a prejudice in favour of or against one thing, person, or group compared with another, usually in a way that’s considered to be unfair.
 Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual
orientation …
 Algorithmic bias: the inclination or prejudice of a decision made by an AI
system which is for or against one person or group, especially in a way
considered to be unfair.
Not every bias is necessarily a bad bias 1/2
 Inductive bias “refers to a set of (explicit or implicit) assumptions made by a learning algorithm in order to perform induction, that is, to generalize a finite set of observations (training data) into a general model of the domain. Without a bias of that kind, induction would not be possible, since the observations can normally be generalized in many ways.” (Hüllermeier, Fober & Mernberger, 2013)
 Bias-free learning is futile: A learner that makes no a priori assumptions
regarding the identity of the target concept has no rational basis for
classifying any unseen instances.
[Figure: the learned model is applied to future unknown instances]
Not every bias is necessarily a bad bias 2/2
 Some biases are positive and helpful, e.g., making healthy eating choices
 Some biases help us to become more efficient
 E.g., start work early if you are a morning person
 We refer here to bias that might cause discrimination and unfair actions
to an individual or group on the basis of protected attributes like race or
gender.
 Race and gender are examples of bias triggers, but not the only ones
 Triggers can also combine, e.g., black females in IT jobs
 Bias and discrimination depend on context
 Women in tech
 Males in ballet dancing
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
The fairness-aware machine learning domain
 A young, fast evolving, multi-disciplinary research field
 Bias/fairness/discrimination/… have been studied for long in philosophy, social
sciences, law, …
 Existing approaches can be divided into three categories
 Understanding bias
 How bias is created in society and enters our sociotechnical systems, how it is manifested in the data used by AI algorithms, and how it can be formalized.
 Mitigating bias
 Approaches that tackle bias in different stages of AI-decision making.
 Accounting for bias
 Approaches that account for bias proactively or retroactively.
E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab, "Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Understanding bias: Sociotechnical causes of bias
 AI systems rely on data generated by humans (user-generated content) or collected via systems created by humans.
 As a result, human biases
 enter AI systems
 e.g., bias in word embeddings (Bolukbasi et al., 2016)
 might be amplified by complex sociotechnical systems such as the Web, and
 new types of biases might be created
Understanding bias: How is bias manifested in data?
 Protected attributes and proxies
 E.g., neighborhoods in U.S. cities are highly correlated with race
 Representativeness of data
 E.g., underrepresentation of women and people of color in IT developer
communities and image datasets
 E.g., overrepresentation of black people in drug-related arrests
 Depends on data modalities
https://incitrio.com/top-3-lessons-learned-from-the-top-12-marketing-campaigns-ever/
https://ellengau.medium.com/emily-in-paris-asian-women-i-know-arent-like-mindy-chen-6228e63da333
Typical (batch) fairness-aware learning setup
 Input: D = training dataset drawn from a joint distribution P(F,S,y)
 F: set of non-protected attributes
 S: (typically: binary, single) protected attribute
 s (s̄): protected (non-protected) group
 y: (typically: binary) class attribute {+, -} (+ for accepted, - for rejected)
 Goal of fairness-aware classification: Learn a mapping f(F, S) → y that
 achieves good predictive performance (we know how to measure this)
 eliminates discrimination (according to some fairness measure)

        F1    F2    S    y
User1   f11   f12   s    +
User2   f21              -
User3   f31   f23   s    +
…       …     …     …    …
Usern   fn1              +
Measuring (un)fairness: some measures
 Statistical parity: subjects in both protected and non-protected groups should have equal probability of being assigned to the positive class

$$P(\hat{y} = + \mid S = s) = P(\hat{y} = + \mid S = \bar{s})$$

 Equal opportunity: there should be no difference in the model’s prediction errors regarding the positive class

$$P(\hat{y} \neq y \mid S = s, y = +) = P(\hat{y} \neq y \mid S = \bar{s}, y = +)$$

 Disparate Mistreatment: there should be no difference in the model’s prediction errors between protected and non-protected groups for both classes

$$\delta_{FNR} = P(\hat{y} \neq y \mid S = s, y = +) - P(\hat{y} \neq y \mid S = \bar{s}, y = +)$$
$$\delta_{FPR} = P(\hat{y} \neq y \mid S = s, y = -) - P(\hat{y} \neq y \mid S = \bar{s}, y = -)$$
$$\text{Disparate Mistreatment} = \delta_{FNR} + \delta_{FPR}$$

(A code sketch of these measures follows the example table below.)
        F1    F2    S    y    ŷ
User1   f11   f12   s    +    -
User2   f21              -    +
User3   f31   f23   s    +    -
…       …     …     …    …    …
Usern   fn1              +    +
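As a minimal, illustrative sketch (my own code, not from the talk; it assumes labels in {-1, +1} and takes absolute values so the two Dis.Mis. terms cannot cancel), the measures above can be computed from predictions as follows:

```python
import numpy as np

def fairness_measures(y_true, y_pred, protected):
    """Group-fairness measures for binary classification.

    y_true, y_pred: arrays with labels in {-1, +1}
    protected: boolean mask, True for the protected group s
    """
    s, ns = protected, ~protected

    # Statistical parity difference: P(y_hat=+ | s) - P(y_hat=+ | s_bar)
    stat_parity = np.mean(y_pred[s] == 1) - np.mean(y_pred[ns] == 1)

    def fnr(g):  # error rate on the positive class within group g
        return np.mean(y_pred[g & (y_true == 1)] != 1)

    def fpr(g):  # error rate on the negative class within group g
        return np.mean(y_pred[g & (y_true == -1)] != -1)

    d_fnr = fnr(s) - fnr(ns)           # equal-opportunity violation
    d_fpr = fpr(s) - fpr(ns)
    dis_mis = abs(d_fnr) + abs(d_fpr)  # disparate mistreatment
    return stat_parity, d_fnr, d_fpr, dis_mis
```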
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure: pipeline Data + Algorithms → Models → Applications (hiring, banking, healthcare, education, autonomous driving, …), annotated with the three intervention points: pre-processing, in-processing and post-processing approaches]
Mitigating bias: pre-processing approaches
 Intuition: making the data more fair will result in a less unfair model
 Idea: balance the protected and non-protected groups in the dataset
 Design principle: minimal data interventions (to retain data utility for the
learning task)
 Different techniques:
 Instance class modification (massaging), (Kamiran & Calders, 2009),(Luong,
Ruggieri, & Turini, 2011)
 Instance selection (sampling), (Kamiran & Calders, 2010) (Kamiran & Calders,
2012)
 Instance weighting, (Calders, Kamiran, & Pechenizkiy, 2009)
 Synthetic instance generation (Iosifidis & Ntoutsi, 2018)
 …
Mitigating bias: pre-processing approaches: Massaging
 Change the class label of carefully selected instances (Kamiran & Calders, 2009).
 The selection is based on a ranker which ranks the individuals by their probability of receiving the favorable outcome.
 The number of massaged instances depends on the fairness measure (group fairness); a rough sketch follows below.
[Figure illustrating massaging; image credit: Vasileios Iosifidis]
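As a rough illustration only (my own sketch under simplifying assumptions, not the authors’ code): promote the protected-group negatives that the ranker scores highest and demote the lowest-scored non-protected positives, until the two groups’ positive rates match. The ranker here is an arbitrary probabilistic classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def massage(X, y, protected):
    """Flip the labels of borderline instances until statistical parity
    holds on the training data (sketch in the spirit of Kamiran & Calders 2009).

    X: feature matrix; y: labels in {0, 1}; protected: boolean mask for group s.
    """
    y = y.copy()
    ranker = LogisticRegression(max_iter=1000).fit(X, y)
    scores = ranker.predict_proba(X)[:, 1]   # P(favorable outcome)

    # Promotion candidates: protected negatives, best-scored first.
    prom = np.where(protected & (y == 0))[0]
    prom = prom[np.argsort(-scores[prom])]
    # Demotion candidates: non-protected positives, worst-scored first.
    dem = np.where(~protected & (y == 1))[0]
    dem = dem[np.argsort(scores[dem])]

    # Flip one pair at a time until the groups' positive rates match.
    for i, j in zip(prom, dem):
        if y[protected].mean() >= y[~protected].mean():
            break  # statistical parity reached on the training data
        y[i], y[j] = 1, 0
    return y
```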
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure repeated: pipeline with the three intervention points, as above]
Mitigating bias: in-processing approaches
 Intuition: working directly with the algorithm allows for better control
 Idea: explicitly incorporate the model’s discrimination behavior in the
objective function
 Design principle: “balancing” predictive- and fairness-performance
 Different techniques:
 Regularization (Kamiran, Calders & Pechenizkiy, 2010),(Kamishima, Akaho,
Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012) (Zhang
& Ntoutsi, 2019)
 Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)
 Training on latent target labels (Krasanakis, Xioufis, Papadopoulos &
Kompatsiaris, 2018)
 In-training altering of data distribution (Iosifidis & Ntoutsi, 2019)
 …
Mitigating bias: in-processing approaches: change the objective function 1/2
 An example with decision tree (DT) classifiers
 Traditional DTs use the pureness of the data (in terms of class labels) to decide which attribute to use for splitting
 Which attribute to choose for splitting? A1 or A2?
 The goal is to select the attribute that is most useful for classifying examples
 it helps us be more certain about the class after the split
 we would like the resulting partitioning to be as pure as possible
 Pureness is evaluated via entropy; a partition is pure if all its instances belong to the same class
 But traditional measures (Information Gain, Gini Index, …) ignore fairness
Mitigating bias: in-processing approaches: change the objective function 2/2
 We introduce the fairness gain (FG) of an attribute
 Disc(D) corresponds to statistical parity (group fairness)
 We introduce the joint criterion, fair information gain (FIG), which evaluates the suitability of a candidate splitting attribute A in terms of both predictive performance and fairness (a hedged reconstruction follows after the citation below).
[Figure: a split of node D into partitions D1 and D2]
W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
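The FG and FIG formulas were images on the slides and did not survive extraction. The following is my reconstruction by analogy with information gain; treat it as an approximation and see the IJCAI 2019 paper for the exact definitions. Here Disc(D) is the statistical-parity difference measured on dataset D, and the D_v are the partitions induced by splitting on attribute A:

$$FG(D, A) = |Disc(D)| - \sum_{v \in values(A)} \frac{|D_v|}{|D|} \, |Disc(D_v)|$$

$$FIG(D, A) = \begin{cases} IG(D, A) & \text{if } FG(D, A) = 0 \\ IG(D, A) \cdot FG(D, A) & \text{otherwise} \end{cases}$$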
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Figure repeated: pipeline with the three intervention points, as above]
Mitigating bias: post-processing approaches
 Intuition: start with predictive performance
 Idea: first optimize the model for predictive performance and then tune
for fairness
 Design principle: minimal interventions (to retain model predictive
performance)
 Different techniques:
 Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders &
Verwer, 2010)
 Correct the class labels (Kamiran et al., 2010)
 Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt,
Price, & Srebro, 2016)
 Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík,
Langford, & Wallach, 2018)
 …
Mitigating bias: post-processing approaches: shift the decision boundary
 An example of decision boundary shift (a sketch follows the citation below)
V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
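A minimal sketch of the idea (my own illustration, not the DEXA 2019 algorithm): keep the trained model fixed and lower the protected group’s decision threshold until the two groups’ positive rates (approximately) match. The function name, the 0.01 step and the target_gap parameter are arbitrary choices.

```python
import numpy as np

def shift_boundary(scores, protected, target_gap=0.01):
    """Find a decision threshold for the protected group s such that both
    groups receive (approximately) equal positive rates.

    scores: model confidence for the positive class, in [0, 1]
    protected: boolean mask for group s
    Returns (theta_s, theta_ns): per-group decision thresholds.
    """
    theta_ns = 0.5                     # non-protected group keeps the default
    pos_rate_ns = np.mean(scores[~protected] >= theta_ns)

    # Sweep the protected group's threshold downwards from the default.
    theta_s = theta_ns
    for theta in np.arange(0.5, 0.0, -0.01):
        theta_s = theta
        if abs(np.mean(scores[protected] >= theta) - pos_rate_ns) <= target_gap:
            break
    return theta_s, theta_ns
```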
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Accounting for bias
 Algorithmic accountability refers to the assignment of responsibility for
how an algorithm is created and its impact on society (Kaplan et al, 2019).
 Many facets of accountability for AI-driven algorithms and different
approaches
 Proactive approaches:
 Bias-aware data collection, e.g., for Web data, crowd-sourcing
 Bias-description and modeling, e.g., via ontologies
 ...
 Retroactive approaches:
 Explaining AI decisions in order to understand whether decisions are biased
 What is an explanation? Explanations w.r.t. legal/ethical grounds?
 Using explanations for fairness-aware corrections (inspired by Schramowski et al, 2020)
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Fairness with sequential learners (boosting)
 Sequential ensemble methods generate base learners in a sequence
 The sequential generation of base learners promotes the dependence between
the base learners.
 Each learner learns from the mistakes of the previous predictor
 The weak learners are combined to build a strong learner
 Popular examples: Adaptive Boosting (AdaBoost), Extreme Gradient Boosting
(XGBoost).
 Our base model is AdaBoost (Freund and Schapire, 1995), a sequential ensemble method that, in each round, re-weights the training data to focus on misclassified instances (a compact sketch follows the figure below).
[Figure: Round 1: weak learner h1 → Round 2: weak learner h2 → Round 3: weak learner h3 → final strong learner H()]
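For orientation, here is a compact sketch of the vanilla AdaBoost loop that the following slides build on (the standard algorithm, in my own code):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """Vanilla AdaBoost with decision stumps; y has labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # start from uniform instance weights
    learners, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.sum(w[pred != y]) / np.sum(w)
        if err >= 0.5:                 # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)  # boost misclassified instances
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)

    # The final strong learner H() is a weighted vote of the weak learners.
    def H(Xq):
        return np.sign(sum(a * h.predict(Xq) for a, h in zip(alphas, learners)))
    return H
```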
Intuition behind using boosting for fairness
 It is easier to make “fairness-related interventions” in simpler models
rather than complex ones
 We can use the whole sequence of learners for the interventions instead
of the current one
Limitations of related work
 Existing works evaluate predictive performance in terms of the overall
classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et
al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]
 In the case of class imbalance, ER is misleading
 Most of the datasets, however, suffer from imbalance
 Moreover, Dis.Mis. is “oblivious” to the class-imbalance problem
From Adaboost to AdaFair
 We tailor AdaBoost to fairness
 We introduce the notion of cumulative fairness that assesses the fairness of
the model up to the current boosting round (partial ensemble).
 We directly incorporate fairness in the instance weighting process
(traditionally focusing on classification performance).
 We optimize the number of weak learners in the final ensemble based on the balanced error rate, thus directly considering class imbalance in the best-model selection.
$$ER = 1 - \frac{TP + TN}{TP + FN + TN + FP}$$
V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting", ACM CIKM 2019.
AdaFair: Cumulative boosting fairness
 Let j: 1…T be the current boosting round; T is user-defined
 Let H_{1:j} denote the partial ensemble up to the current round j
 The cumulative fairness of the ensemble up to round j is defined based on the parity in the predictions of the partial ensemble between protected and non-protected groups (a sketch follows below)
 “Forcing” the model to consider “historical” fairness over all previous rounds, instead of just focusing on the current round h_j(), results in better classifier performance and model convergence.
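A sketch of what “cumulative” means here (my own illustration; the paper’s exact definition of the partial ensemble’s fairness may differ). Fairness is measured on the predictions of the partial ensemble H_{1:j}, not on the current weak learner alone; the helpers reuse the AdaBoost sketch above:

```python
import numpy as np

def partial_ensemble_predict(learners, alphas, X, j):
    """Prediction of the partial ensemble H_{1:j} (first j weak learners)."""
    votes = sum(a * h.predict(X) for a, h in zip(alphas[:j], learners[:j]))
    return np.sign(votes)

def cumulative_fairness(learners, alphas, X, y, protected, j):
    """Parity of the partial ensemble's per-class errors between groups."""
    pred = partial_ensemble_predict(learners, alphas, X, j)

    def err(g, cls):  # error rate on class cls within group g
        return np.mean(pred[g & (y == cls)] != cls)

    d_fnr = err(protected, +1) - err(~protected, +1)
    d_fpr = err(protected, -1) - err(~protected, -1)
    return d_fnr, d_fpr
```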
AdaFair: fairness-aware weighting of instances
 Vanilla AdaBoost already boosts misclassified instances for the next round
 Our weighting explicitly targets fairness by additionally boosting discriminated groups for the next round
 The data distribution at boosting round j+1 is updated accordingly (the slide formula was an image; a hedged reconstruction follows below)
 The fairness-related cost u_i of instances x_i ∈ D which belong to a group that is discriminated enters this update (see the same sketch)
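The update formulas were images on the slides. The reconstruction below follows my reading of the CIKM 2019 paper and should be treated as an approximation: weights are boosted for misclassified instances as in AdaBoost, and instances of a currently discriminated group receive an extra multiplicative fairness cost u_i:

```python
import numpy as np

def adafair_reweight(w, alpha, y, pred, u):
    """Approximate AdaFair-style weight update for round j+1.

    w: current instance weights; y, pred: labels in {-1, +1};
    u: fairness-related cost, u_i > 0 for instances of the group that the
       partial ensemble currently discriminates against, 0 otherwise.
    """
    w = w * np.exp(alpha * (pred != y))  # AdaBoost: boost misclassified ...
    w = w * (1 + u)                      # ... plus an extra fairness boost
    return w / w.sum()                   # renormalize to a distribution
```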
AdaFair: optimizing the number of weak learners
 Typically, the number of boosting rounds / weak learners T is user-defined
 We propose to select the optimal subsequence of learners 1…θ, θ ≤ T, that minimizes the balanced error rate (BER)
 In particular, we consider both ER and BER in the objective function

$$\arg\min_{\theta} \; c \cdot BER_{\theta} + (1 - c) \cdot ER_{\theta} + \text{Dis.Mis.}_{\theta}$$

 The result of this optimization is a final ensemble model with Eq.Odds fairness (a sketch of the selection step follows below)
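A sketch of the selection step, reusing the helpers above (illustrative only; BER is taken here as the mean of the per-class error rates):

```python
import numpy as np

def select_theta(learners, alphas, X, y, protected, c=0.5):
    """Pick the ensemble prefix length theta that minimizes the objective
    c * BER + (1 - c) * ER + Dis.Mis. on a validation set."""
    best_theta, best_obj = 1, np.inf
    for theta in range(1, len(learners) + 1):
        pred = partial_ensemble_predict(learners, alphas, X, theta)
        er = np.mean(pred != y)
        ber = 0.5 * (np.mean(pred[y == 1] != 1) + np.mean(pred[y == -1] != -1))
        d_fnr, d_fpr = cumulative_fairness(learners, alphas, X, y, protected, theta)
        obj = c * ber + (1 - c) * er + abs(d_fnr) + abs(d_fpr)
        if obj < best_obj:
            best_theta, best_obj = theta, obj
    return best_theta
```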
Experimental evaluation
 Datasets of varying imbalance
 Baselines
 AdaBoost [Sch99]: vanilla AdaBoost
 SMOTEBoost [CLHB03]: AdaBoost with SMOTE for imbalanced data.
 Krasanakis et al. [KXPK18]: Boosting method which minimizes Dis.Mis. by approximating
the underlying distribution of hidden correct labels.
 Zafar et al. [ZVGRG17]: Trains a logistic regression model with convex-concave constraints to minimize Dis.Mis.
 AdaFair NoCumul: Variation of AdaFair that computes the fairness weights based on
individual weak learners.
Experiments: Predictive and fairness performance
 Adult census income dataset (class ratio 1+ : 3-)
 Bank dataset (class ratio 1+ : 8-)
[Result plots omitted; in them, larger values are better, except for Dis.Mis., where lower is better.]
 Our method achieves high balanced accuracy and low discrimination (Dis.Mis.) while maintaining high TPRs and TNRs for both groups.
 The methods of Zafar et al. and Krasanakis et al. eliminate discrimination by rejecting more positive instances (lowering TPRs).
Cumulative vs non-cumulative fairness
 Cumulative vs non-cumulative fairness impact on model performance
 Cumulative notion of fairness performs better
 The cumulative model (AdaFair) is also more stable than its non-cumulative counterpart (the non-cumulative variant’s standard deviation is higher)
(Note: in these plots, Eq.Odds refers to the Dis.Mis. measure.)
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Wrapping-up, ongoing work and future directions
 In this talk I focused on the myth of algorithmic objectivity and
 the reality of algorithmic bias and discrimination, and how algorithms can pick up biases existing in the input data and further reinforce them
 A large body of research already exists, but
 it focuses mainly on fully-supervised batch learning with single (and typically binary) protected attributes and binary classes
 Moving from batch learning to online learning
 it targets bias in some step of the analysis pipeline, but biases/errors might be propagated and even amplified (unified approaches are needed)
 Moving from isolated approaches (pre-, in- or post-processing) to combined approaches
58
Eirini Ntoutsi Bias in AI systems: A multi-step approach
V. Iosifidis, E. Ntoutsi, "FABBOO - Online Fairness-aware Learning under Class Imbalance", DS 2020.
T. Hu, V. Iosifidis, W. Liao, H. Zang, M. Yang, E. Ntoutsi, B. Rosenhahn, "FairNN - Conjoint Learning of Fair Representations for Fair Decisions", DS 2020.
Wrapping-up, ongoing work and future directions
 Moving from single-protected-attribute fairness-aware learning to multi-fairness
 Existing legal studies define multi-fairness as compound, intersectional and
overlapping [Makkonen 2002].
 Moving from fully-supervised learning to unsupervised and reinforcement
learning
 Moving from myopic (maximize short-term/immediate performance) solutions
to non-myopic ones (that consider long-term performance) [Zhang et al,2020]
 Actionable approaches (counterfactual generation)
Thank you for your attention!
Questions?
https://nobias-project.eu/
@NoBIAS_ITN
https://lernmint.org/
Feel free to contact me:
• eirini.ntoutsi@fu-berlin.de
• @entoutsi
• https://www.mi.fu-berlin.de/en/inf/groups/ag-KIML/index.html
https://www.bias-project.org/
