
Contribution Details

Type Journal Article
Scope Discipline-based scholarship
Title Bias, awareness, and ignorance in deep-learning-based face recognition
Organization Unit
Authors
  • Samuel Wehrli
  • Corinna Hertweck
  • Mohammadreza Amirian
  • Stefan Glüge
  • Thilo Stadelmann
Item Subtype Original Work
Refereed Yes
Status Published in final form
Language
  • English
Journal Title AI and Ethics
Publisher Springer
Geographical Reach international
ISSN 2730-5953
Volume 2
Number 3
Page Range 509 - 522
Date 2022
Abstract Text Face Recognition (FR) is increasingly influencing our lives: we use it to unlock our phones; the police use it to identify suspects. Two main concerns are associated with this increase in facial recognition: (1) the fact that these systems are typically less accurate for marginalized groups, which can be described as “bias”, and (2) the increased surveillance enabled by these systems. Our paper is concerned with the first issue. Specifically, we explore an intuitive technique for reducing this bias, namely “blinding” models to sensitive features, such as gender or race, and show why this cannot be equated with reducing bias. Even when not designed for this task, facial recognition models can deduce sensitive features, such as gender or race, from pictures of faces—simply because they are trained to determine the “similarity” of pictures. This means that people with similar skin tones, similar hair length, etc. will be seen as similar by facial recognition models. When confronted with biased decision-making by humans, one approach taken in job application screening is to “blind” the human decision-makers to sensitive attributes such as gender and race by not showing pictures of the applicants. Based on a similar idea, one might think that if facial recognition models were less aware of these sensitive features, the difference in accuracy between groups would decrease. We evaluate this assumption—which has already entered the scientific literature as a valid de-biasing method—by measuring how “aware” models are of sensitive features and correlating this with differences in accuracy. In particular, we blind pre-trained models to make them less aware of sensitive attributes. We find that awareness and accuracy do not positively correlate, i.e., that bias ≠ awareness. In fact, blinding barely affects accuracy in our experiments. The seemingly simple solution of decreasing bias in facial recognition by reducing awareness of sensitive features thus does not work in practice: trying to ignore sensitive attributes is not a viable concept for less biased FR.
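A minimal sketch of the kind of analysis the abstract describes, not the authors' actual implementation: a linear probe on face embeddings estimates how "aware" a model is of a sensitive attribute, and "blinding" removes the direction the probe exploits. All data, names, and the projection-based blinding step are illustrative assumptions.

```python
# Hypothetical sketch: probing "awareness" of a sensitive attribute in face
# embeddings, then "blinding" the embeddings by projecting out the probe
# direction. The embeddings and attribute labels are synthetic stand-ins,
# not the paper's data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for embeddings from a pre-trained FR model (n faces, d dimensions).
embeddings = rng.normal(size=(1000, 128))
# Stand-in for a binary sensitive attribute (e.g. perceived gender).
sensitive = rng.integers(0, 2, size=1000)

# "Awareness": how well a linear probe recovers the sensitive attribute from
# the embeddings (0.5 is chance level for a balanced binary attribute).
awareness = cross_val_score(
    LogisticRegression(max_iter=1000), embeddings, sensitive, cv=5
).mean()

# "Blinding": project the embeddings onto the subspace orthogonal to the
# probe's weight vector, removing the most linearly predictive direction.
probe = LogisticRegression(max_iter=1000).fit(embeddings, sensitive)
w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
blinded = embeddings - np.outer(embeddings @ w, w)

awareness_blinded = cross_val_score(
    LogisticRegression(max_iter=1000), blinded, sensitive, cv=5
).mean()

print(f"awareness before blinding: {awareness:.3f}")
print(f"awareness after blinding:  {awareness_blinded:.3f}")
# The paper's finding is that reducing awareness in this spirit does not
# reduce the accuracy gap between groups, i.e. bias != awareness.
```

In a real study, the accuracy gap between demographic groups would be measured on the original and blinded embeddings and correlated with the awareness scores; here the synthetic data only illustrates the mechanics.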
Free access at DOI
Digital Object Identifier 10.1007/s43681-021-00108-6
Other Identification Number merlin-id:21976
PDF File Download from ZORA
Keywords
  • Fairness
  • Convolutional neural networks
  • Discrimination
  • Ethnic bias
  • Gender bias