Whether in granting loans, selecting future personnel, or in predictive policing: thanks to comprehensive data sets, algorithms can prepare or execute decisions in more and more areas, sometimes with significant consequences. How great is the risk of discrimination associated with such decisions, and how can this risk be minimized?
In mid-September, ITAS published the study “Diskriminierungsrisiken durch Verwendung von Algorithmen” (Discrimination risks through the use of algorithms), financially supported by the Federal Anti-Discrimination Agency. According to the author, Carsten Orwat, algorithms must distinguish between groups of people in many of the fields mentioned. This becomes problematic, however, when the distinction is made on the basis of legally protected characteristics such as age, disability, ethnic origin, or gender.
Using examples, the study shows that these risks are quite real. For instance, there is a risk of unequal treatment if algorithms differentiate potential employees by gender, or if they reject credit seekers because of their ethnic origin.
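One common way to make such unequal treatment measurable is to compare selection rates between groups. The following sketch is purely illustrative and not taken from the study: it applies the widely used "four-fifths" rule of thumb, under which a ratio of selection rates below 0.8 is treated as a warning sign of adverse impact. The data and threshold are assumptions for demonstration.

```python
# Illustrative sketch (not from the study): a simple disparate-impact
# check comparing selection rates between two groups of applicants.
# The example data and the 0.8 ("four-fifths") threshold are assumptions.

def selection_rate(outcomes):
    """Fraction of applicants selected (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes of an automated screening step.
men   = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 selected -> rate 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 selected -> rate 0.375

ratio = disparate_impact_ratio(men, women)
print(round(ratio, 2))  # 0.5, well below the 0.8 rule of thumb
```

A check like this only detects a statistical disparity; whether it amounts to unlawful discrimination depends on the legal context the study discusses.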
Recommendations on regulation
The study also identifies ways to minimize these risks. One of its recommendations is to establish preventive measures, such as advisory services for personnel and IT managers. In addition, Carsten Orwat recommends reforms and clarifications in anti-discrimination and data protection law, as well as giving anti-discrimination agencies access to documentation, in order to better protect the rights of those affected. (27.09.2019)
Further links and information:
- Study “Diskriminierungsrisiken durch Verwendung von Algorithmen”
- Project page “Risks of discrimination by algorithms” on the ITAS website
- Federal Anti-Discrimination Agency press release on the presentation of the study