Digital information and communication technologies permeate almost every area of life, while the possibilities for processing and analyzing large amounts of data continue to grow. Continuously recorded data streams make it possible to draw conclusions about the probability that individuals will buy a product, pay back a loan, vote for a party, or pay their rent. This knowledge can be used for targeted offers, but it also carries the risk of discrimination.
Discrimination already occurs when individuals or groups of people are excluded from certain offers or services, or never receive an offer in the first place. To make matters worse, algorithms that are not transparent to laypersons increasingly make decisions with far-reaching consequences for their self-determination and personality development.
In the research project “Risks of discrimination by algorithms”, ITAS scientists aim to gather and analyze cases in which discrimination by digital algorithms has become public. The goal is to determine the reasons, contexts, types, and dimensions of discrimination risks.
Building on this, the project is to identify needs for action in the sense of improved “algorithmic accountability”, for example regarding necessary reforms of the existing legal framework. The researchers aim to transfer suggestions from international discussions to the institutional framework of the Federal Republic of Germany and to present a list of policy options.
The project is funded by the Federal Anti-Discrimination Agency for a period of one year. (09.04.2018)