Generative AI and changing human behavior

Project description

Generative artificial intelligence (AI) is increasingly being used in applications that recognize, influence, or control human behavior, as well as to differentiate between persons in a semi- or fully automated manner. With the use of generative AI, a wide range of mechanisms for influencing and controlling behavior are changing in how they work and in their individual and societal consequences. These mechanisms include, for example, social scoring, other forms of scoring and AI-based risk assessment, nudging, political or commercial microtargeting, algorithmic price differentiation, dark patterns, recommendation systems, and the interaction design of AI agents (agentic AI).

However, using generative AI to recognize and influence human behavior entails a variety of risks, such as threats to human dignity, informational self-determination, and the free development of the person, as well as risks of discrimination. In particular, there is a risk that the range of human behavior will be narrowed and quasi-standardized toward what certain actors consider desirable. Furthermore, systemic risks can arise because, among other things, generative AI models can be integrated into other AI systems, which can multiply the risks.

The aim of the research is to map the diverse governmental and private uses of generative AI that serve to recognize, influence, and control behavior, as well as to differentiate between individuals based on their behavior. It also aims to analyze their modes of action and the associated societal risks and their causes, to review the existing governance with its diverse regulations, and, where necessary, to identify needs and options for action.

These investigations are conducted in the continuation phase of the interdisciplinary project “Systemic and Existential Risks of AI.” This project phase addresses questions about the specific societal consequences of generative AI in various application contexts.

Contact

Dr. Carsten Orwat
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-26116