Systemic risks of artificial intelligence

Project description

To date, systemic risks have become widely known mainly through the financial crisis and climate change. Although they are conceptualized in very different ways, complex relationships, interactions, and interdependencies often play a role, as do emergent effects, feedback loops, and tipping points. Such risks can lead to widespread dysfunction or even the failure of an entire system. Systemic risks of AI applications have received little research attention to date, especially with regard to fundamental rights. However, there are already concrete indications of systemic risks arising from the coupling of AI-based services and products with foundation models such as large language models. The subproject analyzes the causes, specific mechanisms, and forms of damage of systemic risks of AI in order to derive suitable forms of governance and regulation from these analyses.

This subproject on systemic risks is part of the two-part interdisciplinary project “Systemic and Existential Risks of AI”, which investigates such risks both theoretically and empirically. The aim is to combine knowledge from different disciplines to enable sound, in-depth assessments and recommendations. In addition to our own research, external expert assessments on specific issues will be commissioned.

Whereas established approaches exist for assessing the direct consequences of AI for affected individuals or companies, there are as yet no consistent approaches for assessing and acting on its potential systemic or existential risks. Recently, however, concrete indications of systemic risks have emerged, for example from the consequences of generative AI for society. Furthermore, the media and various stakeholders in the field of AI point to existential risks for humanity as a whole that could be associated with unlimited and uncontrolled AI development. Early consideration of possible risks and concerns is essential for the successful and socially acceptable implementation of AI technologies and thus crucial for their economic success.

External assessments

As part of the project, external expert assessments on specific issues will be commissioned. Detailed descriptions of these assessments can be found in the following documents (for questions, please use the contact provided in each document):

Expert assessment description: systemic environmental risks of artificial intelligence (PDF)

Expert assessment description: systemic risks of interacting artificial intelligence (PDF)

Publications

2025
Presentations
Gazos, A.
Kritische Infrastrukturen und transformative Resilienz – zwischen Stabilisierungsversuchen und der Notwendigkeit des Wandels
2025. "Transformationssoziologie konkret" : Schader-Forum (2025), Darmstadt, Germany, June 30–July 1, 2025 
2024
Book Chapters
Orwat, C.
Risiko
2024. Digitalität von A bis Z. Eds.: Florian Arnold, Johannes C. Bernhardt, Daniel Martin Feige, Christian Schröter, 291–300, transcript Verlag
Presentations
Orwat, C.
Algorithmische Diskriminierungsrisiken – Ausweitung zu systemischen Risiken
2024. Digitale Räume, reale Risiken: Wie können wir digitale Räume gerechter und sicherer gestalten? (2024), Karlsruhe, Germany, October 23, 2024
Orwat, C.
Zum Umgang mit Restrisiken der Diskriminierung durch KI
2024. FRAIM Midterm Konferenz (2024), Universität Bonn, Germany, June 17–18, 2024
Orwat, C.; Staab, L.; Eppler, W.; Gazos, A.
Algorithmic bias and systemic risks of AI and platforms
2024. WIN Conference: Foundations and Perspectives of European Platform Regulation (2024), Heidelberg, Germany, September 19–20, 2024 

Contact

Dr. Carsten Orwat
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-26116