Uncontrollable artificial intelligence: An existential risk?

Project description

The subproject addresses the question of whether AI could escape human control in the future and thus pose an existential risk to humanity. The discourse surrounding this fear is analyzed by systematizing and evaluating its arguments, along with the narratives and visions that lend it meaning. A narrative literature review synthesizes and analyzes the literature on “uncontrollable artificial intelligence”. Building on this review, a Delphi survey will be conducted to gather expert assessments in as much detail as possible. In addition, qualitative interviews with experts will be conducted to evaluate the results of the Delphi survey and the literature review and to further concretize them for the German context.

The subproject “Uncontrollable artificial intelligence” is one half of the interdisciplinary project “Systemic and Existential Risks of AI”, in which possible systemic and existential risks of AI are investigated theoretically and empirically in two subprojects. The aim is to combine knowledge from different disciplinary fields in order to enable sound, in-depth assessments and recommendations. In addition to the project’s own research, external expert opinions on specific issues will be commissioned.

In contrast to the direct consequences of AI for affected individuals or companies, there are as yet no consistent approaches for assessing and responding to potential systemic or existential risks of AI. Recently, however, there have been concrete indications of systemic risks, for example the consequences of generative AI for society. Furthermore, the media and various stakeholders in the AI field point to existential risks for humanity as a whole that could accompany the unlimited and uncontrolled development of AI. Early consideration of possible risks and concerns is essential for the successful and socially acceptable implementation of AI technologies and thus crucial to their economic success.


Publications
Heil, R.; Wendt, K. von
Künstliche Intelligenz außer Kontrolle? [Artificial intelligence out of control?]
2024. TATuP – Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 33 (1), 64–67. doi:10.14512/tatup.33.1.64
Heil, R.
Unkontrollierbare künstliche Intelligenz – ein existenzielles Risiko? [Uncontrollable artificial intelligence – an existential risk?]
2023. Talk at the Montagskonferenz des Instituts für Übersetzen und Dolmetschen (2023), Heidelberg, Germany, November 23, 2023


Reinhard Heil
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe

Tel.: +49 721 608-26815