Interdisciplinary approaches to deepfakes

Project description

The term deepfake refers to realistic-looking media content (photos, audio, and video) that is manipulated or largely autonomously generated using specialized AI techniques. Deepfake technologies can find useful applications in areas such as culture, medicine, and education, but they can also be abused for criminal purposes. Designing the technology responsibly, guided by the principles of technology assessment and ahead of the further spread of deepfakes, is therefore an urgent task. Projects already underway at ITAS address the possible governance of AI (e.g., GOAL, funded by the BMBF) and advise the European Parliament on options for dealing with deepfakes (STOA).

Beyond these specific project tasks, this project focuses in particular on the added value that different disciplines bring to reflecting on this socio-technical phenomenon. To this end, the disciplinary approaches of informatics, (science) communication research, legal sciences, and qualitative social research to identifying socially relevant issues are presented and integrated in a workshop. Topics addressed include the social and ethical consequences of deepfakes, the question of adequate regulation, and the effect of awareness-building interventions. Aspects such as the credibility of and trust in different institutions, which are of great importance for information sharing and for the attitudes and decisions based on it, are also considered. This raises the question of what role different societal actors (researchers, political decision-makers, journalists, media companies, citizens) play, or should play, in the dissemination and detection of deepfakes.

Building on the exploration of these aspects, a joint interdisciplinary pilot study will be developed. This innovative, inclusive, and creative research approach is intended to unlock the long-term potential for developing larger projects in an emerging, internationally visible field of research.

Publications


2023
Presentations
Jahnel, J.; Nierling, L.
Deepfakes: A growing threat to the EU institutions’ security
2023. Interinstitutional Security and Safety Days (2023), Online, March 2, 2023 
2022
Presentations
Jahnel, J.
KI in der Digitalen Kommunikation
2022. Medien Triennale Südwest "KI & Medien gemeinsam gestalten" (#MTSW 2022), Saarbrücken, Germany, October 12, 2022 
Jahnel, J.; Hägle, O.; Hauser, C.; Escher, S.; Heil, R.; Nierling, L.
Deepfakes & Co: Digital misinformation as a challenge for democratic societies
2022. 5th European Technology Assessment Conference "Digital Future(s). TA in and for a Changing World" (ETAC 2022), Karlsruhe, Germany, July 25–27, 2022 
Jahnel, J.; Nierling, L.
Über die Folgen von perfekten Täuschungen und die Herausforderungen für eine multidimensionale Regulierung
2022. "Wie viel Wahrheit vertragen wir?" - Ringvorlesung / Universität Köln (2022), Online, June 28, 2022 
Jahnel, J.; Nierling, L.
Deepfakes: die Kehrseite der Kommunikationsfreiheit
2022. Gastvortrag Universität Graz, Siebente Fakultät (2022), Online, June 1, 2022 
Jahnel, J.; Nierling, L.
Deepfakes: Zwischen Faszination, Fiktion und realer Gefährdung
2022. Informatics Inside '22 - It's future: Informatik Konferenz der Hochschule Reutlingen (2022), Online, May 11, 2022 
Jahnel, J.; Nierling, L.
More Fragile Than You Think - Herausforderungen durch Deepfakes für Personen, Organisationen und Gesellschaft
2022. Webinar des KIT-IRM für KIT-Business-Club (2022), Online, March 29, 2022 
2021
Presentations
Jahnel, J.
Technikfolgenabschätzung von Deepfakes
2021. Fake News & Demokratie, Kick-Off zum Bürger*innenforum auf Facebook (2021), Online, May 31, 2021 

Contact

Dr. Jutta Jahnel
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-26133
E-mail