Social trust in learning systems

Project description

Within the framework of basic-funded in-house research, this project addresses research questions that, in the context of program-oriented funding, contribute to the profile development of ITAS in the Helmholtz Research Field Information, thematic area “Learning Systems”.

The research focuses primarily on developing and framing a context-specific understanding of the processes, methods, and consequences of learning systems as “enabling technologies”, as well as on ways of responsibly handling their potential risks in various fields of application. Because automated applications and decision-making systems also raise the overarching problem of trust in information systems, the research concentrates in particular on questions of societal trust in learning systems. It builds on previous and parallel work by the project members on risk research, autonomous driving, the automation of work, adversarial AI, explainable AI, and the philosophy of science of the computational sciences, which will be developed further and applied within the thematic field. An important task of the project is to bring together the previously unconnected work on this topic at ITAS and to develop a common perspective. This includes organizing knowledge transfer across projects and research groups and intensifying cooperation with the KIT computer science institutes.

Publications

2022
Presentations
Bareis, J.
Trustworthy AI. What does it actually mean?
2022. WTMC PhD Spring Workshop "Trust and Truth" (2022), Deursen-Dennenburg, Netherlands, April 6–8, 2022 
Bareis, J.
Two layers of trust: Discussing the relationship of an ethical and a regulatory dimension towards trustworthy AI
2022. Trust in Information (TIIN) Research Group at the Höchstleistungsrechenzentrum (HLRS) Stuttgart (2022), Stuttgart, Germany, February 16, 2022 
Bareis, J.; Heil, R.
Trust (erosion) in AI regulation. Dimensions, Drivers, Contradictions?
2022. 20th Annual STS Conference: Critical Issues in Science, Technology and Society Studies (2022), Graz, Austria, May 2–4, 2022
2021
Book Chapters
Heil, R.
Künstliche Intelligenz/Maschinelles Lernen [Artificial intelligence/machine learning]
2021. Handbuch Technikethik. Ed.: A. Grunwald, 424–428, J.B. Metzler. doi:10.1007/978-3-476-04901-8_81
Presentations
Bareis, J.
Trust (erosion) in AI regulation. Dimensions, Drivers, Contradictions?
2021. International Lecture Series by Fudan University: Trust and AI (2021), Online, December 14, 2021 
Bareis, J.
Zwischen Agenda, Zwang und Widerspruch. Der liberale Staat und der Fall KI [Between agenda, coercion, and contradiction: the liberal state and the case of AI]
2021. NTA9-TA21: Digital, Direkt, Demokratisch? Technikfolgenabschätzung und die Zukunft der Demokratie (2021), Online, May 10–12, 2021 
Heil, R.
Collecting Data, Tracing & Tracking
2021. Big Data-Hype: Aus den Augen, aus dem Sinn? „Das Öl des 21. Jahrhunderts“: Früh wieder still und zur Selbstverständlichkeit geworden ("Deep Dive" Online-Experten-Roundtable 2021), Cologne, Germany, March 10, 2021 
Jahnel, J.
Herausforderungen bei der Regulierung von Deepfakes [Challenges in regulating deepfakes]
2021. Fachkonferenz „Vertrauen im Zeitalter KI-gestützter Bots und Fakes: Herausforderungen und mögliche Lösungen“ (2021), Online, November 4, 2021 
Nierling, L.
Technikfolgenabschätzung für eine digitale Arbeitswelt [Technology assessment for a digital world of work]
2021. Zukunftsforum Schweinfurt: Robotik und digitale Produktion (2021), Schweinfurt, Germany, June 7, 2021
Renftle, M.; Trittenbach, H.; Müssener, C.; Böhm, K.; Poznic, M.; Heil, R.
Evaluating the Effect of XAI on Understanding of Machine Learning Models
2021. Philosophy of Science meets Machine Learning (2021), Tübingen, Germany, November 9–12, 2021 
2020
Presentations
Bareis, J.; Bächle, T. C.
Sociotechnical weapons: AI myths as national power play
2020. Locating and Timing Matters: Significance and agency of STS in emerging worlds (EASST/4S 2020), Online, August 18–21, 2020 

Contact

Reinhard Heil
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-26815