Which methods and approaches can technology design take up from the field of technology assessment to make AI systems transparent and human-centered?

Project description

Artificial intelligence (AI) is a technology with a high potential to change social life on many levels. Various assessments associate AI with technical, economic, or ecological progress or expect it to improve coexistence and quality of life across society. However, there is often a discrepancy in how AI is perceived and understood – within scientific, technical, and economic circles on the one hand, and between these circles and citizens with less affinity for technology on the other. In the public perception especially, AI appears diffuse and consequently difficult to understand. In social and public media, at symposia, and in political debates, AI is therefore discussed very controversially, from different points of view, and with different levels of expertise.

It is therefore of great importance to present AI technology, and systems that use AI methods, in a way that is understandable and explainable for both the public and professionals, i.e. transparent at all relevant levels. Making AI comprehensible and transparent, depending on the application or use context, must thus be a high-priority goal for technology design. If technology design succeeds in realizing these requirements at every stage of the development of AI systems, society can better assess and discuss the trustworthiness of AI, close gaps in understanding, and develop an acceptable and desirable relationship with AI. Furthermore, it is essential that AI systems are not realized merely because technical progress makes them possible, but that findings from the human sciences are incorporated into the AI design process. This ensures that AI addresses human needs and characteristics and that the well-being of the user comes first. The concern of some citizens that “at some point, AI will no longer serve human beings, but vice versa” must be taken seriously and adequately addressed by technology design. Otherwise, AI will be met with non-acceptance and mistrust on the part of the citizens.

The dissertation therefore aims to explore methods that allow a convergence between the technical possibilities of technology design and compliance with socially established norms (technology ethics). Furthermore, it aims to open up AI for technology assessment (TA) and to research methods for making AI comprehensible and transparent. The findings of the dissertation will be prepared for the TA community and subsequently made available to technology design in the form of recommendations for action. Applying the knowledge of TA, the dissertation also investigates whether a transparent design of AI can help avoid potentially undesirable consequences and changes, and how to facilitate coexistence with AI developed according to social and ethical standards.

Administrative data

Supervisor: Prof. Dr. Armin Grunwald
Advisor: Prof. Dr. Karsten Wendland
Doctoral students at ITAS: See Doctoral studies at ITAS

Contact

Pascal Vetter
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-23839
E-mail