How is generative AI changing knowledge acquisition?
- Project team:
Reinhard Heil (project leader); Clemens Ackerl, Angelina Sophie Dähms, Wolfgang Eppler, Felix Gnisa, Jutta Jahnel (project coordinator)
- Funding:
Federal Ministry of Research, Technology and Space
- Start date:
2026
- End date:
2026
- Research group:
Project description
The project addresses the question of how generative artificial intelligence (AI) is changing knowledge acquisition, particularly in problem-oriented research. Scientific knowledge acquisition is undergoing a profound transformation. With the advent of powerful AI methods, especially large language models (LLMs), traditional forms of literature review, knowledge representation, and synthesis are increasingly being supplemented or even replaced by automated processes. The discussion about the impact of AI on scientific knowledge acquisition cannot be conducted from a technical perspective alone; it also has epistemological and social dimensions.
The following will be examined:
- Epistemic foundations of generative AI: Generative AI systems rest on epistemic models of knowledge generation that are based primarily on statistical methods and machine learning. This distinguishes them significantly from human cognitive practices, in which meaning is socially situated and is grasped and produced hermeneutically through practice. An important prerequisite for the responsible use of generative AI in scientific knowledge acquisition is that users understand the basic functioning and characteristics of this technology. To this end, a generally understandable introduction to how generative AI works, the type of knowledge it produces, and the associated challenges will be provided, with particular attention to the requirements of scientific practice.
- Dynamics of trust, trustworthiness, and transfer of responsibility: This research investigates how trust in generative AI arises, is adjusted, and is justified within knowledge acquisition processes. The focus is on the extent to which familiarity with generative AI is a key moment in the transfer of epistemic responsibility, and on how processes of anthropomorphization shape the development of trust. The aim is to gain a deeper understanding of the dynamics of trust, trustworthiness, and trust adjustment, and to derive how these relate to the transfer of responsibility in AI-supported knowledge processes. To this end, a narrative literature analysis of trust, trustworthiness, and trust adjustment in human-technology interaction will be conducted, with a particular focus on generative AI.
- Opportunities and risks of new divisions of labor in professional knowledge acquisition and changes in scientific knowledge production: The study examines the tasks for which practitioners and experts use generative AI in professional knowledge acquisition, the (new) skills this requires, and the potential benefits and risks for quality standards in research organizations and science. In addition, the study examines the actual practices and experiences of professionals in science and research in order to explore new divisions of labor between humans and technical systems, focusing in particular on the creation and maintenance of knowledge integrity and research sovereignty. To this end, expert interviews and focus group discussions will be conducted with scientific practitioners at various academic institutions and research facilities (scientific publishers and libraries, scientific organizations, open science, and open data).
These investigations are carried out in the continuation phase of the interdisciplinary project “Systemic and Existential Risks of AI”. This phase addresses questions about the concrete consequences of generative AI for processes, structures, and actors in various application contexts.
Contact
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany
Tel.: +49 721 608-26815
E-mail
