Ethical aspects of artificial intelligence: impact of moral status on AI ethics and embedding AI ethics

Project description

The cumulative dissertation deals with the question of how to embed an ethics of artificial intelligence (AI). AI technologies provide society with new opportunities to make life easier in both private and professional contexts. However, these changes not only offer opportunities but also entail risks, uncertainties, and ignorance. Based on an objective evaluation framework, AI ethics can help protect what is valuable and prevent harm.

What do we mean when we call for AI ethics? AI ethics essentially aims to implement ethical values and principles in AI systems, or at least to develop approaches to such an implementation. Ethical values and principles can revolve around privacy, non-discrimination, responsibility, trust, safety, public welfare, sustainability, etc. This raises challenging questions: Which specific values and principles should be embedded? How can these values and principles be embedded in AI systems? How should they be transferred into the systems so that they are comprehensible to users, different stakeholders, or the legal system? How can we measure whether the relevant values and principles are actually implemented? How can this be guaranteed?

Current research does not sufficiently address these issues. First, many AI ethics guidelines do not explain what is meant by moral agency. If it is not clear to everyone involved who the moral agent is in relation to AI, it is not possible to understand the relationship between values, principles, and AI systems. The aim of the first work package is to answer the question of moral agency in AI systems from a philosophical perspective.

The second work package deals with the understanding and definition of values and principles in AI ethics. Focusing solely on ethical requirements is not enough to ensure the practical implementation of AI ethics; this also requires legal regulations for AI. To close a research gap, this work package will conduct a comparative study of AI ethics guidelines and legal regulations, particularly at the European level. The study will make it possible to identify the need for legal action to ensure the integration of ethical aspects in AI.

After defining essential values and principles, the question arises of how to implement them in a concrete AI system developed for a specific application. The aim of the third work package is therefore to develop an approach for implementing AI ethics in AI systems and to evaluate its application in various AI systems used in different domains. An AI-assisted rescue robot used for reconnaissance and defense against acute radiological hazards serves as a case study.

Contact

Désirée Martin, M.A.
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany

Tel.: +49 721 608-26919