Deepfakes: Project puts those affected in focus
The number of deepfakes shared on the internet is growing rapidly: while around 15,000 were identified in 2019, the figure had risen to almost 100,000 by 2023 (Securityhero, 2023), and today it is likely in the millions. To make matters worse, even experts can now hardly distinguish media content created with AI tools from real videos or audio recordings. These near-perfect fakes are increasingly used for harmful or fraudulent purposes, such as obtaining passwords, account details, or company secrets.
Around 98 percent of deepfakes are pornographic
The vast majority of deepfakes, however, are sexualized content with serious consequences for those directly affected. An estimated 98% of all deepfakes contain pornographic content, and 99% of these use images of women. Exact figures remain uncertain due to the high number of unreported cases and rapid technological developments.
A recent example is the mass publication of fake pornographic content based on photos of real people on the microblogging platform X. Due to its wide reach and low moderation barriers (see Dana Mahr: The algorithm that hates for our own good, 2025), this service is particularly susceptible to the spread of such content.
People affected by such abusive fakes, which violate their personal rights, are the focus of the project “Protecting the privacy and other citizens’ rights from misuse through deepfakes,” or DEEP-PRISMA for short, which was launched at ITAS in February 2026. “So far, little is known about how victims actually deal with deepfake abuse – for example, whether they take legal action, seek psychological support, or change their online behavior. This is where DEEP-PRISMA comes in,” says Jutta Jahnel, who heads the project at ITAS.
Reliable data and “deepfake action kit”
The researchers want to collect reliable data on how those affected deal with deepfakes by conducting broad-based surveys among school students and young adults. Based on this, representatives of these potentially highly affected groups will be invited to participate in moderated discussions, known as focus groups.
Within this framework, they will jointly develop a “deepfake action kit” providing target-group-specific recommendations for action and practical information on initiatives and contact points – for example, for use in schools or youth work. “The action kit could, for instance, help a young woman who is confronted with a sexualized deepfake: It shows the first steps for securing evidence, refers to counseling centers, and explains the legal options available,” explains ITAS researcher Dana Mahr from the project team.
Legal rules under scrutiny
Together with its project partners, the DEEP-PRISMA team also evaluates existing legal regulations and their enforcement – for example, the European Union’s Digital Services Act and AI Act. Jutta Jahnel explains the background: “There is a whole range of different regulations governing the creation and distribution of deepfakes. In practice, however, these are often too vaguely worded and insufficient for the criminal prosecution of abuse.” Adding to the problem, perpetrators often remain anonymous in the digital space, and many cases of abusive deepfake use are never even reported. DEEP-PRISMA therefore intends to develop recommendations for improving existing regulations.
The project is funded by the Federal Ministry of Research, Technology and Space for three years. Project partners are the Fraunhofer Institute for Systems and Innovation Research (ISI), the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen, and the Chair for Public Law, IT Law and Environmental Law at the University of Kassel. (23.02.2026)
Further links:
- DEEP-PRISMA project page

