News about Our Work
Digital violence: What Europe can do and what Germany still needs to learn
The latest case involving actress Collien Fernandes exposes the gaps in criminal-law protection against digital violence in Germany. Her ex-husband Christian Ulmen is alleged to have abused her identity online for years: he reportedly created fake profiles, conducted sexualized conversations with hundreds of men in her name, and distributed pornographic material. Federal Minister of Justice Stefanie Hubig (SPD) has already responded and announced a Digital Violence Protection Act, to be presented in spring 2026. What gaps the case reveals, what the legal situation in Europe looks like, and where Germany urgently needs to catch up - an analysis.
What is digital sexual violence?
Digital sexual violence is not a marginal phenomenon. It covers a wide range of acts: the non-consensual distribution of intimate images (‘revenge porn’), the creation of AI-generated deepfake pornography, identity theft for sexualized communication, cyberstalking, sexualized harassment via social networks, and so-called cyberflashing - the unsolicited sending of intimate photos.
What unites all these forms: they deeply violate the sexual self-determination, dignity and mental health of those affected - mostly women. Sexualized violence does not require physical contact; it can trigger fear, powerlessness and shame even without any physical assault. Nevertheless, digital attacks are often perceived as ‘less real’ and taken less seriously - a problem that is also reflected in parts of the German legal system.
The European legal framework - an overview
In June 2024, the EU Directive on combating violence against women and domestic violence (Directive 2024/1385) entered into force - the first comprehensive legal instrument at EU level on the subject. The Directive requires all Member States to criminalise the non-consensual production or distribution of intimate images or manipulated material, including sexualised deepfakes. The transposition deadline is 14 June 2027.
In addition, the Directive criminalises cyberstalking, cyber harassment (including cyberflashing) and misogynistic hate speech online. This framework is complemented by the EU AI Act (Regulation 2024/1689), which introduces transparency obligations for AI-generated content, and the Digital Services Act (DSA), which requires platforms to carry out risk assessments and take appropriate measures. The legal framework at EU level is therefore in place, but transposition into national law is lagging significantly behind.
Examples in Europe: What already works
A look at other European countries shows that digital sexualized violence can be tackled more effectively. With the Online Safety Act 2023 and further reforms, the UK has established a particularly broad approach: both sharing and producing non-consensual intimate images and deepfakes are criminal offences, and platforms are required to address such content consistently. At the same time, it is clear that tightening the law alone is not enough - so far, only a few cases have actually led to charges.
Spain’s consent-based approach (‘only yes means yes’) pursues a broad protection framework that explicitly includes digital violence, but still has gaps in the specific regulation of deepfakes. Italy is currently working on legislative adjustments to better target the misuse of AI-generated content. Even where an independent criminal offence has not yet been fully implemented, the political trajectory is clear: digital violence is increasingly recognised as a problem in its own right.
Taken together, it is clear: Effective protection is legally possible - when legislation, platform responsibility and law enforcement are considered together.
Germany: Strengths, weaknesses and the gap
A recent study by Ina Bieber and Nathalie Leitgöb-Guzy, published by the Federal Ministry of Family Affairs, Senior Citizens, Women and Youth, the Federal Ministry of the Interior and the Federal Criminal Police Office, shows that women are affected by digital violence far more often than men. The forms recorded include identity abuse, doxxing (the unauthorized publication of private information), the misuse of smart-home devices for surveillance, image manipulation and deepfakes, online defamation, and harassment in online gaming. Only 2.4% of affected women report these incidents; the need for action is correspondingly high.
What the applicable law already covers
German criminal law is not entirely without protections. Even today, certain forms of digital violence can be prosecuted - such as insult, stalking (Nachstellung), or the unauthorized distribution of images from a person’s most personal sphere of life. The distribution of child and adolescent pornographic content is also punishable, even if it was generated with the help of AI.
In addition, in certain cases those affected can take civil action against the unauthorized use of their image.
The structural protection gap in Germany
However, there are significant gaps. Deepfakes, for example, do not misuse a ‘real’ image; they generate a deceptively realistic but entirely fictitious scene - a situation that existing offences, written with real recordings in mind, do not clearly cover.
Specifically, German law so far lacks an independent criminal offence that:
- explicitly criminalises the production of sexualised deepfakes without consent (regardless of distribution);
- recognises identity theft for sexualised purposes in the digital space as an independent act;
- takes due account of the specifically digital nature of these acts - deception, remote impact, loss of control over one’s own digital image.
Now is the time to act
Fernandes' case has sparked a long overdue social and political debate. It shows that digital sexualised violence is not an abstract phenomenon, but has real, profound consequences for those affected - psychologically, socially and professionally. At the same time, it becomes clear: German law has not yet kept up with the digital reality of the 21st century.
Germany needs an independent criminal offence for digital violence that also covers systematic identity abuse for sexualized purposes. At the same time, the rights of those affected must be strengthened - for example, through stronger rights to information from platforms about fake profiles.
The UK shows how platform responsibility can be shaped in concrete terms: providers that do not sufficiently address problematic content face sanctions. Germany should ensure - within the framework of the DSA and beyond it in national law - that reported content involving digital sexualized violence is handled with priority. Nudification apps, i.e. AI tools for creating non-consensual nude images, should also be regulated or banned.
At the same time: Laws alone are not enough. Investigative authorities need sufficient resources, technical know-how and specialised structures to effectively prosecute digital violence.
FEMNET e.V. will expand its work in this area in the future. Under the focus on ‘Digital Violence, Gender Bias and Data Workers’, the association focuses more on the impact of digital technologies on women* and other marginalised groups - both in the Global North and the Global South. The focus is on gender-based violence in the digital space, distortions in AI systems and the often invisible working conditions of click and data workers along digital value chains.
Sources
03.03.2026: Making digital spaces more feminist - FEMNET sets new priorities

Leitgöb-Guzy, Nathalie; Bieber, Ina (2026): Results of the dark-field study ‘Life Situation, Safety and Stress in Everyday Life (LeSuBiA)’ I: Experiences of violence inside and outside of (ex-)partnerships. Published by the Federal Ministry of Education, Family Affairs, Senior Citizens, Women and Youth; the Federal Ministry of the Interior; and the Federal Criminal Police Office.