Britain is set to become the first country in the world to criminalize the use of AI to create child sexual abuse material (CSAM), setting a global precedent for tackling AI-generated exploitation. The new laws will make it illegal to possess, create, or distribute AI-generated explicit images of children. These offenses will be included in the upcoming Crime and Policing Bill.
Existing laws in England and Wales already criminalize the possession, creation, and sharing of explicit images of minors. However, as AI tools have advanced, online predators have begun using software to “nudify” real images of children or generate realistic abuse material. Reports of AI-generated CSAM surged nearly fivefold in 2024, according to the Internet Watch Foundation (IWF).
Home Secretary Yvette Cooper emphasized the urgency of addressing AI-driven exploitation. “Predators who exploit AI often escalate to real-world abuse. Tackling these emerging threats is essential to safeguarding children,” she stated.
Severe Penalties for Offenders
The new legislation introduces strict penalties for those involved in AI-generated CSAM. Offenders found guilty of creating, possessing, or distributing such content could face up to five years in prison. Another key provision criminalizes the possession of AI-generated “paedophile manuals,” which offer instructions on using technology to exploit minors. Possession of these manuals will carry a sentence of up to three years.
The law will also target individuals operating websites that facilitate the distribution of child abuse material. Those found guilty of running such platforms could face up to 10 years behind bars.
Border Force officers will be granted new powers to inspect the digital devices of individuals suspected of posing a risk to children. Authorities can demand access to phones, laptops, and other digital devices at the UK border. Depending on the severity of the content found, offenders could face up to three years in prison.
AI and the Normalization of Child Abuse
Artificially generated CSAM includes both completely synthetic images and manipulated photographs. AI tools can alter real photos by adding explicit elements or replacing faces, creating realistic but fabricated images. Some cases even involve using a child’s real voice, further victimizing survivors.
Law enforcement officials warn that exposure to such material can encourage offenders to commit physical abuse. The National Crime Agency (NCA) reports that it makes approximately 800 arrests per month related to online child exploitation. An estimated 840,000 adults in the UK pose a risk to children, constituting 1.6% of the adult population.
Calls for Stronger Action
While experts welcome the new measures, some believe additional steps are necessary as AI technology advances. Professor Clare McGlynn, an expert in legal regulations on pornography and online abuse, pointed out gaps in the legislation. She urged a ban on “nudify” apps and called for action against mainstream adult websites that feature content portraying young-looking actors in childlike settings. Such material, she argued, normalizes child abuse.
The IWF reports a sharp rise in AI-generated CSAM. Its latest data shows a 380% increase in reports, with 245 confirmed cases in 2024 compared to 51 in 2023. Each case can involve thousands of images. The organization also discovered 3,512 AI-generated child abuse images on a single dark web platform within a month.