UK Tech Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content
Under recently introduced British legislation, technology companies and child protection organizations will be permitted to test whether artificial intelligence systems can produce child abuse images.
Significant Increase in AI-Generated Harmful Material
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have risen sharply in the past year, more than doubling from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow approved AI developers and child protection groups to inspect AI models – the underlying systems behind chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent them from creating depictions of child exploitation.
"Ultimately about stopping exploitation before it happens," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now identify the danger in AI systems early."
Tackling Legal Obstacles
The amendments were needed because it is illegal to create or possess CSAM, which meant AI developers and other parties could not generate such content even as part of a testing regime. Previously, officials could act only after AI-generated CSAM had been uploaded online.
This legislation is designed to prevent that problem by helping to stop the creation of such images at source.
Legal Structure
The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a ban on possessing, creating or sharing AI models designed to generate exploitative content.
Practical Impact
This week, the minister toured the London headquarters of Childline and heard a simulated call to advisors featuring an account of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself created using AI.
"When I hear about young people experiencing extortion online, it is a cause of intense anger in me and justified anger amongst families," he stated.
Alarming Data
A leading online safety organization said that reports of AI-generated exploitation content – where a single report may cover a webpage containing multiple images – had risen significantly so far this year.
Cases involving the most severe category of material increased from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a vital step to guarantee AI tools are secure before they are released," commented the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a simple actions, giving criminals the capability to make potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Material which additionally commodifies survivors' suffering, and makes young people, especially girls, less safe both online and offline."
Support Interaction Information
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Employing AI to evaluate body size, physique and looks
- Chatbots discouraging young people from consulting trusted adults about harm
- Facing harassment online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.