British Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Content
Tech firms and child safety agencies will be granted permission to evaluate whether AI systems can produce child exploitation images under recently introduced UK legislation.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with figures from a safety watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will allow approved AI companies and child protection organisations to examine AI models – the technology underpinning chatbots and image-generation tools – and verify that they have sufficient safeguards to stop them creating depictions of child sexual abuse.
"Fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the risk in AI systems promptly."
Tackling Legal Obstacles
The changes are needed because producing and possessing CSAM is illegal, which has meant that AI developers and others could not create such content even as part of a testing process. Until now, authorities have had to wait until AI-generated CSAM appeared online before they could act.
The legislation aims to close that gap by helping to stop the production of such material at its source.
Legislative Framework
The government is introducing the changes as amendments to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate exploitative content.
Practical Impact
This week, the minister visited Childline's London base and listened to a simulated call to advisers involving a report of AI-related abuse. The role-play depicted a teenage boy seeking help after being blackmailed with an AI-generated sexualised deepfake of himself.
"When I hear about children facing blackmail online, it is a cause of intense frustration in me and justified anger amongst families," he stated.
Concerning Data
A prominent online safety organisation reported that cases of AI-generated exploitation content – where each case may refer to a webpage containing multiple images – have risen sharply so far this year.
- Category A content – the most serious form of abuse – rose from 2,621 images or videos to 3,086
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to guarantee AI tools are secure before they are released," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a simple actions, providing offenders the capability to make potentially endless amounts of advanced, lifelike child sexual abuse material," she added. "Material which further commodifies victims' suffering, and makes young people, particularly female children, less safe both online and offline."
Counselling Session Data
Childline has also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to evaluate body size, physique and looks
- Chatbots dissuading children from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.