UK Technology Firms and Child Protection Officials to Examine AI's Ability to Generate Abuse Images

Technology companies and child safety organizations will receive authority to assess whether artificial intelligence tools can generate child abuse images under new UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement came alongside revelations from a protection watchdog showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will permit designated AI developers and child protection groups to inspect AI systems – the foundational technology for conversational AI and image generators – and verify they have adequate protective measures to prevent them from creating images of child sexual abuse.

"Ultimately this is about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify risks in AI systems early."

Tackling Legal Obstacles

The amendments have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Until now, officials could act only after AI-generated CSAM had been published online.

This legislation is designed to avert that problem by making it possible to stop the creation of those images at their source.

Legislative Framework

The changes are being introduced as amendments to criminal justice legislation, which also brings in a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

Recently, the minister toured the London base of Childline and heard a simulated call to counsellors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after facing extortion over a sexualised deepfake of themselves, constructed using AI.

"When I hear about children experiencing extortion online, it causes extreme frustration in me and justified anger among parents," he stated.

Concerning Statistics

A leading internet monitoring organization stated that instances of AI-generated abuse content – including webpages that may each contain numerous images – had increased significantly so far this year.

Cases of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are launched," commented the chief executive of the internet monitoring foundation.

"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving offenders the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies survivors' suffering, and makes children, particularly girls, less safe both online and offline."

Counseling Session Data

The children's helpline also released details of counselling interactions where AI has been mentioned. AI-related risks mentioned in the sessions include:

  • Using AI to evaluate body size and appearance
  • Chatbots dissuading young people from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked images

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, conversational AI and related topics were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.
