Illegal trade in AI child sex abuse images exposed
Paedophiles are using artificial intelligence (AI) technology to create and sell life-like child sexual abuse material, the BBC has found.
Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon.
Patreon said it had a "zero tolerance" policy towards such imagery on its site.
The National Police Chiefs' Council said it was "outrageous" that some platforms were making "huge profits" but not taking "moral responsibility".
And GCHQ, the government's intelligence, security and cyber agency, has responded to the report, saying: "Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse material lies in AI-generated content."
The makers of the abuse images are using AI software called Stable Diffusion, which was intended to generate images for use in art or graphic design.
AI enables computers to perform tasks that typically require human intelligence.
The Stable Diffusion software allows users to describe, using word prompts, any image they want - and the program then creates the image.
But the BBC has found it is being used to create life-like images of child sexual abuse, including of the rape of babies and toddlers.
UK police online child abuse investigation teams say they are already encountering such content.
Freelance researcher and journalist Octavia Sheepshanks has been investigating this issue for several months. She contacted the BBC via children's charity the NSPCC in order to highlight her findings.
"Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they're [paedophiles] talking about toddlers," she said.
A "pseudo image" generated by a computer which depicts child sexual abuse is treated the same as a real image and is illegal to possess, publish or transfer in the UK.
The National Police Chiefs' Council (NPCC) lead on child safeguarding, Ian Critchley, said it would be wrong to argue that, because no real children were depicted in such "synthetic" images, no-one was harmed.
He warned that a paedophile could "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child".
Abuse images are being shared via a three-stage process:
- Paedophiles make images using AI software
- They promote the pictures on platforms such as the Japanese picture-sharing website Pixiv
- These accounts include links directing customers to more explicit images, which people can pay to view on sites such as Patreon
Some of the image creators are posting on a popular Japanese social media platform called Pixiv, which is mainly used by artists sharing manga and anime.
But because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is not illegal, the creators use it to promote their work in groups and via hashtags - which index topics using key words.
A spokesman for Pixiv said the company placed immense emphasis on addressing this issue, and that on 31 May it had banned all photo-realistic depictions of sexual content involving minors.
The company said it had proactively strengthened its monitoring systems and was allocating substantial resources to counteract problems related to developments in AI.
Ms Sheepshanks told the BBC her research suggested users appeared to be making child abuse images on an industrial scale.
"The volume is just huge, so people [creators] will say 'we aim to do at least 1,000 images a month,'" she said.
Comments by users on individual images on Pixiv make it clear they have a sexual interest in children, with some users even offering to provide images and videos of abuse that were not AI-generated.
Ms Sheepshanks has been monitoring some of the groups on the platform.
"Within those groups, which will have 100 members, people will be sharing, 'Oh here's a link to real stuff,'" she says.
Different pricing levels
Many of the accounts on Pixiv include links in their biographies directing people to what they call their "uncensored content" on the US-based content sharing site Patreon.
Patreon is valued at approximately $4bn (£3.1bn) and claims to have more than 250,000 creators - most of them legitimate accounts belonging to well-known celebrities, journalists and writers.
Fans can support creators by taking out monthly subscriptions to access blogs, podcasts, videos and images - paying as little as $3.85 (£3) per month.
But our investigation with Octavia Sheepshanks found Patreon accounts offering AI-generated, photo-realistic obscene images of children for sale, with different levels of pricing depending on the type of material requested.
One wrote on his account: "I train my girls on my PC," adding that they show "submission". For $8.30 (£6.50) per month, another user offered "exclusive uncensored art".
The BBC sent Patreon one example, which the platform confirmed was "semi realistic and violates our policies". It said the account was immediately removed.
Patreon said it had a "zero-tolerance" policy, insisting: "Creators cannot fund content dedicated to sexual themes involving minors."
The company said the increase in AI-generated harmful content on the internet was "real and distressing", adding that it had "identified and removed increasing amounts" of this material.
"We already ban AI-generated synthetic child exploitation material," it said, describing itself as "very proactive", with dedicated teams, technology and partnerships to "keep teens safe".
AI image generator Stable Diffusion was created as a global collaboration between academics and a number of companies, led by UK company Stability AI.
Several versions have been released, with restrictions written into the code that control the kind of content that can be made.
But last year, an earlier "open source" version was released to the public which allowed users to remove any filters and train it to produce any image - including illegal ones.
Stability AI told the BBC it "prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material).
"We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes".
As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety.
Jo [full name withheld for security reasons], GCHQ's Counter Child Sexual Abuse (CCSA) Mission Lead, told the BBC: "GCHQ supports law enforcement to stay ahead of emerging threats such as AI-generated content and ensure there is no safe space for offenders."
The NPCC's Ian Critchley said he was also concerned that the flood of realistic AI or "synthetic" images could slow down the process of identifying real victims of abuse.
He explains: "It creates additional demand, in terms of policing and law enforcement to identify where an actual child, wherever it is in the world, is being abused as opposed to an artificial or synthetic child."
Mr Critchley said he believed it was a pivotal moment for society.
"We can ensure that the internet and tech allows the fantastic opportunities it creates for young people - or it can become a much more harmful place," he said.
Children's charity the NSPCC called on Wednesday for tech companies to take notice.
"The speed with which these emerging technologies have been co-opted by abusers is breath-taking but not surprising, as companies who were warned of the dangers have sat on their hands while mouthing empty platitudes about safety," said Anna Edmundson, the charity's head of policy and public affairs.
"Tech companies now know how their products are being used to facilitate child sexual abuse and there can be no more excuses for inaction."
A spokesman for the government responded: "The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children - or face huge fines."