AI and Child Sexual Abuse

With the Internet Watch Foundation's recent report on AI being used to create child sexual abuse material (CSAM), we have to wonder where all this is going and what we need to be doing in schools.

Their latest report highlights an emerging concern which poses unique challenges for safeguarding professionals, law enforcement, and policymakers due to its distinct nature and rapid proliferation. The report’s findings are based on a detailed analysis of AI-generated images on a dark web CSAM forum, revealing both the extent of this problem and the complexities involved in addressing it. Observations include:

  • AI-generated content is a small but rapidly growing part of IWF’s work.
  • Tools for creating these images are legally available and can be used offline.
  • AI CSAM is increasingly realistic, often indistinguishable from actual CSAM.
  • There’s evidence of re-victimisation of known abuse victims and the targeting of famous children.
  • AI CSAM is beginning to be used for commercial exploitation.

The report, together with other news stories of offenders ‘declothing’ children using AI and of students using AI to sexualise, bully and harass their peers, raises many complex questions for schools and for parents. But while the questions are many, at the time of writing there are no answers.

As always, education will play a key role: age-appropriate teaching for students, together with awareness-raising for staff and parents.

This 8-minute online safety video gives school staff a concise explanation to raise awareness of the issue. It is part of my online safety subscription service, where schools receive two updates every half-term: a short video for staff and a PDF newsletter for parents, covering a wide range of online safety topics.

Would you like online safety updates in your email inbox every Wednesday? Sign up below: no marketing, no spam.