Updates
- Safer Internet Day 2024 – resources now available.
- NSPCC Report – Online Risks to Children.
- Report Harmful Content – button for your school website.
- AI CSAM.
- For parents – what are social media scams?
Safer Internet Day 2024 – Resources Now Available
With SID 2024 fast approaching (6th February 2024), the UK Safer Internet Centre have released their free resources for all age ranges (3–18 year olds) for you to use in school. These resources include lesson plans, assemblies and information to send home to parents.
All resources can be downloaded HERE.
NSPCC Report – Online Risks to Children
The NSPCC have released their latest evidence review of online harms to children, spanning the period 2017–2023, with a strong focus on online sexual risks and the role of technology. The 83-page report goes into considerable detail on the nature and scale of online risks, the platforms used, the children most likely to be exposed to risk, and children's responses to encountering those risks.
You can find the full review HERE, and there is an executive summary, which may be useful for a staff briefing, HERE.
Report Harmful Content – Button for your School Website
The Report Harmful Content service is an easy way for students over the age of 13 in the UK to report harmful content when a report has already been made to industry (e.g. a social media or gaming platform) and no action has been taken. I have reported many things to social media companies which I believed to be illegal; the large majority have never received a response. This is really frustrating, and more so for a young person who is being targeted.
There is lots of information on the website that you can share with students, and helpfully there is a ‘Report Harmful Content’ button which you can place on your own website so that students don’t have to search for it. Full details can be found HERE.
AI CSAM
I make no apologies for going on about this for a few weeks now, but the stark reality is that students being targeted by offenders, and students creating AI-generated CSAM of other students, are now real issues being reported by schools worldwide, including here in the UK (see a short article in The Guardian HERE).
Very simplistically, two types of incidents are being seen:
- Offenders (often based in other countries) de-clothing innocent images that students have posted online, e.g. on social media. These images are then used for blackmail, often with the threat of sharing them with school friends or family. This is sextortion.
- Students taking innocent images of other students (e.g. posted on social media), de-clothing them using AI, then sharing them with other students.
So what’s the advice?
Firstly, and most importantly, any AI CSAM image or video must be treated as real, regardless of the circumstances. Forget the fact that it is AI-generated: treat it as real media and follow your standard safeguarding policy and procedures, plus the guidance for schools – Sharing Nudes and Semi-Nudes.
Secondly, talk to students and explain (in a balanced, age-appropriate manner) what is happening. Explain the legal position, and reiterate the importance of keeping accounts private where possible and of thinking carefully about what they post online. You have to be careful with this one, as it can unintentionally come across as victim-blaming.
Make sure you give students avenues for support: a member of staff to talk to, another trusted adult (e.g. a parent or carer) and, really importantly, the Childline/IWF Report Remove tool. This last one matters because many students who are victims will not report to an adult, for a variety of reasons. The IWF have confirmed that they will respond to and deal with AI-generated imagery in the same way they would real imagery.