At the beginning of September the NHS urged young people to stop using AI chatbots as a substitute for therapy (link HERE) due to the potential for harmful and dangerous advice. The NHS is absolutely right to warn people, but when it’s almost impossible to get the help and support you need, people will turn elsewhere, and for many that elsewhere will be an AI chatbot.
The warning from the NHS follows research that found more than 17 million TikTok posts about using ChatGPT as a substitute for therapy, and a YouGov poll in which around a third of 18-24 year olds said they would be comfortable discussing mental health issues with an AI chatbot instead of a real therapist. Personally I’ve found the same with children from Year 5 upwards, albeit in much smaller numbers.
It’s vitally important we talk to children and young people about the incredible positives of AI and balance that with the shortcomings and risks. If you haven’t already seen it, Internet Matters released a fantastic report called ‘Me, Myself and AI’ which gives information on the ways in which children are using AI chatbots and will help with those important conversations. The link to the report is HERE.
Fake Celebrity Chatbots – Harmful Content
Following on from the update above, chatbots continue to hit the headlines, with more and more research highlighting very significant concerns. The Center for Countering Digital Hate (CCDH) released a major report in August (link HERE) detailing a large-scale safety test of ChatGPT, with some alarming findings relating to mental health, eating disorders and substance abuse: over half of the 1,200 responses from ChatGPT gave harmful advice.
Following on from that, in September Sky News reported on research carried out by ParentsTogether and Heat Initiative into the CharacterAI bot: during 50 hours of testing with accounts registered to children aged 13-17, they found almost 700 harmful/dangerous interactions. Some of the bots on CharacterAI replicate a real or fictional person (e.g. a movie star), adding an extra layer of realism and complexity.
There is a significant parasocial relationship concern here: if children and young people form a relationship with a bot, and that bot is giving harmful or dangerous advice, the consequences don’t need explaining.
This is definitely one to be aware of and perhaps share with parents. The link to the Sky News article is HERE.