A team of researchers from the University of Cambridge has called for tighter regulations on toys fitted with AI capabilities, following a study of how children play with them.
The research team examined the interactions of a small sample of toddlers with an AI-powered toy called Gabbo. Fitted with a voice-controlled chatbot built on OpenAI technology, Gabbo listens to children and responds when spoken to.
According to the researchers, the responses offered by the AI model could lead to confusion during the early social development of young children.
As well as noting that the toy found it difficult to understand and register children's voices, the researchers warned that its responses to the children's more emotional phrases were either dismissive or confusing.
Discussing the study on BBC News, co-author Prof Jenny Gibson said: "There's a lot of attention historically to physical safety – we don't want toys where you can pull the eyes off and swallow them. Now we need to start thinking about psychological safety too."
The report encouraged parents who have purchased AI toys for their children to allow their use only under supervision, and more broadly called for stricter testing and regulatory standards to be set for these products before they can be sold.
Commenting on the situation, RAIDS AI co-founder Nik Kairinos told UKTN: “When it comes to our children, the stakes for AI safety could not be higher.
“The Cambridge study highlights exactly what happens when AI systems interact with vulnerable users without adequate oversight. A toy that dismisses a child’s sadness or responds to affection with a compliance warning isn’t malfunctioning in the traditional sense; it’s behaving unpredictably within its design parameters. That’s the kind of behavioural anomaly that continuous, independent monitoring is built to catch.
“Regulation is necessary but not sufficient. Pre-deployment testing simply cannot anticipate every interaction a three-year-old will have with a generative AI system.”