An Ofcom report has revealed that a large proportion of children as young as five are using social networks and other online platforms without supervision.
Ofcom’s annual study of children’s relationship with the media and online world has revealed that around a quarter of those aged five to seven own a smartphone, while more than three-quarters use a tablet.
Almost a third of parents of five-to-seven-year-olds said their child uses social media independently, with three in ten admitting they are likely to allow their child to have a social media profile before they reach the minimum age required. Almost half (48%) of five-to-seven-year-olds have personal profiles on YouTube or YouTube Kids.
However, about three-quarters of parents of those aged five to seven said they have discussed with their child how to stay safe online.
Overall, the use of social media by this age group has increased by 8% year on year, with WhatsApp seeing the biggest annual growth. Online gaming has also seen an annual increase of 7%, with those playing shooting games reaching a record high following a 15% rise over the past year.
The report also revealed a gap in communication between parents and older children – those aged eight to 17 – about exposure to harmful content. A third of this age group said they had seen harmful content over the last year, yet only one in five parents said their child had told them about seeing scary or upsetting content online.
Girls in this age group were also more likely than boys to experience hurtful interactions online. More than nine in ten children aged eight to 17 said they had received at least one lesson on online safety at school, yet only three in ten said they had regular online safety lessons.
The findings come as Ofcom announces an additional area of focus for child safety, building on the measures set out in its draft Children’s Safety Code of Practice – on which the public will be invited to provide feedback in a consultation beginning next month. The regulator also plans to conduct a consultation later this year on how automated tools, including artificial intelligence, can help detect illegal content and content harmful to children.
When the Online Safety Act passed into law in October, Ofcom formally took on responsibility for overseeing the implementation – and enforcement – of the legislation. The watchdog, which also regulates the broadcast and telecoms sectors, has indicated that the measures set out in the new laws will be rolled out incrementally, with the regulator intending to implement the regime in the most harmful areas first.
The regulations – which require online platforms to take “proportionate measures to effectively mitigate and manage the risks of harm from illegal content” – give Ofcom the power to fine any firm that breaches the rules up to £18m or 10% of its worldwide turnover, whichever is greater. In the case of Twitter, this could equate to almost £300m, while Facebook could theoretically be hit with a near-£11bn penalty.
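For illustration only, the short sketch below shows how that penalty cap works out as the greater of £18m and 10% of worldwide turnover; the turnover figures used are hypothetical round numbers back-calculated from the £300m and £11bn examples above, not figures from Ofcom or the companies.

```python
# Illustrative sketch only: hypothetical turnover figures, back-calculated from
# the article's £300m and £11bn examples; not taken from Ofcom or the companies.

FIXED_CAP_GBP = 18_000_000   # £18m fixed element of the maximum penalty
TURNOVER_SHARE = 0.10        # 10% of qualifying worldwide turnover

def maximum_fine(worldwide_turnover_gbp: float) -> float:
    """Maximum fine: £18m or 10% of worldwide turnover, whichever is greater."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * worldwide_turnover_gbp)

# Roughly £3bn turnover implies a cap of about £300m (the Twitter figure above)
print(f"£{maximum_fine(3_000_000_000):,.0f}")    # £300,000,000
# Roughly £110bn turnover implies a cap of about £11bn (the Facebook figure above)
print(f"£{maximum_fine(110_000_000_000):,.0f}")  # £11,000,000,000
```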
PublicTechnology revealed last week that Ofcom has acquired tranches of “sensitive material” to test the capabilities of the automation technology used by such platforms to detect illegal or harmful content.