Violent online content ‘unavoidable’ for UK children, Ofcom finds

Violent online content is now “unavoidable” for children in the UK, with many first exposed to it when they are still in primary school, research from the media watchdog has found.

Every single British child interviewed for the Ofcom study had watched violent material on the internet, ranging from videos of local school and street fights shared in group chats, to explicit and extreme graphic violence, including gang-related content.

Children were aware that even more extreme material was available in the deeper recesses of the web, but had not sought it out themselves, the report concluded.

The findings prompted the NSPCC to accuse tech platforms of sitting back and “ignoring their duty of care to young users”.

Rani Govender, a senior policy officer for child safety online at the NSPCC, said: “It is deeply concerning that children are telling us that being unintentionally exposed to violent content has become a normal part of their online lives.

“It is unacceptable that algorithms are continuing to push out harmful content that we know can have devastating mental and emotional consequences for young people.”

The research, conducted by the Family, Kids and Youth agency, is part of Ofcom’s preparation for its new responsibilities under the Online Safety Act, passed last year, which handed the regulator the power to crack down on social networks that are failing to protect their users, particularly children.

Gill Whitehead, Ofcom’s online safety group director, said: “Children should not feel that seriously harmful content – including material depicting violence or promoting self-injury – is an inevitable or unavoidable part of their lives online.

“Today’s research sends a powerful message to tech firms that now is the time to act so they’re ready to meet their child protection duties under new online safety laws. Later this spring, we’ll consult on how we expect the industry to make sure that children can enjoy an age-appropriate, safer online experience.”

Almost every leading tech firm was mentioned by the children and young people interviewed by Ofcom, but Snapchat and Meta’s apps Instagram and WhatsApp came up most frequently.

“Children explained how there were private, often anonymous, accounts existing solely to share violent content – most commonly local school and street fights,” the report says. “Nearly all of the children from this research who had interacted with these accounts reported that they were found on either Instagram or Snapchat.”

“There’s peer pressure to pretend it’s funny,” one 11-year-old girl said. “You feel uncomfortable on the inside, but pretend it’s funny on the outside.” Another 12-year-old girl described feeling “slightly traumatised” after being shown a video of animal cruelty: “Everyone was joking about it.”

Many older children in the research “appeared to have become desensitised to the violent content they were encountering”. Professionals also expressed particular concern about violent content normalising violence offline, and reported that children tended to laugh and joke about serious violent incidents.

On some social networks, the exposure to graphic violence comes from the top. On Thursday, Twitter, now known as X after its purchase by Elon Musk, took down a graphic clip purporting to show sexual mutilation and cannibalism in Haiti after it had gone viral on the social network. The clip had been reposted by Musk himself, who tweeted it at news channel NBC in response to a report by the channel that accused him and other rightwing influencers of spreading unverified claims about the chaos in the country.

Other social platforms provide tools to help children avoid violent content, but offer little help in using them. Many children, some as young as eight, told the researchers that it was possible to report content they did not want to see, but there was a lack of trust that the system would work.

For private chats, they were concerned reporting would mark them out as “snitches”, leading to embarrassment or punishment from peers, and they did not trust that platforms would impose meaningful consequences for those who posted violent content.

The rise of powerful algorithmic timelines, such as those of TikTok and Instagram, added an extra twist: children shared a belief that if they spent any time on violent content (for example, while reporting it), they would be more likely to have similar content recommended to them.

Professionals in the study voiced concern that violent content was affecting children’s mental health. In a separate report released on Thursday, the children’s commissioner for England revealed that more than 250,000 children and young people were waiting for mental health support after being referred to NHS services, meaning one in every 50 children in England is on the waiting list. For the children who accessed support, the average waiting time was 35 days, but in the last year nearly 40,000 children experienced a wait of more than two years.

A Snapchat spokesperson said: “There is absolutely no place for violent content or threatening behaviour on Snapchat. When we find this type of content, we move quickly to remove it and take appropriate action on the offending account.

“We have easy-to-use, confidential, in-app reporting tools and work with the police to support their investigations. We support the aims of the Online Safety Act to help protect people from online harms and continue to engage constructively with Ofcom on the act’s implementation.”

Meta has been contacted for comment. X declined to comment.
