Social media platforms are prioritising profits over keeping our children safe from predators, according to a former police officer turned child safety advocate.
“It’s not if, it’s just when” predators will contact children on these platforms, Raven CEO and co-founder John Pizzuro told Scrolling 2 Death podcast host Nikki Reisberg.
Raven is a non-profit lobbying firm made up of experts and retired law enforcement officers focused on legislative and policy solutions to stop child trafficking and child exploitation.
Pizzuro, who worked as a New Jersey State Police Officer for more than two decades, said social media platforms present a massive risk to children.
“Many parents think that online harms cannot reach their child but unfortunately that is not true,” Reisberg agreed.
The problem starts when children lie about their age to access social media, Pizzuro said.
Children can then “access anyone in the world” on these platforms, he said.
“We went from a small society to a global society,” Pizzuro said.
“We’ve arrested offenders from different countries, from California, all travelling, by the way, to find nine-year-olds, 10-year-olds and 11-year-olds.
“From a parent perspective, I think that’s what really they don’t understand.
“There (are) cases upon cases where you had someone who’s just sitting and talking to someone online, but they will go meet them.”
Predators can quickly form connections with children online by matching their language and talking about their interests, Pizzuro said.
They will go “where your children are”, he said.
In Pizzuro’s experience, predators have been found everywhere from standard social media platforms such as Instagram and Facebook to gaming platforms such as Kick.
“There is not one specific one,” he said.
Part of the challenge is these platforms are “not really moderated”, he said.
“There’s just too many users.”
Predators often have multiple accounts as well, meaning that when one is reported, they can simply switch to another.
The platforms need to do more to target these predators, Reisberg and Pizzuro said.
“Social media platforms are not doing a great job of removing the access between predators and our children,” Reisberg said.
A Harvard study, published earlier this year, found that Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter), and YouTube collectively earned nearly $11 billion in advertising revenue from children on their platforms in the United States in 2022.
Platforms are designed to give users more of the content they are interested in, but their recommendation systems do not differentiate between innocent and predatory content, Pizzuro said.
“The problem is it doesn’t differentiate when it’s trying to protect children,” he said.
“So, if I consume that content, I’m more likely to have that content and I’m more likely to have friends (who make that content) recommended to me.”
Sextortion risks
In another podcast episode, Reisberg spoke with the parent of a child who died after being sextorted.
“By giving him this phone in his bedroom with, not unrestricted access but access to just social media, we were opening a door to let a monster in, the most dangerous people on the planet, into our child’s bedroom,” Reisberg said of what the parent told her.
“It’s not even … it doesn’t even look scary, it looked like a teenage girl, a nice teenage girl.
“So, your child doesn’t have that fear response because they’re being approached by someone who can look like anyone.”
Sextortion schemes have previously been the focus of a 7NEWS Spotlight investigation, featuring the family of a young Australian boy who took his own life after being targeted by scammers.
These scammers, known as Yahoo Boys, target young social media users, most commonly teenage boys, and befriend them before convincing them to send sexually explicit photos.
Generally, they befriend their victims by pretending to be attractive young girls, sometimes using hacked accounts.
Once their victims have sent the explicit photos through, the scammers then demand money to prevent these images from being released.
Pizzuro worries that artificial intelligence (AI) will allow these predators and scammers to create sexualised imagery of children for use in these schemes and for their own personal use.
AI could also be used to increase the reach of predators online by allowing them to target multiple children at once, he said.
Reisberg asked if it was safe, in Pizzuro’s opinion, to share any photos of your children online in this day and age.
“The short answer? No,” Pizzuro said.
Reisberg noted she does not let her children’s school publish photos of them on social media, which Pizzuro said is the safer option, given photos of children can be taken from these pages and manipulated as well.
More information needed
Encryption on apps such as WhatsApp, Facebook Messenger and other social media platforms also hinders the ability of parents, and the platforms themselves, to keep children safe, Reisberg and Pizzuro said.
When encryption is in place, safety teams cannot access conversations to ensure children's safety, and platforms are also shielded from liability for any harm that occurs in those conversations.
“That’s by design, right?” Pizzuro said.
Children’s social media should not be encrypted, he said, adding: “Without better information, we can’t protect children.”
Other experts have previously spoken out with similar concerns.
Cyber intelligence analyst Paul Raffile previously told 7NEWS tech giants need to be held accountable for the safety, or lack thereof, of their platforms.
He feared that government actions, such as raising the minimum age of access to social media as is currently proposed, may absolve organisations of responsibility for ensuring their own platforms are safe.
Australia’s eSafety Commissioner Julie Inman Grant has given the tech industry until October 3 to create enforceable rules to stop children from viewing graphic pornographic content on social media platforms.
She has ordered social media companies, search engines, app stores and gaming sites to individually come up with industry codes that will prevent children from unintentionally seeing inappropriate material.
“It requires every single sector to do something to put some robust and reasonable protections in place so that there isn’t a single point of failure so that our children are protected at each level,” she said.
The watchdog wants safeguards including age checks, tools to filter or blur unwanted sexual content, and parental controls.
She says she will impose mandatory standards if the industry codes presented are not strong enough.
Social media companies are also now facing pressure to reveal how many Australian children are using their platforms.
Google, Facebook’s parent company Meta, TikTok, Reddit, Discord, Twitch and Snap must answer a series of questions from the online safety watchdog about the number of children on their platforms and the age assurance measures used to prevent access by under-age users.
Inman Grant said legally imposed age limits are on the table, but noted the online sphere offered some benefits to teenagers and said more must be understood about the potential effectiveness and unintended consequences of any restrictions.
The federal government has provided $6.5 million for a pilot program of age-assurance technology, but Prime Minister Anthony Albanese has said any age requirements must be proven to work.
Opposition Leader Peter Dutton has vowed to ban children under 16 accessing social media should the coalition win the next election.
– With AAP