‘It’s OK, everyone else is doing it’: how do we deal with the role social media played in UK riots?

Among those swiftly convicted and sentenced last week for their part in the racist rioting was Bobby Shirbon, who had left his 18th birthday party at a bingo hall in Hartlepool to join the mob roaming the town’s streets, targeting houses thought to be occupied by asylum seekers. Shirbon was arrested for smashing windows and throwing bottles at police. He was sentenced to 20 months in prison.

In custody, Shirbon had claimed that his actions had been justified by their ubiquity: “It’s OK,” he told officers, “everyone else is doing it.” That has, of course, been a consistent claim from those caught up in mass thuggery down the years, but for many of the hundreds of people now facing significant prison sentences, the “defence” has a sharper resonance.

Shirbon was distracted from his birthday celebration by alerts on his social media. Some of this was perhaps disinformation about the tragic events in Southport; but attached to and embedded within it would have been the snippets and clips of video that quickly became the contextless catalyst of the spreading violence.

Bobby Shirbon left his birthday party in Hartlepool to go to the scene of a riot after receiving social media alerts.
Photograph: Cleveland Police/PA

Anyone with a phone will probably have viewed those clips with mounting horror last week – the video of racists stopping cars at makeshift checkpoints in Middlesbrough; of the lone black man being set upon in a park in Manchester; of the drinker outside a pub in Birmingham assaulted by a gang intent on retribution. Visceral evidence of violence – a real-time sense of barbarity suddenly normalised – is, for some, the essential spark to get out on the streets: “everyone else is doing it”. In that sense, most of us now carry the triggers for Kristallnacht in our pockets.

In the course of the last week, I read through that quaint document from another era, the many pages of the BBC’s rigorous guidelines on the depiction of violence. It is worth reminding ourselves of what, for our national broadcaster, is allowable: “When real-life violence is shown,” the guidelines state, “we need to strike a balance between the demands of accuracy and the dangers of causing unjustified distress”. Particular editorial care must be taken with “violence that may reflect personal experience, for example domestic violence, pub brawls, football hooliganism”, and “we must ensure that verbal or physical violence that is easily imitable by children … is not broadcast in pre-watershed programmes”.

There is no watershed, of course, on social media. Nor any effort, in the search for anonymous clicks, to strike a balance between accuracy and distress. Quite the opposite. Whole YouTube channels and X accounts with hundreds of thousands of followers are dedicated to providing a steady, daily stream of the most graphic gang fights and school fights and road rage from across the world. One of the first things that Elon Musk promoted when he bought Twitter – after firing most of its moderators – was a facility that allowed users to swipe up for an automatic stream of video content. He was inundated with complaints from people who found themselves inadvertently confronted with scenes of beatings and murder.

A couple of years on, if you showed an interest in the events of last week, you would probably find your timeline immediately filled with the most disturbing snippets of violence – including footage of an unrelated machete fight in Southend – framed in the most incendiary terms by political agitators (not least Musk himself, who seemed intent on promoting the idea of a British “civil war” to his 193 million followers).

Elon Musk seems intent on promoting the idea of ‘civil war’ on his own social media platform, X. Photograph: Julia Nikhinson/AP

There is a reason that, in independently regulated broadcast media, images and films of such events are required to be contextualised and pixelated, and drip-fed into the news. More powerfully than thousands of words of reporting, those images saturate our imaginations. The unregulated flow of them, chosen for their graphic nature, shared for outrage or LOLs, has consequences that come as no surprise to those who have been studying the issue most closely.

Dr Kaitlyn Regehr is a co-author of a large-scale study, Safer Scrolling, published this year, into how social media “gamifies” hate and misogyny in young people. She suggests: “The simple fact is that social media companies are in the business of selling attention. There have been numerous whistleblowers who have come out from these companies and also research, including my own, that points towards the fact that algorithms prioritise harm and disinformation because it is much more exciting and attention-gripping than the truth.”

Keir Starmer has in recent days talked about how the forthcoming Online Safety Act, due to come into force next year, may need strengthening in light of last week’s events. Regehr, who advised on the legislation, is in no doubt: “This is not an argument about free speech. We are talking about the way in which content is algorithmically distributed and fed and prioritised. There’s millions and millions of posts, and the algorithm decides the 100 that we see.” Regulators, she suggests, at the very least need to understand how those algorithms work.
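
To make the mechanism Regehr describes concrete, here is a minimal sketch in Python of engagement-ranked feed selection. Every name and the scoring step are assumptions made for illustration; no platform’s actual ranking code is public in this form, and this stands for the general technique she criticises, not any specific system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model-estimated clicks, shares, watch time

def rank_feed(candidates: list[Post], n: int = 100) -> list[Post]:
    # From millions of candidate posts, return the n a user actually sees,
    # ordered purely by predicted engagement - with no weighting at all for
    # accuracy or potential harm. This is the dynamic Regehr describes:
    # the algorithm, not the user, decides which 100 posts surface.
    return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)[:n]

A regulator seeking to understand how these algorithms work, as Regehr suggests, would in effect be asking for sight of exactly this scoring and selection step.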

Regehr agrees that it would be valuable, in this context, to take note of the recent social media feeds of those convicted of racist violence last week, to see the patterns in what they were viewing. “We need to make that link clearer to legislators and to the general public,” she says, so that “this can be understood as a much more generalised, systemic problem, which I think is reaching an existential crisis”.

This crisis is generally discussed as one of deliberate disinformation; research suggests that framing neglects a critical component – the way in which that disinformation is routinely attached to the most graphic video content.


For the past seven years, Shakuntala Banaji, professor of media culture and social change at the London School of Economics, has worked with researchers studying the ways that the sharing of short-form video clips has been a contributing factor in racial violence, lynching and pogroms across the world. “We watch a lot of TikToks,” Banaji says. “We’ve watched a lot of Instagram reels. And we’ve all had to go into therapy after … It is absolutely degrading and repulsive.”

The group collects and studies the effect of thousands of video clips of the kind that were spread last week: vicious street attacks with very little contextualisation, or with contextualisation which is deliberately false. The work has produced some surprising findings. One is that the audience most susceptible to this content is not teenagers and young adults but middle-class, middle-aged viewers.

The deliberate narrowness of political context is critical. “What was really, really interesting to us was that in some countries there was the same kind of graphic content circulating, but it didn’t result in street violence,” Banaji says. The key component in the places where racist violence occurred, she suggests, was the political framing of the material. “In India, in Myanmar, in Bolsonaro’s Brazil and in the UK after Brexit – where we saw a massive increase in Islamophobic attacks – the crucial difference was not the position that the government took in trying to regulate the internet, but in its tone towards the groups who were being targeted.”

Banaji’s research concludes that there is a “sort of triangle … in what makes this so dangerous. Only part of it is the content of the media. As important is, first, how the violence is captioned and edited and, second, what the mainstream media and politicians are saying about that content, tacitly or explicitly.” In these terms, she believes attempts to police these platforms, particularly by political figures who also seek to use them to stoke division, can only ever be counterproductive. Fully independent regulation, allied with political rhetoric that rejects racism and incendiary commentary, would slowly take power away from the algorithms.

Regehr agrees that such changes cannot come soon enough. “Almost everything else we consume, including terrestrial TV and legitimate journalism and food and drugs and medications, are regulated,” she says. “Yet social media remains an unregulated space. I think we hide behind this idea that the technology is still new, that we’re still working it out. But the world wide web launched 30 years ago. For almost half the population, they’ve never lived without this.”

The consequences, last week, were all around us.
