Terrorists know they have the upper hand on social media
Posted on March 20th, 2019

Sanjana Hattotuwa, courtesy of Stuff

GETTY IMAGES: Prime Minister Jacinda Ardern says the live streaming of the Christchurch killings is an issue she will take up with Facebook, writes Sanjana Hattotuwa.

OPINION: Coming out of a long meeting, the first I heard of the violence in Christchurch was from those in Sri Lanka who had received breaking news alerts. I was both deeply disturbed and intensely intrigued.

Terrorism as popular theatre or spectacle is not new; some academics argue it is a central aim of terrorists, who want their acts recorded and relayed, not redacted or restrained.

The use of social media to promote and incite hate, violence and prejudice is also not new. From ISIS to politicians elected into office through populist, prejudiced campaigns, social media is foundational in contemporary terrorist recruitment and political propaganda.

What the events in Christchurch last Friday brought to light was something entirely different and new, and very unlikely to be resolved easily or quickly. The killer’s intentional use of the internet will have far-reaching implications, requiring significant and urgent reform of the governance of large social media platforms, as well as oversight mechanisms, including regulation, for their parent companies.

Though Facebook New Zealand, Google and Twitter all issued statements hours after the attack saying they were working with the New Zealand Police to take down content associated with it, the content had by then spread far and wide across the web.

The video moved from platform to platform: edited, freeze-framed, downloaded off public streams that risked being taken down, then re-uploaded to private servers, which in turn served the video to thousands more.

As Washington Post journalist Drew Harwell noted, "The New Zealand massacre was live-streamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react".

The challenge is significant because of the scale of the platforms, with billions of users each creating or consuming vast amounts of content every second. Moderation is now largely algorithmic, because only machines can cope with the scale and scope of content produced every second. There are serious limitations to this approach.

Terrorists know this and increasingly exploit it, weaponising the unending global popularity of social media to seed and spread their ideology in ways that no existing form of curtailment, containment or control can remotely compete with. That is partly because the way the algorithms tasked with overseeing content are trained is entirely opaque.

It is entirely probable that algorithms trained to detect signs of radical Islamic terrorism are incapable of flagging a similar violent ideology or intent promoted in English, anchored to the language and symbolism of white supremacism or fascism.

In March 2018, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer noted that the company was using artificial intelligence (AI) to police its platform, and that it was "fairly effective" in distinguishing and removing "gore and graphic violence".

Last Friday’s killings highlight the risible falsity of this claim. Hours after the killings, dozens of videos featuring the same grisly violence as the original live stream were on Facebook.

One had generated 23,000 views an hour, with nearly 240,000 people seeing it. Though Facebook notes it blocked 1.5 million videos from being uploaded in the days after the killings, it has tellingly withheld statistics on how many people the original live stream reached, and on why 300,000 related videos were not identified soon after upload – which means they too were viewed, even if only for a short time, by hundreds of thousands.

And this isn’t the first time graphic, wanton violence has resided on the platform for hours before being taken down, by which time the strategic aim and intention of its producers had been met.

123RF: The use of social media to promote and incite hate, violence and prejudice is also not new, writes Sanjana Hattotuwa.

The problem doesn’t end there. Neal Mohan, YouTube’s Chief Product Officer, is on record saying that Christchurch brought the company’s moderation and oversight to its knees.

It was unable to deal with the tens of thousands of videos showing the grisly killings spawned across its platform – one upload every second at the peak. In two moves unprecedented for the company, prompted by the severity of the challenge, his team decided to disable the search functionality that lets users find recent uploads, and to bypass human moderation entirely, removing content its algorithms flagged as possibly linked to the violence in Christchurch even at the risk of false positives.

Mohan has no final fix. The company just has no better way – even in the foreseeable future – to deal with another incident of this nature. Terrorists simply have the upper hand.

The Christchurch killer knew this and used it to his advantage. He won’t be the last. The appeals to internet subcultures and famous personalities, the memes, the very choice of music, expressions, gestures and popular references form a new argot of communication, intentionally designed to use online cultures to amplify and promote violent ideology (a process called red-pilling).

At the same time, malevolent producers can almost entirely bypass existing controls and checks on the distribution of such material. The scale of social media is the hook; the inability to oversee it and the inadequacies of its governance are what is weaponised. Academics call this a wicked problem – a challenge so hard that even partial responses to any single aspect or facet increase the complexity, often exponentially.

Generating greater friction around the production, promotion and spread of content is not in the interests of social media companies, which will continue to maintain – not without some merit – that billions of users producing vast amounts of mundane yet popular content daily is what primarily drives research and development. Read: profits.

Not without some irony, Facebook’s Chief Operating Officer Sheryl Sandberg wrote a glowing tribute to New Zealand’s Prime Minister in 2018 for Time magazine’s list of the 100 ‘Most Influential People’. After Prime Minister Jacinda Ardern noted that the live streaming of the grisly killings was an issue she would take up with the company – and perhaps mortified that the incident will strengthen calls for more robust regulation in the US – Sandberg reached out, though it is unclear with what intent or assurances.

This rough sketch of the context in which I locate my doctoral studies masks far greater complexity, anchored to community, culture, context and country. What is true of social media in Sri Lanka, my home and the central focus of my research, doesn’t always hold sway in New Zealand. There are, however, strange parallels.

Repeated warnings since 2014 around the weaponisation of Facebook in Sri Lanka to incite hate and violence went entirely unacknowledged by the company until severe communal riots almost exactly a year ago.

In Myanmar, the company’s platforms were flagged by the United Nations as having helped produce, normalise and spread violence against Muslims.

Until 2018, the company did little to nothing to address this, despite warnings and ample evidence from local organisations. YouTube’s recommendation engine – the crucial algorithm that surfaces content that may interest you – has long and openly been flagged as deeply problematic, beguilingly guiding users towards far-right radicalisation.

The Christchurch killer’s release of a document before his rampage shows an acute understanding of how all this works: transforming tired conspiracy into highly desirable material through strategic distribution just before an act that serves as an accelerant to infamy.

Alex Stamos, the former Chief Security Officer at Facebook, posted in the aftermath of Christchurch a sobering reminder of just why this violence goes viral. He notes that the language used, the links provided and even the excerpts of the violent video broadcast by news media only served to pique interest in the killer’s original document and full video. This is a disturbing information ecology, in which content produced by terrorists cannot be taken down easily or quickly because the surge of interest around discovery and sharing overwhelms attempts to delete and contain it.

If this is the world we now inhabit and, by using social media, contribute to cementing, the questions directed at companies and governments may be better directed at ourselves. How many of us searched for the video, and shared it?

How many of us, without having any reason to, searched for, read and shared the killer’s document? If we cannot control our baser instincts, we become part of the problem.

The terrorists are counting on this, and on us, to succeed. We should not let them win.

Sanjana Hattotuwa is a PhD student at the National Centre for Peace and Conflict Studies (NCPACS), University of Otago.
