Tackling online abuse

#InclusiveBritain

Equality Hub
Apr 17, 2023

by Richard Laux, Chief Statistician of the Cabinet Office

It was appalling to see the online racial abuse directed at Marcus Rashford, Jadon Sancho and Bukayo Saka following the dramatic conclusion of the Euro 2020 men’s final. This sparked a series of questions about the nature and scale of the online abuse the footballers received. The PFA and Signify looked at the social media accounts of 55 Home Nations footballers attending the Euros and found that most of the abusive posts they received were racist (42%) or homophobic (41%). Further research from FIFA revealed that Rashford, Sancho and Saka were the most abused players in the Euro 2020 final. The scale of these attacks drew a public response from Twitter, which acknowledged that while this abusive content was a global issue, the majority of the tweets originated from inside the UK.

And it’s not “just” at the Euros. The PFA and Signify also looked at the social media accounts of 400 Premier League players across the 2020/21 season and found discriminatory abusive posts directed at 176 (44%) of them. 20% of this abusive content was directed at the accounts of just 4 players, and more than 50% of the abusive posts were flagged as coming from inside the UK.

To understand the scale of this, Ofcom and the Alan Turing Institute had experts examine a random sample of 3,000 tweets sent to players during the 2021/22 season. They found that 3.5% of these tweets were abusive, with 8.6% of the abusive tweets attacking players’ identities, such as their race or religion. They also used machine learning techniques to process a wider collection of 2.3 million tweets, and estimated that 2.6% of all tweets were abusive in nature. These abusive tweets tended to be focussed on a handful of high-profile players: just 12 players, including Marcus Rashford, received 50% of all abusive tweets.
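
To make the two-stage approach above concrete, here is a minimal sketch in Python: a simple classifier is trained on a small expert-labelled sample and then used to estimate the share of abusive posts across a wider, unlabelled collection. The tiny inline dataset and the model choice (TF-IDF features with logistic regression) are illustrative assumptions, not the methods Ofcom and the Alan Turing Institute actually used.

```python
# A minimal sketch of prevalence estimation: train on an expert-labelled
# sample, then predict over a wider unlabelled collection. The data and
# model choice here are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for the expert-labelled random sample (1 = abusive, 0 = not).
labelled_tweets = [
    ("great goal last night", 0),
    ("you were brilliant today", 0),
    ("you are a disgrace, get out of our club", 1),
    ("terrible miss, but unlucky", 0),
    ("useless, you should never play again", 1),
    ("what a performance, well deserved", 0),
]
texts, labels = zip(*labelled_tweets)

# A simple text classifier: TF-IDF features into logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Stand-in for the wider, unlabelled collection of tweets.
wider_collection = [
    "fantastic save in the second half",
    "you are useless and a disgrace",
    "unlucky today, heads up",
]

# The estimated prevalence is simply the mean of the predicted labels.
predictions = model.predict(wider_collection)
print(f"Estimated abusive share: {predictions.mean():.1%}")
```

In practice the labelled sample would be far larger, the classifier far more careful about context and dialect, and the prevalence estimate would carry a margin of error, but the logic of scaling up from a hand-labelled sample is the same.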

But is it only footballers who receive online abuse?

Of course not! The Alan Turing Institute reports that between 30% and 40% of UK citizens have witnessed online abuse in some form, and that between 10% and 20% have been the direct victim of online abuse themselves. However, the Institute also notes that UK evidence about online abuse is “fragmented, incomplete and inadequate for understanding the prevalence”. If we don’t know the scale of online abuse, or what drives it, how can we raise awareness of the issue, develop evidence-based policies and measure their impact?

Why don’t we have measures of online abuse?

Measuring online abuse is a challenging task that requires collaboration between government statisticians, big data analysts and social media companies. As the House of Commons Library has noted, there is a need to define and measure online abuse with more “precision”, “consensus” and “standardisation”. Confusion over how best to define the abuse that takes place on social media has made it harder for researchers to establish its scale. Clarity about what we mean by online abuse, and agreement on a common definition, should lead to more precise statistics and more standardised research.

While there are many similarities with ‘offline’ hate speech, online abuse has unique properties that make it particularly difficult to address. These include: the anonymity provided to users by many online platforms; the intrinsic opportunities for hateful content and messages to spread virally; the near-limitless reach of abusive content, particularly across international borders; and the difficulty of holding the companies that provide these platforms accountable when their headquarters are in other countries or continents (Gagliardone et al 2015).

What do we need to take into account when measuring online abuse?

Within action 3 of the UK Government’s Inclusive Britain Strategy, we outlined the need for a greater understanding of the prevalence and impact of online harms. Over the last year we have engaged with experts from government departments, academia, think tanks and international organisations (such as the UN). Elsewhere I have made the case for a framework-driven approach to the business of official statistics, as a means of promoting clear thinking about subject and purpose: what we are trying to measure and estimate, and why. Consistent with this approach, we have developed a five-point framework for measuring online abuse (a sketch of how its dimensions might be recorded in data follows the list):

  1. The degree of offensiveness of the statement. We can think of this as varying from ‘low hazard’, through ‘causing harm’ (for example, images, words and videos that are legal to create and view but are likely to be perceived as offensive), to ‘criminal’ (hate crime: posting and sharing hateful and prejudiced content against an individual, group or community, which can take the form of derogatory, demonising and dehumanising statements, threats, identity-based insults, pejorative terms and slurs).
  2. The intent of the person making the statement. Unlike the degree of offensiveness, this is more of a binary scale: either the person genuinely did not mean to cause offence, or they had every intention of causing it. Intent is impossible to measure directly, short of asking each person about each statement, and even then people might not be candid. So the measurement challenge is to find a relevant proxy: for example, if a person has repeatedly made statements that are considered offensive, it might be reasonable to assume that causing offence was their intention this time too.
  3. The nature of the ‘victim’. While we are interested in abuse that directly affects individuals, we are also interested in abuse that indirectly affects groups or populations of potential victims. These people may not be the intended victims of the abuse but, because they are linked to the intended victim (for example through shared protected characteristics), they become indirect victims of it. Victims might range from a specific individual (targeted because of statements they have made, things they have done or their personal characteristics), to a defined group of people (such as England footballers from an ethnic minority background), to the general public or an extremely large group of people (such as members of a political party).
  4. The extent of any response. Having received online abuse, the victim might decide not to respond. However, if the victim does respond to abuse in a group/public setting, the nature of this response could, in turn, offend others.
  5. The location of the person making the statement. This is particularly relevant when considering legal responses to online abuse, because of the jurisdiction of the courts: it would be challenging to pass legislation in the UK that applied to a foreign national posting comments online from outside the UK about someone living in the UK. The measurement challenge is to determine where a particular statement was made, which would require information from the technology (e.g. social media) company.
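
As mentioned above, here is a minimal sketch of how these five dimensions might be encoded as a data record for analysis, including a simple proxy for intent along the lines described in point 2. All of the names, and the threshold in the intent proxy, are illustrative assumptions rather than an official schema.

```python
# A minimal sketch of the five-point framework as a data record. Every name
# here (the enums, the dataclass, the intent threshold) is an illustrative
# assumption, not an official schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Offensiveness(Enum):       # point 1: degree of offensiveness
    LOW_HAZARD = 1
    CAUSING_HARM = 2
    CRIMINAL = 3

class VictimType(Enum):          # point 3: nature of the victim
    INDIVIDUAL = "individual"
    DEFINED_GROUP = "defined group"
    GENERAL_PUBLIC = "general public"

@dataclass
class AbusiveStatement:
    offensiveness: Offensiveness     # point 1
    prior_offensive_posts: int       # input to the intent proxy (point 2)
    victim_type: VictimType          # point 3
    victim_responded: bool           # point 4: extent of any response
    poster_country: Optional[str]    # point 5: location, where known

def likely_intentional(s: AbusiveStatement, threshold: int = 3) -> bool:
    """Proxy for intent (point 2): a history of offensive posts suggests
    the offence was intended. The threshold is an arbitrary assumption."""
    return s.prior_offensive_posts >= threshold

example = AbusiveStatement(
    offensiveness=Offensiveness.CAUSING_HARM,
    prior_offensive_posts=5,
    victim_type=VictimType.DEFINED_GROUP,
    victim_responded=False,
    poster_country="GB",
)
print(likely_intentional(example))  # True: 5 prior posts meets the threshold of 3
```

Encoding the framework in a structured form like this would make it straightforward to aggregate records consistently, for example counting statements by degree of offensiveness or by the nature of the victim.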

What next?

We are now engaging with other government departments to consider how to embed the new framework, and how it will work alongside the new Online Safety Bill. And we will continue working with our international partners, because online abuse is an issue that needs cross-border collaboration and action.

