Twitter has decided to crack down on racist abuse against black England players after the Euro 2020 final, but in the face of widespread demand for social media platforms to act, is its approach sufficient?
The abuse was also posted on Facebook and comes after players and clubs boycotted social networks entirely in April to protest a growing wave of discrimination against footballers.
Here are the steps social media platforms are taking to tackle the problem and issues that are preventing further progress.
What is required?
There are two main demands of social media platforms.
The first is that: “Messages and posts should be filtered and blocked before being sent or published if they contain racist or discriminatory material.”
The second is that “all users should go through an improved verification process which (only if required by law enforcement) allows precise identification of the person behind the account.”
What are the filtering issues?
The challenge with the first request – filtering content before it is sent or posted – is that it requires technology that can automatically identify whether a message contains racist or discriminatory material, and that technology simply does not exist.
Filtering cannot be based on a list of words – people can invent new epithets or surrogates – and existing racist terms can appear in contexts that do not spread hatred, such as a victim seeking help by quoting an abusive message sent to them.
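The weakness is easy to demonstrate. The sketch below uses a hypothetical blocklist with placeholder terms (stand-ins for real epithets, which are not reproduced here): a naive word-list filter blocks a victim quoting abuse to ask for help, while a trivially disguised variant of the same slur sails through.

```python
# Minimal sketch of a naive blocklist filter and its failure modes.
# BLOCKLIST contents are hypothetical placeholders, not real terms.
BLOCKLIST = {"slur_a", "slur_b"}

def naive_filter(message: str) -> bool:
    """Return True if the message would be blocked."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not BLOCKLIST.isdisjoint(words)

# A victim quoting an abusive message to report it is blocked...
print(naive_filter("Someone just called me a slur_a, please report them"))  # True
# ...while a disguised spelling of the same slur is missed entirely.
print(naive_filter("You are such a sl*r_a"))  # False
```

The filter cannot tell intent from vocabulary, and any novel spelling defeats it – which is the article's point about why pre-send filtering is not a solved problem.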
How do they filter out other materials?
Social media platforms have been successful in filtering and blocking terrorist material or images of child sexual exploitation, but this is a different problem from a technological standpoint.
Thankfully, there is a limited amount of abuse imagery in circulation. Sadly that number is growing, but because the vast majority of this material has been uploaded before, it has also been fingerprinted – making it easier to identify in future and remove automatically.
Matching a previously seen image and understanding the meaning of a message in English are very different technological challenges.
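The difference can be made concrete. Matching known imagery is essentially a lookup against a database of fingerprints – a rough sketch is below. (Real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; this illustration substitutes an exact cryptographic hash, so it is a simplification, not how production matching works.)

```python
# Simplified sketch of fingerprint matching for known abuse imagery.
# Production systems use perceptual hashing; SHA-256 is used here
# purely to illustrate the lookup structure.
import hashlib

known_fingerprints = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_known_image(data: bytes) -> None:
    """Record the fingerprint of previously identified material."""
    known_fingerprints.add(fingerprint(data))

def is_known_abuse_image(data: bytes) -> bool:
    """Check an upload against the stored fingerprints."""
    return fingerprint(data) in known_fingerprints

register_known_image(b"bytes-of-a-previously-identified-image")
print(is_known_abuse_image(b"bytes-of-a-previously-identified-image"))  # True
print(is_known_abuse_image(b"bytes-of-a-new-image"))                    # False
```

Once material has been fingerprinted, re-uploads become a fast set-membership check – there is no equivalent shortcut for judging whether a never-before-seen sentence is racist, which is why text is the harder problem.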
Even the most advanced natural language processing (NLP) software can struggle to take into account context that a human would innately understand, although many companies claim their software handles this successfully.
What are companies saying?
Instead, Twitter and Facebook say they quickly deleted the abusive posts after they were posted.
Twitter said that “through a combination of machine learning-based automation and human review, we quickly deleted over 1,000 Tweets and permanently suspended a number of accounts for violating our rules.”
A Facebook spokesperson said: “We quickly removed comments and accounts pointing to abuse against England footballers last night and we will continue to take action against those who break our rules.”
They added: “In addition to our work to remove this content, we encourage all players to turn on hidden words, a tool that means no one should see abuse in their comments or private messages.”
Hidden Words is Facebook’s filter for “offensive words, phrases and emojis” in DM requests, but the shortcomings of this approach are described above.
What are the issues with the verified ID requirement?
The call for social media users to identify themselves to platforms – if not necessarily to the public – has also been echoed by the professional body BCS, the chartered institute for IT.
Online anonymity is valuable, as Culture Secretary Oliver Dowden acknowledged during a parliamentary debate, noting that it is “very important to some people – for example, victims fleeing domestic violence and children who have questions about their sexuality that they don’t want their families to know they are exploring. There are many reasons to protect this anonymity.”
Suggestions for an identity escrow – in which the platform knows a user’s identity, but other social media users do not – raise questions about the trustworthiness of staff inside the platforms for “groups such as human rights defenders [and] whistleblowers” whom the government has identified as deserving of anonymity online.
And if companies held the real identities of these users in escrow, they could be exposed to law enforcement, with a number of undemocratic states known to target dissidents who speak freely against their governments on social media.
It’s also unclear what processes social media platforms might have in place to verify these identities.
“Online abuse is not anonymous,” said Heather Burns, policy officer at the Open Rights Group.
“Almost all of the current wave of abuse is immediately traced to those who shared it, and social media platforms can pass the details to law enforcement.”
“The government cannot pretend this problem is not its responsibility. Calls for social media platforms to remove the material miss the point and let criminals get away with it,” said Ms Burns.
The biggest challenge for UK police when attempting to prosecute people on the basis of tweets is Twitter’s extremely low compliance rate with requests for information. According to the company’s transparency report, less than 50% get answers!
– Alexandre Martin (@AlexMartin) July 12, 2021
But transparency figures released by Twitter reveal that the company responds to less than 50% of all inquiries from law enforcement in the UK regarding accounts on its platform.
What is the government going to do?
Oliver Dowden said: “I share the anger at the appalling racist abuse of our heroic players. Social media companies need to up their game to address it and, if they don’t, our new Online Safety Bill will hold them to account with fines of up to 10% of global revenue.”
The Online Safety Bill – a draft of which was published in May – introduces a legal duty on social media platforms to tackle harm, but it does not define what harm is.
Instead, the judgment on this will be left to the regulator Ofcom, which has the power to sanction a company that does not meet these obligations with a fine of up to 10% of its global turnover.
Notably, a similar power is available to the Information Commissioner’s Office to deal with data protection breaches, and the maximum fine has yet to be imposed on a large platform.
In cases such as racist abuse, the content will obviously be illegal, but the language regarding the duty itself is vague.
As drafted, platforms will be required to “minimise the presence” of racist abuse and the length of time it remains online. It could be that Ofcom concludes the platforms are already doing this.
What are others saying?
Basically, the question is who is responsible for managing this content.
Imran Ahmed, Director of the Center for Countering Digital Hate (CCDH), said: “The disgusting racist abuse of England players is a direct result of Big Tech’s collective failure to tackle hate speech for several years.
“This culture of impunity exists because these companies refuse to take decisive action and impose real consequences on those who spit hate on their platforms.
“In the immediate term, racists who abuse public figures should be immediately removed from social media platforms. Nothing will change until Big Tech decides to radically change its approach to this problem.
“So far political leaders have offered nothing but words, without taking action. But if social media companies refuse to take notice of the problem, the government will have to step in to protect people.”
Ms Burns replied: “Unlawful racial abuse sent to England footballers must be prosecuted under applicable laws.
“The government needs to ensure that the police and the judiciary enforce existing criminal law, rather than abdicating responsibility by making it the problem of social media platforms. Social media sites do not run courts, tribunals and prisons,” she added.
What else can we do?
Graham Smith, a respected cyber law expert at Bird & Bird, told Sky News he believes the government and police could use existing powers as an “online ASBO” to target the most egregious anti-social behaviour online.
In an interview with the Information Law and Policy Centre, he said the potential use of ASBOs (anti-social behaviour orders – since replaced by IPNAs, or injunctions to prevent nuisance and annoyance) “has been largely ignored”.
IPNAs “have controversial aspects, but at least have the merit of being targeted against perpetrators and subject to due process before the courts,” Smith added, noting that “there could be consideration of expanding their availability to certain voluntary organisations concerned with victims of online misconduct”.