Toby Newton-Dunn, Olivia Archibald

Social media abuse: A need for urgent legal intervention


In the latest article for The Legal Pitch, Toby Newton-Dunn and Olivia Archibald assess the fallout from Euro 2020, which culminated in a wave of online racial abuse. With reference to legal and practical implications, the piece covers aspects of criminal law, the Government’s draft Online Safety Bill and societal attitudes, in an attempt to inform and raise awareness of an issue that seems too easily forgotten.

(Image courtesy of www.mirror.co.uk)


Introduction


The final of Euro 2020 had been described as the greatest day in English football history since 1966; football seemed as if it was finally “coming home”, and Gareth Southgate’s diverse and influential team had given the nation some hope to cling onto after the past 18 months of suffering caused by the pandemic. As devastating as it was for England to lose on penalties to Italy, missing out on their first major trophy in 55 years, what followed was as abhorrent as it was inevitable: after Marcus Rashford, Jadon Sancho and Bukayo Saka missed their respective penalties, a sudden wave of racist abuse flooded each of their social media channels.


The events following the final have intensified calls for social media companies to take racism seriously and do more to protect their users. It is clear that the current mechanisms in place are not robust enough to prevent racist ‘trolls’ from infiltrating high-profile accounts, and further, that these accounts operate in a ‘consequence free’ zone, most often anonymously, making it all but impossible to establish the true identity behind them.


Firstly, this article will examine the current legal and practical problems in preventing online abuse. Secondly, it will evaluate some potential solutions; and thirdly, it will discuss the likely impact of the Government’s Online Safety Bill and whether it has the power to eradicate online abuse for good.


Why online abuse is difficult to prevent


Facebook, Instagram, Twitter, YouTube and TikTok are currently the most dominant social media platforms in the world. However, they have so far failed to create a mechanism that can filter and block messages or posts containing racist or discriminatory material before they are sent. The platforms argue this is because filtering cannot be based on a simple ‘list of words’: new words, phrases or substitute characters can be invented to convey a message in a discreet and covert way, for example using ‘monkey’ or ‘banana’ emojis in the comments sections of black users, making it difficult for the technology to detect whether these are being used offensively.
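To make that limitation concrete, the sketch below is a minimal, hypothetical Python illustration (not any platform’s actual system, and using placeholder entries rather than real slurs) of how an exact word-list filter misses abuse expressed through emojis or substituted characters.

```python
# Hypothetical illustration: a naive word-list filter of the kind the
# platforms say is insufficient. BANNED_TERMS holds placeholder entries only.
BANNED_TERMS = {"slur_a", "slur_b"}

def is_blocked(comment: str) -> bool:
    """Block a comment only if it contains an exact banned term."""
    text = comment.lower()
    return any(term in text for term in BANNED_TERMS)

print(is_blocked("you slur_a"))   # True  - an exact match is caught
print(is_blocked("🐒🍌"))          # False - emoji abuse slips through
print(is_blocked("s1ur_a"))       # False - character substitution slips through
```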


Despite this, social media companies have had great success in filtering and blocking terrorist material and images of child sexual exploitation and abuse, in permanently banning Donald Trump, and in filtering misinformation regarding COVID-19 vaccines. The platforms argue that this material is easier to control from a technological perspective because it is fingerprinted, making it easier to detect in future and allowing automatic removal. However, even when individuals take matters into their own hands and report abusive and offensive comments, social media companies still fail to remove these messages.


According to Instagram, reported comments on Bukayo Saka’s account containing the ‘N word’ did not infringe Instagram’s guidelines; an appalling thought. It begs the question whether these companies are being exposed for a severe lack of empathy and understanding towards their users, failing to protect them when they are at their most vulnerable.


These companies have the technology to show adverts on your timeline for products you were talking about with friends moments earlier, yet they apparently do not have the technology to stop online hate, whether through algorithms, their own moderators, or reports from the general public. Even though Twitter announced it had removed over 1,000 comments, the damage had already been done, and it is clear that not enough is currently being done to prevent these recurring incidents.






Analysis from a criminal law perspective - what are the issues and is anything being done to combat them?



A significant legal hurdle is that the current legislation in England & Wales used to prosecute abuse online is outdated and ill-suited to the modern nature of internet abuse. The criminal laws that most directly address online communications fall under s.1 of the Malicious Communications Act 1988, which overlaps with s.127 of the Communications Act 2003. Both of these laws are ambiguous because they require a message to be sent that is “grossly offensive or of an indecent, obscene or menacing character”. It therefore remains unclear to online users, technology companies and law enforcement agencies exactly what is meant by ‘grossly offensive’ or ‘indecent’ communication, as there is no express indication of where the line is crossed between merely offensive communication and communication that should attract a criminal sanction. Furthermore, some behaviours, such as online harassment, are not specifically addressed by either Act, and it is difficult to bring them within existing criminal offences that were not created with the current online space in mind.


In response to these issues, and with the threat of online abuse increasing exponentially, the Law Commission launched a consultation paper on 11 September 2020, making a number of proposals for reform that would make the law clearer and better suited to effectively targeting serious abuse online, balanced against the need to protect freedom of expression. Some of the changes included:


  • A new offence to replace the existing communications offences (under the Malicious Communications Act 1988 and the Communications Act 2003), to criminalise behaviour where a communication would be likely to cause harm.


  • This would cover emails, social media posts and WhatsApp messages, in addition to pile-on harassment (when a number of different individuals send harassing communications to a victim).

  • This would include communication sent over private networks such as Bluetooth or a local intranet, which are not currently covered under the CA 2003.

  • The proposals include introducing a requirement of proof of likely harm. Currently, neither proof of likely harm nor proof of actual harm is required under the existing communications offences.


Adopting these changes should, in theory, enable more individuals to be held accountable, as the law would be easier to interpret and more conduct would be caught within the scope of the offence. However, the offence would be strictly limited to incidents that occur in England & Wales, whereas abusers can target victims from anywhere in the world and hide behind their anonymity with fake accounts that are too easily created. Ultimately, the criminal law is an important but limited part of the solution to online harms, which will require not just regulatory reform, but also educational, technological and cultural change.

Is the technology available? In April 2021, it was revealed that six Premier League clubs were in discussions with US tech firm Respondology, whose software allows abuse to be hidden in real time on several major social platforms, meaning players would not be exposed to any hateful content on their accounts. Although the abuse would be invisible, it would still be logged, and abusers could face prosecution if necessary. The service acts as a discreet, personalised comment-moderation tool running 24/7, 365 days a year, enabled partly by artificial intelligence and partly by around 1,000 human moderators, with clubs paying a monthly fee to secure a contract with Respondology.


This proactive approach of removing hate in real time seems far more beneficial than a reactive approach of removing comments after they have been posted and seen by the abused, which may subsequently have a psychological impact on the individual. Currently, Respondology’s software is compatible with Facebook, Instagram, YouTube and TikTok, but not yet with Twitter: although Twitter allows users to hide replies, it also lets anyone view hidden replies if they choose, making visible all the comments that Respondology aims to conceal. When the software was pitched to professional sports teams in the US, it is notable that every single NFL, NBA and NHL team that tested it went on to sign a contract.

However, it does not sit well that clubs have to pay private firms to eradicate trolls when the social media platforms fail to do so themselves. Why should anyone have to pay to prevent themselves or their employees from being racially abused online? Does that suggest that every large organisation would have to pay a monthly fee to protect staff well-being? Nor is it just footballers who receive abuse online; individuals can become vulnerable very quickly when publicly scrutinised for their actions, leading to cases of suicide when a person cannot prevent abuse coming at them from every direction. For example, former Love Island presenter Caroline Flack took her own life after she was publicly shamed while facing trial for allegedly assaulting her ex-boyfriend. People flooded her social media channels with horrific abuse, fuelled by the British press’s attacks on her, which completely disregarded her mental well-being after she had been removed from Love Island and her career left in jeopardy. If the only viable solution is to pay a monthly subscription fee to prevent people in the public limelight from seeing abuse, then it is clear social media companies are considerably failing in their duty of care to safeguard their users. People should not have to pay to be protected.

Account verification


Social media sites have long been ‘consequence free’ areas for abuse, and campaigners have argued for many years that these companies should apply a mandatory identity verification requirement for users when opening an account. Since the beginning of social media, it has been far too easy to create an account under any identity (name and/or age) without this being checked by the site’s moderators, opening the floodgates to abuse by anonymous users who are rarely held to account.

This is not to say that public anonymity should be disregarded: it is important to large groups of people, and no-one should have to display their real name online. For these reasons anonymity is precious and should be preserved. Even so, details about an individual’s identity provided when opening an account could be kept private and protected behind the account, so that perpetrators could be readily identified in the event they choose to send abuse. Social media companies have argued it would not be fair to impose this on all their users because not everyone has access to official documentation. However, if someone cannot prove who they are, why should they have the right to register an account and do whatever they wish without the relevant authorities being able to identify them? Perhaps it would exclude children from being able to use these sites as they may not have a driving licence, but other forms of ID could be accepted, such as a passport or birth certificate (or even a parent or guardian’s ID so they can be traced). It is clear that not only do adults partake in racist abuse, but children see it and repeat it, as was clearly documented when a teenager racially abused Ian Wright via a direct message on Instagram in May 2020.


These failures raise the question of what social media channels are prioritising. Are they simply trying to maximise user activity and make joining their platforms as easy as possible, since more users create further advertising opportunities and thus help to retain their dominance in the market? Would increased barriers to creating accounts reduce their turnover? These are the questions that need answering if account verification is to be used to reduce online hate.


If companies proceeded with the verification process, another logical step to increase deterrence could be to put online abusers on a register that is made available to employers undertaking background checks. The consequences need to go beyond identifying the perpetrators: further action must be taken, whether that be imprisonment, formal registration or compulsory education. This has been campaigned for by Katie Price, whose son Harvey has received online abuse for his disability, the colour of his skin and his size. In an interview with Sky News, Price said: “we are all allowed freedom of speech but you know when you’re crossing the line, more people are committing suicide, it’s just getting worse, it’s on a wider spectrum”. When she complained about all the cyber-bullying her son received, she said she got no response from any of the social media companies. She added: “there have been more suicides, more mental health issues, more abuse...the language on there is getting worse because people know they can get away with it”. This was exemplified when she printed off abusive comments and handed them to the police, only to be told that there was nothing they could do because no mechanism was in place.

This is hardly breaking news, but it is ultimately very concerning that, without consequences and mechanisms in place, trolls will simply continue to do what they do best. If law enforcement agencies are planning on issuing sanctions for those guilty of abuse, they must do so consistently. Three days after the final of Euro 2020, Boris Johnson announced that people guilty of racist abuse of footballers would be banned from matches. Whilst this would be a welcome step in tackling the issue, it seems strange to limit the consequences to football fans, and to footballing sanctions only. As aforementioned, racial abuse occurs daily against singers, actors, politicians, celebrities, YouTubers and people from all walks of life; there should be a proportionate punishment for everyone found guilty. From a more holistic perspective, it is as clear as it is depressing that racism is still a problem living in our society, and banning a minority of football fans, who will themselves be a mammoth task to track down, is not the most logical first step in removing these trolls from social media sites.

Online Safety Bill


The Online Safety Bill, published in draft by the Government in May 2021, will usher in a new age of accountability for user-to-user and search service providers. Relevant service providers will now find themselves with a duty of care towards their users that has previously been lacking. The Bill also confers powers on the Office of Communications (OfCom) to oversee the implementation of the new legislation and the future development of codes of practice to assist with clarity and therefore compliance. It is highly significant that there will now be a regulator to hold relevant service providers to account.


The Bill encompasses not only ‘illegal’ content but also content that is “harmful” to adults or children, with secondary legislation to set out the meaning of “harmful”. This is significant because currently only a small proportion of online abuse can be prosecuted. It will hopefully mean that, going forward, racist, derogatory and discriminatory content is moderated by service providers with the same gravity as terrorist content and content involving child sexual exploitation.


Although the Bill has the primary aim of protecting UK users of online services, it is extra-territorial, encompassing services with links to the UK: services that have a significant number of users in the UK, services where UK users form one of the provider’s target markets, or services that are capable of being accessed by UK users. This Bill therefore goes much further than the existing legislation, which is limited to incidents that occur within England and Wales.


Relevant service providers will be required to undertake risk and impact assessments, as “harmful content” under the Bill includes ‘content that the relevant service provider has reasonable grounds to believe has a risk of significant adverse physical or psychological impact on an adult with ordinary sensibilities’. This impact can include short-term depression and anxiety. The language is highly subjective and therefore suggests that a more robust approach to content moderation will be required from relevant online service providers.


However, it will not simply be possible for online service providers to remove controversial content arbitrarily. Category 1 service providers, as defined in the legislation, must take steps to protect their users’ freedom of expression, as well as their users’ privacy, any journalistic content and other content of “democratic importance”. In practice, this is likely to be a difficult balance, but it aims to ensure that fundamental rights in the online sphere remain unaffected by the legislation.


Service providers that fail to meet their duties under the Bill could be subject to OfCom intervention and receive a fine of up to the higher of £18m or 10% of their global annual turnover for non-compliance, and senior management could find themselves the subject of criminal sanctions. OfCom would also be able to seek a court order to disrupt the activities of non-compliant online service providers where it is deemed there is a risk of “significant harm” to individuals in the UK.



Conclusion


It is clear that a number of legal and practical issues make it difficult to prevent trolls from sending abuse online. The current criminal legislation under the Malicious Communications Act 1988 and the Communications Act 2003 is outdated and, in its existing form, lacks the potency to act as an effective deterrent. Although the changes proposed by the Law Commission would make the law better suited to the online world, they fail to consider how anonymous accounts, so often the source of these incidents, would be targeted, and the legislation remains limited to abuse that occurs in England and Wales only. The logical step is for all social media platforms to introduce a mandatory verification process for all accounts, ensuring individual accountability and consequences for guilty actions. This step has the capacity to change the online world for the better, establishing a consistent mechanism that can identify abusers and allow proportionate punishments to be issued.


The Online Safety Bill has the potential to increase the accountability of social media users. However, if the aim of the Bill is to eradicate abuse entirely, that goal is unlikely to be achieved while the draft remains unchanged, because the subjective wording of what constitutes “harmful” content makes it difficult to capture all forms of abuse. The emphasis right now is for social media companies to start taking abuse more seriously. They have the ability to use their technology more effectively and implement the necessary changes immediately in order to make the online world a safer place. New legislation may take years to come into force, yet social media companies can make these changes overnight with the technology and financial resources at their disposal. Time will tell whether the Online Safety Bill has a significant impact in preventing online abuse, but the longer social media companies fail to implement change, the longer history will continue to repeat itself. As Bukayo Saka rightly said, it is love that always wins, not hate.



This article was written by Toby Newton-Dunn, Olivia Archibald and Adam Smith, and the piece reflects the opinions of the authors only.

