The UK’s Online Safety Bill: more revisions needed to ensure legislation tackles harms on the web

The UK’s Online Safety Bill aims to bring law and order to the Digital Wild West and was recently introduced to the House of Commons. Anna George, a social data science PhD student at the University of Oxford, supported by the Avast Foundation, reflects on the impact of the legislation.
I grew up in Dodge City, Kansas, a part of the United States that many associate with settlers, cowboys, and outlaw gangs; a part of the United States that many call the “Wild West”. It is fitting, then, that I spend my days here in England at the University of Oxford researching harms in an online space that many today consider the “Digital Wild West”.
As in the Old West, powerful companies hold outsized sway. Substitute social media and e-commerce platforms for the railroad and mining companies and you have some idea of the chaos and mayhem that self-regulation brings to online harms in this new environment. Much as in the Wild West, many have now come to realise that a sheriff might be needed to bring the rule of law and order to the online world. The UK’s Online Safety Bill takes on the challenge of trying to define and regulate online harms. The Bill appoints technology companies as the sheriffs of the Digital Wild West and Ofcom as the deputy. I study malicious actors online (e.g., trolls, hate groups) and misinformation, so I was interested to see how the Bill would address these topics.
According to the Joint Committee on the Draft Online Safety Bill, the Bill is necessary because leaving technology companies to regulate online harms on their own has not worked. The Committee states that technology companies’ “[a]lgorithms, invisible to the public, decide what we see, hear and experience. For some service providers this means valuing the engagement of users at all costs, regardless of what holds their attention. This can result in amplifying the false over the true, the extreme over the considered, and the harmful over the benign.” The Bill, introduced to the House of Commons last month, leaves many of the decisions about how to regulate online harms to the platforms themselves and gives Ofcom the power to compel technology companies to comply with its provisions against online harms. But it assumes that all technology companies want to reduce online harms, which is not always the case.
Part of my research, supported by the Avast Foundation, a non-profit organisation that works to create an ethical digital world that is inclusive, transparent, and safe, focuses on fringe platforms that pride themselves on “free speech”, which to them means harassment, trolling, and abuse are all acceptable. It is no secret that these fringe platforms will not redesign themselves because of a government order. For example, when Germany tried to impose restrictions on the platform Gab under its Network Enforcement Act, Gab told its users in an email that it would not obey the rules set out in the Act and would instead fight it, even if that meant fines. Given this past behaviour, I expect Gab and other fringe websites would respond to the UK’s Online Safety Bill in a similar manner.
When I first published my blog post on the draft Bill in February, the draft had limited protections in place for victims of online abuse. The current form of the Bill includes more protections against anonymous trolls: users of some social media platforms will be able to block unverified users from commenting on their content. Nevertheless, some harmful content will not be banned directly; instead, technology companies will need to limit people’s exposure to it. This is a tricky balance. Past research on what happens when harmful communities are allowed to stay on platforms shows that their speech still spills over into other online communities. Moreover, this approach puts the onus on potential victims to safeguard themselves from online harms. Instead, both the social media companies and the abusers should be held responsible for their roles in online harms.
Misinformation is a recognised online harm, but the Bill only lightly touches on the subject. I have studied how anti-vaccination narratives continued to spread on mainstream social media websites even after some anti-vaccination leaders were removed from those platforms. The continued circulation of anti-vaccination narratives hinders progress against the pandemic because, as research has shown, vaccine misinformation lowers uptake of the COVID-19 vaccine in the UK. There are many other types of misinformation, such as voter misinformation, which can damage the democratic process. One tactic spreaders of misinformation use is to set up low-quality news websites or to style themselves as journalists. The Bill does not address this common tactic and leaves news content exempt from regulation, which means these low-quality news sites can continue to spread misinformation as they already do. I am not suggesting that news content should be regulated, but that there should be a standard for what qualifies as “news” content, such as a requirement that it be fact-checked.
Thankfully, we do not face the dangers of the Wild West, such as shootouts, but real dangers are still encountered online every day. The Online Safety Bill is a good first step towards bringing law and order to the Digital Wild West, but more revisions may be needed.
Anna George is a PhD student on the Social Data Science programme at the University of Oxford, supported by the Avast Foundation.