World-first online safety laws have the Metaverse in their sights

Yesterday’s first reading of the Online Safety Bill represents the first stage of the Bill's passage through the House of Commons.


To date, most internet content has been governed by a self-regulatory approach. Critics, including parliamentary committees, academics, and children's charities, have argued that self-regulation by internet companies is not enough to keep users safe and that statutory regulation should be introduced.


The Bill, originally announced in 2019, is intended to protect children from harmful content such as pornography and to limit people's exposure to illegal content, while protecting freedom of speech. It will require social media platforms, search engines, and other apps and websites that allow people to post their own content to protect children, tackle illegal activity, and uphold their stated terms and conditions.


Additionally, the UK's independent communications regulator Ofcom will have the power to fine companies that fail to comply with the new laws up to 10% of their annual global turnover, to force them to improve their practices, and to block non-compliant sites.


Impact on the Metaverse


Nadine Dorries, the Secretary of State for Digital, Culture, Media and Sport, has previously stated in a Joint Committee meeting that the new law would apply in the Metaverse.


The Bill (as introduced) applies to “user-to-user services” which are defined as “an internet service by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service.”


It is clear that any version of the Metaverse would meet this definition and would therefore fall within the scope of the Bill.


The Bill places a duty of care on providers of user-to-user services, including duties to undertake "suitable and sufficient illegal content risk assessments" and "to take or use proportionate measures to effectively mitigate and manage the risks of harm to individuals".


Last year, a beta tester of Meta's virtual-reality social media platform, Horizon Worlds, reported that they had been groped by a stranger. Meta's internal review of the incident found that the tester should have used Horizon Worlds' built-in "Safe Zone" tool, which provides users with a protective bubble within which no one can touch them, talk to them, or interact with them in any way.


In this scenario, it could be argued that Meta had taken proportionate measures to effectively mitigate and manage the risks of harm to the tester.


The question that remains is how the Online Safety Bill will impact the tech giants' Metaverse plans.


Photo: Jessica Lewis / Unsplash
