By Tr1n1ty

Barriers to the Metaverse: Part 3

In our ongoing “Barriers to the Metaverse” series, we continue to explore the factors that could impact the Metaverse and the potential challenges that might need to be overcome before a boundaryless, persistent, immersive virtual world can be fully realised.

In our previous posts, we have looked at the underlying network infrastructure required to power the Metaverse and the issue of open standards and interoperability. Next, we’re looking at privacy and security.


The internet in its current iteration, Web 2.0, has allowed us to share large amounts of data with ease, including personal details such as gender, sexual orientation, location data, political affiliation, religion, photos, videos — the list goes on. Unfortunately, sharing this type and volume of data raises privacy and security concerns, with organisations holding, and all too often losing, millions of people’s data.


These data concerns led to the General Data Protection Regulation (GDPR), which came into force on the 25th of May 2018, superseding the previous Data Protection Directive. The GDPR is one of the most wide-ranging pieces of legislation passed by the European Union and was introduced to standardise data protection law across the single market and to give people greater control over how their personal information is used.


Unfortunately, the GDPR has not led to the elimination of data breaches and the loss of people’s data. Since it was enacted, over 900 fines have been issued to companies failing to protect their customers’ information. These include Amazon ($877 million), WhatsApp ($255 million), Google Ireland ($102 million), and Facebook ($68 million). Most recently, Meta was fined over $18 million because it could not readily demonstrate that it had implemented security procedures to protect user data. The question is: how will these organisations fare at protecting our information in the Metaverse, where the stakes will likely be much higher?


The advent of the Metaverse will introduce new privacy and security issues, many of which we are not yet able to fully grasp. However, one thing is clear: if we are to create persistent digital versions of ourselves that can interact with anyone on the planet in real time, then we will be exposing ourselves to bad actors — adversaries in the Metaverse interested in stealing our information and using it for personal and/or financial gain. The technology may be new, but the threats will likely be familiar in the short term.


Things to watch out for will include phishing attempts, whereby someone posing as a person or an organisation you trust attempts to extract sensitive information, like bank account details. Some banks have already set up shop in the Metaverse, and it’s easy to imagine nefarious groups or individuals creating virtually identical banks in order to swindle people out of their money. Other areas of concern include trading scams (an existing issue on the online gaming platform Roblox), fast-money scams, fake Metaverse sites masquerading as the real thing, and Metaverse tech support scams.


There is a significant body of academic research behind the science of trustworthiness. We tend to trust the people around us, and there will be many people inhabiting the same virtual environments in the Metaverse. We also tend to trust people who “look” trustworthy: faces that appear happy, or that have feminine or even baby-like features, tend to be trusted more, even when the person behind that face does not have trustworthy intentions. It’s not hard to imagine bad actors using machine learning algorithms to create the ultimate trustworthy guise in order to take advantage of others.


Do you foresee a race for the most trustworthy face in the Metaverse? How do you think the future titans of the Metaverse will fare in protecting our information? Please leave a comment below!


Photo: Roth Melinda / Unsplash
