
Can We Make the Metaverse a Safe Space For All?

by admin

As excited as we are about the metaverse, we can’t let things get out of hand.

From mid-90s forums to Facebook live streams, the topic of free speech on the internet is a decades-old conversation. As Web3 evolves and the building blocks of the metaverse take shape, AR and VR will lead to new forms of communication technology. Immersive experiences, such as 360-degree videos and avatar-based community spaces, will soon become common ways for users to interact with other people in real time. 

While these experiences are sure to transform how users connect, exchange information and express themselves online, they also carry risks that go far beyond overcoming the barriers of physical space. Personal space violations, hate speech, verbal harassment and underage access to explicit content are just a few of the concerns tied to the emerging metaverse. It’s for this very reason that frightening stories are starting to trickle into our newsfeeds — tales of online “groping” incidents in Meta’s Horizon Worlds, or reports of children being exposed to explicit content on popular metaverse gaming platforms such as Roblox.

Let’s dive deeper into some examples of present-day safety concerns in these nascent days of Web3. We’ll also go over some solutions suggested by experts, as well as how we might learn from the mistakes of previous tech eras.

First — has the internet ever been truly safe?

For many years now, freedom of expression has been seen as a fundamental principle of a successful modern society. Even UNESCO states: “The principle of freedom of expression and human rights must apply not only to traditional media but also to the internet and all types of emerging media platforms, which will contribute to development, democracy and dialogue.”

Since its inception, the internet has enabled people from all corners of the globe to come together and be heard. The earliest era of the internet — now referred to as Web1 — was largely free of control by media organisations, showing us where unrestricted speech could lead. Public forums, chat rooms and early website builders allowed just about anyone to exchange ideas and debate without formal guidelines or oversight from policymakers.

The early days of the web also introduced two key components: free speech and anonymity. Anonymity granted users a newfound sense of freedom and privacy, along with the liberty to detach their legal and physical identity from their internet persona if they so desired. An open, anonymous internet also allowed users to be more transparent, more objective and less biased when building friendships or connections. Platforms that required people to communicate using their real identities weren’t really a thing yet — meaning that users could choose to keep their personal data entirely offline.

Of course, this framework also moulded the internet into a de facto “wild west” of sorts. Hate speech, when it cropped up, was seldom regulated — and those who engaged in illegal activity could more easily dodge accountability. In the words of one commentator from the Web1 era: “It [was] almost impossible to control illegal activity, which [was] perpetrated or discussed over the internet since, in most cases, police [were] not able to track the offender down.”

In Web2, a good chunk of the internet was eventually consumed by Big Tech monopolies (namely Facebook and Google). With large teams and sophisticated content moderation models in place, centralised platforms found ways to mitigate online abuse and explicit content in a bid to keep communications safer and more age-appropriate. 


Facebook, for example, has always enforced a set of Community Standards to regulate the content shared on its platform. This framework places inappropriate content under the governance of a team of moderators, who work continuously to remove rule-violating material. However, platforms like Facebook and Google have also famously compromised users’ rights to free expression and privacy, and over the last decade the ethics behind Big Tech’s content moderation systems have been the subject of extensive questioning and scrutiny.

The short answer? No, the internet has never been truly safe. The freer terrain of Web1 allowed for more unregulated expression and personal privacy, but did little to curb online harassment or prevent underage audiences from accessing explicit content. Web2 has arguably done a better job at the latter, but at the expense of our privacy and rights to ownership.

We’re now faced with the risks that will come with a more immersive internet. Unlike previous iterations of the web, user interactions in the metaverse will be encouraged to mirror real-world actions. While this will allow for more lifelike experiences and limitless opportunities for users to create and monetise, this model is also likely to further exacerbate challenges for user safety.

Can we ensure safe communication in shared metaverse spaces?

As the physical and the digital converge in the metaverse, the lines between good and bad contact look set to blur as well. In light of this, concerns around physical and sexual assault have been raised — with many experts calling for increased preventative measures before the metaverse becomes more widely accessible.

Early iterations of Meta’s first Web3 offering — Horizon Worlds — provided one such example. When Mark Zuckerberg’s first version of a metaverse space launched in beta in late 2021, the floodgates of safety concerns opened with it.

While running a beta test in Horizon Worlds, a woman alleged that she was “virtually groped” inside the platform by male users. Not long after this encounter, another woman reported being “verbally and sexually harassed” by three or four male avatars inside Horizon Venues.


“Sexual harassment is no joke on the regular internet, but being in VR adds another layer that makes the event more intense,” the first woman remarked. “Not only was I groped last night, but there were other people who supported this behaviour — which made me feel isolated.”

In all, the idea that female users could see their safety compromised in the metaverse is extremely concerning. In a 2021 survey by Reach3 Insights and Lenovo, 59% of women reported hiding their gender while playing games online in an effort to avoid harassment. Set against growing initiatives to make Web3 more inclusive and welcoming for women, these numbers are trending in the wrong direction.

While Meta responded to the virtual assault quite rapidly, its action generated a mixed response. To deter VR groping inside its virtual worlds, the company introduced the Personal Boundary feature for the Horizon series: an invisible “4-foot zone of personal space” that encircles each user’s avatar to prevent unwanted interactions.

According to Meta staff: “Personal Boundary builds upon our existing harassment measures that were already in place — for example, where an avatar’s hands would disappear if they encroached upon someone’s personal space.” Moreover, they’ve argued that having the Personal Boundary system on by default will “help to set behavioural norms” — a feature that will be “important for a relatively new medium like VR.”
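At its core, a feature like this amounts to a simple distance check between avatars. Below is a minimal, purely illustrative sketch in Python. Meta’s actual implementation is not public, so the names, the metric radius and the suppression logic here are all assumptions:

```python
import math
from dataclasses import dataclass

# Hypothetical sketch only: Meta has not published its implementation.
# Distances are in metres; a "4-foot zone" is roughly 1.2 m.
PERSONAL_BOUNDARY_RADIUS = 1.2

@dataclass
class Avatar:
    user_id: str
    x: float
    y: float
    z: float
    boundary_enabled: bool = True  # on by default, per Meta's stated design

def distance(a: Avatar, b: Avatar) -> float:
    """Straight-line distance between two avatars."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def allow_hand_interaction(actor: Avatar, target: Avatar) -> bool:
    """Suppress hand interactions once an actor crosses the target's boundary."""
    if not target.boundary_enabled:
        return True
    return distance(actor, target) > PERSONAL_BOUNDARY_RADIUS

# An avatar 0.5 m away is inside the boundary, so the interaction is
# blocked (in Horizon, the encroaching avatar's hands would disappear).
alice = Avatar("alice", 0.0, 0.0, 0.0)
bob = Avatar("bob", 0.5, 0.0, 0.0)
print(allow_hand_interaction(bob, alice))  # False
```

The interesting design question isn’t the geometry but the default: by shipping the boundary switched on for everyone, Meta is betting that opt-out safety, rather than opt-in, is what establishes behavioural norms in a new medium.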

Will we see similar personal space boundaries envelop our metaverse avatars in all of our future online journeys? Will there be an increased need for women to adopt them while participating in online activities? 

Right now, it’s hard to tell — and even Meta’s representatives aren’t entirely sure if their latest solution is totally foolproof. According to Andrew Bosworth, Meta’s VP of AR and VR, moderating the “toxic environment” in a metaverse space “at any meaningful scale is practically impossible.” But while there may not be a magical answer, it’s becoming clear that — at the very least — new safety protocols will need to be outlined and evaluated as they are adapted to the conditions of Web3.

Can we keep explicit content age-restricted in the metaverse?

In any case, ensuring that the web is safe for younger audiences will always be paramount. Children born today will never have known a world without the internet or social networks, meaning that the likelihood that they’ll encounter something inappropriate will certainly increase as they become more active online.

Studies have shown that 56% of children aged 11 to 16 have viewed explicit material online, while one-third of British children have encountered sexist, racist or discriminatory content at some point in their lives. Inappropriate materials that children have reported encountering include pornography, explicit language, racist, sexist or violent imagery, and unmoderated discourse.

Roblox, currently one of the most popular children’s games in the world, has been described as a “primitive metaverse” akin to decentralised Web3 platforms — namely for its immersive gaming experiences, robust community and space for users to submit and generate their own content. The gaming giant also recently came under fire for failing to regulate a plethora of games hosted on its platform. In spaces code-named “condos”, pint-sized avatars could be found participating in sex acts and exchanging sexually explicit dialogue.


Recent reports have also accused gaming platform VRChat — an application with a minimum age rating of 13 — of giving all users open access to “metaverse strip clubs”. While posing as a 13-year-old girl, a BBC researcher alleged being subjected to sexual materials, racist insults, instances of grooming and even rape threats. Because the experience is more immersive, the researcher also noted the capacity for users to act out sex acts in front of other users’ avatars.

Like Meta, Roblox has since outlined a plan to enhance safety for its user community. Big Tech companies appear to be racing to build metaverse spaces that will follow a set of strict guidelines — especially as they become increasingly more immersive and lifelike. Will decentralised platforms be able to achieve the same, or have the “condos” of Roblox given us an omen for how difficult these new spaces will be to police?

Just how harmful could a misguided metaverse be?

Given that the metaverse will allow such a wide range of interpersonal interactions, it’s only logical to assume that not all actions or expressions will be positive. 

Dr. Liraz Margalit, a digital psychologist who studies online behaviour, asserts that — as many already do on the internet — people will find ways to behave differently in the metaverse than they would in real life. Remarking on the dangers of future metaverse interaction, she has claimed: “You have the anonymity and you have the disinhibition effect. [Platforms can] provide you with the playground to do anything you want.”

We’re also faced with a significant question — is sexual harassment in the VR world still a form of assault? Should all metaverse platforms consider invisible “shields” or boundaries to deter the invasion of a user’s personal space? According to experts, sexual harassment in VR is indeed a form of assault — with “groping” or virtual coercion still defined as offences, even when no physical contact is involved.

Katherine Cross, a PhD student researching online harassment at the University of Washington, has defined it well: “At the end of the day, the nature of virtual-reality spaces is such that it is designed to trick the user into thinking they are physically in a certain space, that their every bodily action is occurring in a 3D environment.” As a result, these incidents are “likely to produce similar emotional and psychological reactions as occurrences of assault in real life.”

Moreover, we know that harmful online content can have a wide-reaching impact in the real world. Of course, where lines are drawn is largely dependent on laws, norms and expectations of particular users and platforms. However, there’s still no denying that any form of hate speech, harassment and misinformation can lead to greater risks in the offline world — such as the potential for targeted violence, social or political consequences and emotional damage.

What are some proposed safety solutions?

In order to create safer and more welcoming environments, metaverse platforms will need to ensure they equip their spaces with moderation tools that will prevent and discourage misuse. It’s also becoming clearer that there is a need for policymakers to begin tailoring internet safety laws so that they can better meet the growing needs of Web3. But in a decentralised internet no longer moderated by Big Tech platforms, how can this be achieved? 

According to the NSPCC (National Society for the Prevention of Cruelty to Children), “improvements in online safety are a matter of urgency.” And while the risks associated with VR and the metaverse haven’t yet been outlined in the UK’s upcoming Online Safety Bill, Culture Secretary Nadine Dorries has stated that the legislation will begin covering these new technologies. When passed, the bill will impose stricter mandates on what platforms and providers can share — with a primary goal of protecting children from explicit content.

A recent report on metaverse content moderation from ITIF has also emphasised the importance of third-party platforms mediating channels where immersive activity will take place:

“Without proper consideration for these shifting parameters of speech in immersive spaces, content moderation approaches — and the policies that restrict them — could have a chilling effect on individual expression or allow harmful speech to proliferate.”

ITIF suggests that policymakers should work with industry leaders to “mitigate the greatest potential harms from immersive content.” However, it’s also critical that all platforms — centralised or decentralised — are armed with the necessary tools and knowledge to establish proper content moderation approaches that will protect users from harm. 

One solution is for platforms to implement protections (such as established community guidelines) against real-world harms that could stem from activities in the metaverse — including non-consensual pornography, fraud, child endangerment and defamatory content. Another is the creation of working groups to provide guidance on intellectual property and copyright protections, to “promote innovation, fair compensation and creative expression in immersive experiences.”

In decentralised spaces, owners should also consider establishing voluntary guidelines that will encourage users to “identify, respond to and report on harmful content and content moderation activities.”

In Web3, decentralised platforms should ultimately find ways to harmonise safety and privacy by implementing user controls that let individuals shape their own experiences to meet their needs and expectations. Age-gating controls could be put in place for underage users, while adult users could one day be given their own set of controls (such as how wide a shield they’d like around their avatar, or which content filters they’d like to enable), letting them decide for themselves whether to engage with certain types of objects or environments.
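To make that concrete, here is a rough, hypothetical sketch of what such a per-user settings object could look like. Everything in it (the field names, defaults and gating rules) is an assumption made for the sake of illustration, not any platform’s real API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of per-user safety controls in a shared space.
# All names, defaults and rules here are assumptions, not a real API.

@dataclass
class SafetySettings:
    is_minor: bool = False             # set via age verification at sign-up
    boundary_radius_m: float = 1.2     # adults may widen or shrink their shield
    content_filters: set = field(default_factory=lambda: {"explicit", "hate_speech"})

    def can_access(self, content_tags: set) -> bool:
        """Age-gate minors out of adult content; otherwise apply the user's own filters."""
        if self.is_minor and "adult" in content_tags:
            return False  # age-gating is not user-overridable
        return not (content_tags & self.content_filters)

# An adult who keeps the default filters but widens their personal boundary:
adult = SafetySettings(boundary_radius_m=2.0)
print(adult.can_access({"explicit"}))  # False: blocked by the user's own filter
print(adult.can_access({"social"}))    # True

# A minor is blocked from adult spaces regardless of their filter choices:
minor = SafetySettings(is_minor=True, content_filters=set())
print(minor.can_access({"adult"}))     # False
```

The key distinction the sketch tries to capture is that some protections (age-gating) would be mandatory, while others (boundary size, content filters) would sit entirely in the user’s hands.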

Final thoughts

Throughout the internet’s lifespan, online platforms have continued to innovate to meet the cultural and societal requirements of users. Over the years, however, platforms have also struggled to provide a balance of established rules, content moderation, user privacy and individual user controls. Like other digital platforms, the metaverse will inherit many of these challenges — and as our world continues to explore a more immersive future, many of them will need to be reevaluated.

With that being said, it’s also become abundantly clear that many of us are not looking for a regulatory framework like Facebook’s to govern us any longer. Instead, Web3 should be a place where we can learn from both Web1 and Web2’s mistakes — where users, developers and policymakers can evaluate the lessons we’ve learned and build spaces that are safer, more open and more equitable than ever before.

In a decentralised internet, it’s important that we also try to build spaces that encourage, rather than force, approaches to keeping users safe. It’s hoped that users and developers will continue to be educated, that industry standards will continue to be revised and that effective self-regulation frameworks will continue to be built.
