Virtual and augmented reality are deeply humanizing technologies, empowering users to experience digital content in the most natural form possible. At the same time, there are major privacy concerns, as metaverse platforms could easily track and profile users at levels that go far beyond any current consumer technologies.
So what can we do to protect our privacy in the metaverse?
The image below shows me standing in a “Virtual Escape Room” created by researchers at U.C. Berkeley’s Center for Responsible Decentralized Intelligence. The simulated world requires me to complete a series of tasks, each one unlocking a door. My goal is to move from virtual room to virtual room by solving puzzles that involve creative thinking, memory skills, and physical movements, all naturally integrated into the experience.
I am proud to say I made it out of the virtual labyrinth and back to reality. Of course, this was created by a research lab, so you might expect the experience was more than it seemed. And you’d be right — it was designed to demonstrate the significant privacy concerns in the metaverse. It turns out, while I was solving the puzzles, moving from room to room, the researchers were using my actions and reactions to determine a wide range of information about me. I’m talking about deeply personal data that any third party could have ascertained from my participation in a simple virtual application.
As someone who’s been involved in virtual and augmented reality for decades, and who’s been warning about the hidden dangers for years, you’d think the data collected wouldn’t have surprised me. But you’d be wrong. It’s one thing to warn about the risks in the abstract — it’s something else to experience the privacy issues firsthand. It was quite shocking, actually.
That said, let’s get into the personal data they were able to glean from my short experience in the escape room. First, they were able to pinpoint my location. As described in a recent paper about this research, metaverse applications generally ping multiple servers, which enabled the researchers to quickly predict my location using a process called multilateration. Even if I had been using a VPN to hide my IP address, this technique would still have found where I was. This isn’t shocking, as most people expect their location is known when they connect online, but it is a privacy concern nonetheless.
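To make the idea concrete, here is a minimal sketch of multilateration in two dimensions. The server positions and distance estimates are hypothetical placeholders; in practice the distances would be inferred from round-trip ping times, with far more noise, but the geometry is the same.

```python
import numpy as np

# Hypothetical landmark servers with known 2-D positions (km on a map)
# and distances estimated from round-trip ping times. Real measurements
# are noisy, but even rough distances narrow a user's location a lot.
servers = np.array([
    [0.0,   0.0],    # server A
    [800.0, 100.0],  # server B
    [300.0, 900.0],  # server C
])
distances = np.array([412.0, 505.0, 610.0])  # estimated km to each server

def multilaterate(anchors, dists):
    """Least-squares position estimate from distances to known anchors.

    Subtracting the first sphere equation from the rest linearizes
    (x - xi)^2 + (y - yi)^2 = di^2 into A @ p = b, solvable directly.
    """
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

print(multilaterate(servers, distances))  # approximate user coordinates
```

Note that the measurement here is timing to multiple known servers, not the IP address itself, which is why a VPN that only masks the address doesn’t defeat it.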
Going deeper, the researchers were able to use my interactions in the escape room to predict my height, the length of my arms (wingspan), my handedness, my age, my gender, and basic parameters about my physical fitness level, including how low I could crouch down and how quickly I could react to stimuli. They were also able to determine my visual acuity, whether or not I was colorblind, the size of the room that I was interacting from, and basic assessments about my cognitive acuity. The researchers could have even predicted whether I had certain disabilities.
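It’s worth pausing on how little raw data this requires. A VR system continuously streams the 3-D positions of the headset and hand controllers, and simple statistics over that stream expose body parameters. The sketch below illustrates the general idea only; it is not the researchers’ actual method, and the frame format, field names, and heuristics are my own assumptions.

```python
import numpy as np

def profile_user(frames):
    """Infer basic physical traits from a stream of VR tracking frames.

    Each frame is assumed to be a dict of 3-D positions in meters, e.g.
    {"head": (x, y, z), "left": (x, y, z), "right": (x, y, z)},
    with y as the vertical axis. All heuristics here are illustrative.
    """
    heads = np.array([f["head"] for f in frames])
    lefts = np.array([f["left"] for f in frames])
    rights = np.array([f["right"] for f in frames])

    # Standing headset height approximates eye level; add ~10 cm for stature.
    standing = np.percentile(heads[:, 1], 95)
    height_m = standing + 0.10

    # Wingspan: widest observed distance between the two hand controllers.
    wingspan_m = np.max(np.linalg.norm(lefts - rights, axis=1))

    # Handedness: which controller travels farther during interactions.
    left_travel = np.sum(np.linalg.norm(np.diff(lefts, axis=0), axis=1))
    right_travel = np.sum(np.linalg.norm(np.diff(rights, axis=0), axis=1))
    handedness = "right" if right_travel > left_travel else "left"

    # Crouch depth: how far the headset drops below standing height.
    crouch_m = standing - np.min(heads[:, 1])

    return {"height_m": round(float(height_m), 2),
            "wingspan_m": round(float(wingspan_m), 2),
            "handedness": handedness,
            "crouch_depth_m": round(float(crouch_m), 2)}
```

Parameters like height, wingspan, crouch depth, and reaction time correlate strongly with age, gender, and fitness, which is how a few minutes of puzzle-solving can become a demographic profile.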
It’s important to point out that the researchers used standard hardware and software to implement this series of tests, emulating the capabilities that a typical application developer could employ when building a virtual experience in the metaverse. It’s also important to point out that consumers currently have no way to defend against this — there is no “privacy mode” in the metaverse that conceals this information and protects the user against this type of evaluation.
Well, there wasn’t any protection until the Berkeley researchers began building one — a software tool they call “MetaGuard” that can be installed on standard VR systems. As described in a recent paper from lead researchers Vivek Nair and Gonzalo Garrido of U.C. Berkeley, the tool can mask many of the parameters that were used to profile my physical characteristics in the metaverse. It works by cleverly injecting randomized offsets into the data stream, hiding physical parameters such as my height, wingspan, and physical mobility, which otherwise could be used to predict age, gender, and health characteristics.
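Conceptually, the defense is straightforward: perturb the tracking data before any application sees it. The snippet below is a minimal sketch of that idea under my own simplifying assumptions; it is not MetaGuard’s actual code, and the offset magnitudes are placeholders. (MetaGuard itself frames the noise in terms of local differential privacy with bounded randomness.)

```python
import random

class TrackingPrivacyFilter:
    """Sketch of a MetaGuard-style defense: add persistent random offsets
    to tracked body parameters before they reach the application.

    The offsets are drawn once per session, so motion still looks smooth
    and self-consistent, but absolute quantities (height, reach) are
    falsified. Magnitudes are illustrative assumptions, not MetaGuard's.
    """

    def __init__(self, height_jitter_m=0.08, reach_scale_range=(0.93, 1.07)):
        # Sampled once per session: consistent within, unlinkable across.
        self.height_offset = random.uniform(-height_jitter_m, height_jitter_m)
        self.reach_scale = random.uniform(*reach_scale_range)

    def filter_frame(self, frame):
        """Apply the session offsets to one tracking frame {device: (x, y, z)}."""
        out = {}
        for device, (x, y, z) in frame.items():
            out[device] = (x * self.reach_scale,    # stretch/shrink reach
                           y + self.height_offset,  # shift apparent height
                           z * self.reach_scale)
        return out

# Usage: wrap the raw tracker output before handing it to an application.
f = TrackingPrivacyFilter()
safe = f.filter_frame({"head": (0.0, 1.72, 0.0), "right": (0.45, 1.40, 0.2)})
```

Because the offsets are fixed within a session, gameplay feels normal to the user, but any profile an application builds from the stream is systematically wrong.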
The free software tool also enables users to mask their handedness, the frequency range of their voice, and their physical fitness level, and to conceal their geospatial location by disrupting the multilateration technique described above. Of course, MetaGuard is just a first step in helping users protect their privacy in immersive worlds, but it’s an important demonstration, showing that consumer-level defenses could easily be deployed.
At the same time, policymakers should consider protecting basic Immersive Rights for users around the globe, guarding against invasive tracking and profiling. For example, Meta recently announced that its next VR headset will include face and eye tracking. While these new capabilities are likely to unlock very useful features in the metaverse, for example enabling avatars to display more realistic facial expressions, the same data could also be used to track and profile user emotions.
This could enable platforms to build predictive models that anticipate how individual users will react to a wide range of circumstances, enabling adaptive advertisements that are optimized for persuasion. Until now, such ads in the metaverse have been theoretical, but just this month Roblox (which boasts over 50 million daily active users, nearly all of them kids) announced it will begin “immersive advertising” in 2023. If a company focused on kids is headed in this direction, we can guess that most major platforms will follow unless policymakers put restrictions in place.
Without regulation, we need to worry that immersive advertising could cross the line from marketing to manipulation. It could be used to push products or services through predatory means or, worse, to drive misinformation more efficiently than any current technology. As I discussed with POLITICO last week, an unregulated metaverse could become the most dangerous tool of persuasion humanity has ever created.
Don’t get me wrong — I firmly believe the metaverse has the potential to be a very positive technology for humanity. That’s why I have been pushing for immersive worlds for over 30 years. At the same time, the extensive data collected by virtual and augmented platforms is a major concern and requires a wide range of solutions, from protective tools like MetaGuard to thoughtful and meaningful metaverse policy and regulation.
— Note: this article originally appeared in VentureBeat.