I developed the first interactive augmented reality system three decades ago while working at the Air Force Research Laboratory, allowing users to touch and interact with a blended world of real and virtual objects. I was so inspired by people’s responses to the early prototypes that in 1993 I launched one of the first VR firms, Immersion Corp, as well as an early AR technology firm named Outland Research. Yes, I’ve been a long-time believer in the metaverse.
At the same time, I’ve been a long-time critic of the sector, especially regarding the risks that AR and VR pose to society and their potential to be used in tandem with artificial intelligence. It’s not the technology I’m afraid of; rather, it’s the possibility that huge businesses will use metaverse infrastructure to monitor and influence people in ways that make social media seem outdated.
These systems represent a completely new level of monitoring: they will track not only what you click on, but where you go, what you do, what you look at, and how long your gaze lingers. They’ll also monitor your facial expressions, vocal inflections, and vital signs (as recorded by your smartwatch), all while sophisticated algorithms infer your emotional state.
The major platforms will be able to take a deep look at your actions and reactions, predicting how you’ll respond. This level of intrusion may seem like science fiction, but I know that unless we demand strong regulation, this is where we’re headed.
But what, precisely, should we limit?
First, we must limit the amount of monitoring that is permissible in the metaverse. Our movements, interests, conversations, and interactions will all be recorded by the platform operators. They should not be allowed to keep this data for longer than necessary to run whatever experience is being simulated. This will make it harder for them to profile our behavior over time.
Furthermore, they should be required to disclose to the general public what is being tracked and how long it is kept. If your gaze is being monitored, for example, you need to be made aware of it. There should also be tight restrictions on what they can track and for what purposes. Transparency and restriction may seem like two separate issues, but they are inextricably linked.
For example, the general public should oppose advertising algorithms that monitor your facial expressions, vocal inflections, posture, and vital signs (including your heart rate, breathing rate, pupil dilation, blood pressure, and even your galvanic skin response). Unless carefully regulated, these deeply personal bodily reactions will be used to tailor marketing messages in real time.
Furthermore, we must regulate how third parties may influence us in the metaverse. Instead of blatant pop-up advertising, AR and VR will market products through subtle simulated augmentations that seem real. If a third party pays for a virtual product placement in your augmented environment, the platform should be required to notify you that it’s a targeted ad rather than a chance encounter.
When advertisers target consumers (whether in person or on the web), small but consistent behavioral changes occur. These subtle adjustments accumulate and contribute to significant long-term effects. This is especially true when third parties deploy simulated spokespeople (I call them SimGens). In the metaverse, you may be covertly targeted by individuals who look and act like any other user but are actually AI-controlled agents programmed to engage you in “promotional conversation.”
These AI avatars will be able to read and interpret your emotions and vocal inflections better than any used-car salesman, adapting in real time to your feelings. Even the way these simulated agents appear to you – their gender, hair color, eye color, wardrobe choice – will be designed by sophisticated algorithms that accurately anticipate which features are most likely to influence you personally.
We need to regulate this space, requiring third parties to inform us when we’re interacting with agenda-driven bots controlled by smart algorithms. This is especially essential if those algorithms are also monitoring our reactions – for example, assessing our posture, breathing, and even blood pressure – allowing conversational agents to adjust their messaging strategy in real time. Unless it is formally prohibited, this degree of interactive coaching will take place.
Some people argue that regulations would stifle innovation and that consumers can simply opt out of the metaverse if they don’t want to be tracked and profiled. I believe it is naïve to think that opting out will remain a viable option, because these platforms will become essential to how we access the world. This is especially true of augmented reality.
AR, in particular, will project important information all around us throughout our everyday activities. To demonstrate how deeply integrated AR will be in our lives within ten years, I’d refer you to Metaverse 2030, a short story I wrote that depicts the amazing potential of the AR metaverse as well as its invasive and overpowering underside.
Ultimately, the metaverse will enable incredible applications that improve our lives and expand what it means to be human. At the same time, there are genuine risks we must avoid. The most effective way to harness the potential while minimizing the risks is thoughtful, firm regulation. And we need to start now, before the issues become so entrenched in infrastructure and business models that they’re impossible to undo.
The alternative is to live in a metaverse that looks and feels genuine while, behind the scenes, powerful businesses manipulate our experiences for profit without our knowledge. That isn’t the future we want, which is why regulation is both inevitable and urgent. The metaverse is on its way, whether you like it or not.