As we enter a new year, it’s time for our annual ritual of synthesizing lessons from recent history and formulating an outlook for the near term. The past year has been action-packed for spatial computing as the world gradually emerges from the grip of the pandemic.
The past year was also marked by the emergence of metaverse mania. Though the concept has legitimate principles and promise, the term has been muddied through overuse. It has also been overhyped in terms of the timing of its arrival: a fully actualized metaverse is decades away.
Beyond the metaverse, AR and VR continue to be defined by steady progress in several areas. We’re talking about mobile AR engagement and monetization; AR marketing and commerce; continued R&D in AR glasses; enterprise adoption; and the gradual march of consumer VR.
So where is spatial computing now, and where is it headed? What’s the trajectory of the above subsegments? This was the topic of a report from our research arm, ARtillery Intelligence.
Beyond the Physical State
Last week, we looked at Niantic’s Lightship, which surfaced a key concept: the real-world metaverse. This is one of the metaverse “tracks” we’ve examined. Beyond fully online/virtual worlds, it involves geo-anchored data that enables AR devices to evoke relevant digital content.
This metaverse track could be truer to the term itself: the Greek root meta means “beyond.” We’re talking about applying technologies like AR to add a dimension to places and things that goes beyond their physical state. That could prove more meaningful than fully virtual experiences.
As further background for this “real-world metaverse,” one of AR’s foundational principles is to fuse the digital and the physical. The real world is a key part of that formula, and real-world relevance is often tied to location – everything from real estate to commerce to social interaction.
To that end, one of AR’s battlegrounds will be in augmenting the world in location-relevant ways. That could be wayfinding with Google Live View, or visual search with Google Lens. It’s about pointing your phone (or future glasses) at places and things to identify and contextualize them.
As you can tell from these examples, Google has a key stake in this vision. It’s driven to future-proof its core search business, given Gen Z’s affinity for the camera. As also seen in Snap Scan, visual content joins text and voice as a search input, making the world shoppable.
And Google is well-positioned, given existing assets. For example, it utilizes imagery from Street View as a visual database for object recognition so that AR devices can localize themselves. That forms the basis for its storefront recognition in Google Lens and urban navigation in Live View.
But Google isn’t alone. Apple signals interest in location-relevant AR through its geo-anchors, which embody AR’s location-based underpinnings by letting users plant and discover spatially anchored graphics. And Apple’s continued efforts to map the world in 3D will factor in.
Meanwhile, Meta is building “Live Maps.” As explained by Meta Reality Labs’ chief scientist Michael Abrash, this involves indexes (geometry) and ontologies (meaning) of the physical world. It will be the data backbone for Meta’s AR ambitions and broader metaverse play.
Then there’s Snapchat, the reigning champ of consumer AR. Long propelled by selfie lenses, Snap is now flipping the focus to the rear-facing camera to augment the broader canvas of the physical world. This is the thinking behind its latest Spectacles.
Beyond tech giants, several smaller players are filling in the gaps for the real-world metaverse. These include Darabase, Resonai, YouAR, Gowalla, Foursquare, 6D.ai (acquired by Niantic), Scape Technologies (acquired by Meta), and ARWay (acquired by NexTech*).
This competition to define the real-world metaverse will heat up in the coming months and years. Just as Google created massive value by indexing the web, the opportunity now is to index the physical world. That’s the foundation for adding digital dimension to the places and things around us.