Visual 1st Perspectives
April 6, 2022
Our takeaways from “The metaverse – what’s in it for me;
what’s in it from me?” Visual 1st Spotlight, March 15

The Metaverse: What is it?

As a first order of business our expert panel described their visions for the metaverse, starting with Justin Melillo, whose broad definition was then further fleshed out by the other panelists:
  • Justin Melillo (Mona): the metaverse is a persistent and interconnected world of virtual environments that serves as a gateway to various online experiences and also underpins the physical world. For us, the metaverse is an open one, i.e., it is decentralized and owned by the actual users participating in it rather than by a single vendor.
  • Eric Cheng (Meta): one centerpiece of the metaverse is embodiment, which gives users the experiential perspective of being present. Instead of looking at a screen with faces, in the metaverse it feels as if we're right there in a space with these people.
  • Alexis Khouri (Adobe): the metaverse is primarily an evolution of websites. Websites started with text, then images, then video, then spatial anchoring. A full metaverse will require massive infrastructure developments, so initially many metaverse experiences will be hybrid 2D/3D experiences on flat screens.
  • Kirby Winfield (Ascend): the metaverse is all of the above, but ultimately it is all about presence and immersive communication.

The “Full VR” vision

  • A fully realized vision of the metaverse requires not only 360 visuals, but also depth-aware content and embodiment provided by high-fidelity VR headsets. 
  • While 360 visuals and depth-aware content already enable the creation of “browser-based VR” metaverses, higher-fidelity headsets generate a more immersive experience. In particular, headsets that offer 6 degrees of freedom (DoF) enable users not only to look around a scene with 3 rotational degrees of freedom (turning the head left/right, tilting it up/down, tilting it side to side), but also to move through three-dimensional space with 3 translational degrees of freedom (forward/backward, left/right, up/down).
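To make the 3-DoF vs. 6-DoF distinction concrete, here is a minimal Python sketch (the `HeadPose` class and function names are our own illustration, not any headset vendor’s API). A 3-DoF headset only updates the three rotation values; a 6-DoF headset also updates the three position values, which is what lets a user physically walk through a scene:

```python
import math
from dataclasses import dataclass

@dataclass
class HeadPose:
    # 3 rotational DoF (radians): yaw (turn left/right),
    # pitch (tilt up/down), roll (tilt side to side)
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    # 3 translational DoF (meters): x (left/right),
    # y (up/down), z (forward/backward)
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def forward_vector(pose: HeadPose) -> tuple:
    """Direction the user is looking, derived from yaw and pitch only
    (roll spins around the view axis and does not change it)."""
    cp = math.cos(pose.pitch)
    return (math.sin(pose.yaw) * cp,
            math.sin(pose.pitch),
            math.cos(pose.yaw) * cp)

def walk_forward(pose: HeadPose, dist: float) -> HeadPose:
    """A 6-DoF move: translate the pose along its current view
    direction. A 3-DoF headset cannot express this at all."""
    fx, fy, fz = forward_vector(pose)
    return HeadPose(pose.yaw, pose.pitch, pose.roll,
                    pose.x + dist * fx,
                    pose.y + dist * fy,
                    pose.z + dist * fz)
```

For example, a user facing straight ahead (all angles zero) who steps one meter forward ends up at z = 1.0 with an unchanged orientation.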

The Metaverse: What’s already out there?

While the metaverse promises a transformative experience for the future, it’s important to note that real-life implementations already exist and address here-and-now use cases. These include:
  • “Full VR” (360 + depth + embodiment) implementations, requiring headsets: games (primarily CGI), fitness, enterprise applications (such as training or on-site virtual guidance), and documentaries that let you experience what you normally can’t in real life (free climbing, space exploration).
  • “Browser-based VR” (360 + flat) implementations: available on Mona Gallery, Echo3D, and Obsess’ virtual stores; Everdome and ip.labs have solutions in development.

Where do we stand today?

1) 360 field of vision: 
  • On the capture side, a broad variety of 360 cameras are available today, ranging from inexpensive hand-held cameras with built-in stitching to expensive multi-camera rigs.
  • In addition, various smartphone apps enable users to capture 360 footage by moving their smartphone camera in prescribed arcs and stitching the imagery inside these apps.
  • On the creation side, 3D authoring programs, such as Adobe’s Substance 3D applications, enable users to create scenes populated with synthetic, photorealistic 3D objects.
  • On the display side, today’s browsers enable users to navigate through 360 scenes by using their mouse or keyboard without the need to download plug-ins.

2) Realistic depth perception: 
  • On the capture side, stereoscopic cameras are still few and far between, in particular for the consumer market; high-end solutions are expensive and require complicated setups.
  • On the display side, VR headsets that provide immersive depth visuals are proliferating but their usage is still mostly among gamers at this point.

3) Immersion with embodiment:
  • Many of today’s VR headsets provide 6-DoF tracking, which mirrors the user’s movements inside VR environments. But, as mentioned above, VR headset adoption beyond gaming is still limited.

How to bridge the gap to “full VR”?

Our panelists provided several ideas to bridge the chasm separating us from “full VR.” Among those:

  • Developers could build AI solutions that leverage photogrammetric depth data when developing depth-aware experiences. 
  • Developers could also leverage depth data of imagery captured by smartphones with depth sensors, such as the more recent iPhone models, assuming that smartphone vendors provide programmatic access to this data.
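As an illustration of what leveraging per-pixel depth data can mean in practice, here is a minimal Python sketch that back-projects a depth map into a 3D point cloud using the standard pinhole camera model. The function name and parameters are hypothetical; real smartphone APIs expose depth maps and camera intrinsics through their own interfaces:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3D points in camera
    space (pinhole camera model).

    depth[v][u] -- distance along the optical axis, in meters
    fx, fy      -- focal lengths, in pixels
    (cx, cy)    -- principal point, in pixels
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # zero or negative = no depth reading
                continue
            # Invert the pinhole projection: pixel + depth -> 3D point
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

A point cloud like this is the raw material for the depth-aware experiences the panelists describe: once imagery carries geometry rather than just color, it can be re-rendered from new viewpoints inside a VR scene.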

What’s driving things forward?

  • Until recently, it has been virtually impossible to make meaningful use of raw programmatic depth data for images. However, as AI toolkits are getting more powerful and easier to implement, developers are now in a position to more easily use AI to extract meaningful image depth information for VR applications.
  • If AI and/or other software technologies can reduce the complexity of creating 360 and depth imagery, it’s fair to assume that photographers and videographers will enjoy using their creative skills to produce photo-realistic 360 and depth-aware visuals.
  • This follows a well-established pattern of creatives adopting new technologies that open new perspectives for them, as was the case with digital cameras making things easier than film cameras, and with smartphones that feature built-in image enhancement and editing capabilities. 
  • It’s noteworthy that new capabilities are often used in unexpected ways, such as the emerging trend involving capturing photos and videos inside the metaverse (as opposed to capturing them in the physical world and bringing them into the metaverse).
  • Headset vendors are experimenting with developing Mixed Reality (MR) headsets for specific use cases where it’s not possible or advisable for users to be completely shut off from their surroundings – as they would be wearing a full VR headset. We look forward to seeing a broader variety of headsets coming to market that address specific use cases and make different tradeoffs between immersiveness and accessibility.
  • Driven by the availability of AI technology platforms, AI-generated visual content is on the rise: synthetic imagery has become easier to generate, not just in 2D but also in 3D, thus generating cost savings and adding creative possibilities. 


Resources

Contacting speakers: 

Immersive Media Lead, Meta Reality Labs

Sr. Director of Business Development and Strategy, 3D & Immersive

Co-founder, CEO

Founding General Partner

Show & Tell presenters:





Best,

Hans Hartman
And a few more things...
Profoto & StyleShoots. Acquisition. 2020 Visual 1st Best of Show award winner Profoto acquires StyleShoots for €18M. StyleShoots offers hardware and software automation tools for ecommerce studios, typically operated by ecommerce platform providers and retailers. Profoto offers lighting products for professional photographers and has embarked on a strategy to expand into adjacent markets. 

Amaze & Ozone Metaverse. Partnership. Ecommerce design platform Amaze, presenters at last year’s Visual 1st conference, announces a partnership with Ozone Metaverse with the goal to enable retailers to build virtual shopping experiences cost effectively, and at scale.

CEWE. Gloves and ties are off. Before Christian Friege’s appointment as CEO, male workers were required to wear (CEWE-red) ties at any public appearance. Friege abandoned this policy early in his tenure. Today not only the ties but apparently also the gloves have come off. In a rare moment of disclosure, the board of trustees of CEWE Color claimed Friege “massively obstructed” diversity and the appointment of a woman as a board member, which reportedly led to his contract not being renewed.
But … a spokesperson for CEWE told Reuters tersely that the management board Friege chairs does not make management board appointments and that four women had been promoted to executive leadership positions in the past few years. 

Mojo Vision. AR contact lenses on their way. We’ve seen it so many times in action movies, you’d assume it’s already feasible in real life: the secret agent wearing contacts that show them vital info in their peripheral vision. Having an image sensor embedded in a contact lens is one thing; providing complete AR functionality is a whole different ballgame. And then there’s the pesky little question of how to include an onboard battery... A 1.0 product is still a ways off, but the investment community is betting on Mojo Vision being able to pull it off, having funded the startup with $205M to date.

Snapbar. Headshots going virtual. 2020 Visual 1st Special Recognition award winner Snapbar announces Virtual Headshots, a platform that enables companies to create consistent-quality employee headshots, while automating the editing and formatting of headshots taken either on-site or remotely. 
