NVIDIA to showcase 3D MoMa at the Conference on Computer Vision and Pattern Recognition
NVIDIA is announcing a new extended reality (XR) design tool via its AI Research branch this week. The company’s latest tool is 3D MoMa, a service for creating rich digital twins of real-world objects.
During this week’s Conference on Computer Vision and Pattern Recognition, the firm is introducing the new technology and its accompanying research paper.
The Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF) are hosting the expo, which takes place June 21 to 24.
This year’s conference uses keynotes, workshops, and short courses to introduce emerging computer vision and immersive XR technology.
Using 3D MoMa
NVIDIA designed 3D MoMa to streamline XR content creation by turning real-world objects into editable digital twins.
MoMa employs inverse rendering to streamline digitization: a designer photographs an object from various angles, and the tool reconstructs a rich mesh that reacts realistically to digital physics and lighting.
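NVIDIA has not published 3D MoMa's internals, but the inverse-rendering idea it describes can be illustrated with a toy example: guess a scene property, render it, compare against the photos, and adjust the guess by gradient descent. The sketch below (all names and values are hypothetical) recovers a single surface albedo; real systems optimize full triangle meshes, materials, and lighting with differentiable rasterizers.

```python
import numpy as np

# Toy inverse rendering: recover an unknown surface albedo by gradient
# descent so that rendered pixel values match the observed "photos".
rng = np.random.default_rng(0)

true_albedo = 0.7                      # unknown scene property to recover
lighting = rng.uniform(0.5, 1.5, 32)   # known light intensity per photo
photos = true_albedo * lighting        # simple Lambertian-style observation

albedo = 0.1                           # initial guess
lr = 0.05
for _ in range(200):
    rendered = albedo * lighting       # forward render with current guess
    # gradient of the mean-squared pixel error with respect to albedo
    grad = 2.0 * np.mean((rendered - photos) * lighting)
    albedo -= lr * grad                # gradient step toward the photos

print(round(albedo, 3))  # converges to the true albedo, 0.7
```

The same loop structure scales up once the renderer itself is differentiable: every mesh vertex and material parameter simply becomes another value the gradient step can adjust.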
The service lets developers modify an object’s scale, texture, and lighting conditions before importing the real-time 3D (RT3D) asset into a digital setting or graphics engine.
Then a user can import the 3D object into an NVIDIA Omniverse-supported graphics suite such as Autodesk, Blender, Unreal Engine 5, or Adobe Substance 3D for further customization.
A MoMa user can then place their edited 3D object in a digital environment for gaming, filmmaking, and XR projects.
Omniverse Integration for Enterprise
Many international firms employ the Omniverse product to support Industry 4.0 requirements. The service combines many XR tools under one roof in a unified product.
For example, BMW, Ericsson, Siemens Energy, and Lockheed Martin use the NVIDIA Omniverse Enterprise edition to create physically accurate virtual replicas of equipment and places with integrated AI and synchronized real-world data. The service also accelerates the production of enterprise-grade 3D tools with no-code solutions and services.
In April, NVIDIA also introduced Instant Neural Radiance Fields (Instant NeRF), a tool that uses AI frameworks to reconstruct 3D scenes of people and places from 2D images almost instantly.
NVIDIA claims that the NeRF platform applies to various markets, including automotive, robotics, manufacturing, architecture, and entertainment.
Additionally, the firm explains how self-driving-car developers can use the NeRF platform to render digital environments for training an autonomous AI programme to navigate real-world spaces.