Nvidia unveils GANverse3D, an engine that turns 2D images into 3D models

Nvidia researchers today unveiled a 3D model of a revamped Knight Rider KITT car, produced from a single 2D image by a deep learning engine called GANverse3D. The model includes a mesh, textures, and the information needed to animate the object automatically.

GANverse3D

GANverse3D was developed by the Nvidia AI Research Lab in Toronto. The application inflates flat images into realistic 3D models that can be manipulated and visualized in virtual environments. The capability is aimed at architects, creators, game developers, and designers, letting them add new objects to their mockups without 3D-modeling expertise or a large budget.

For instance, an image of a car can be turned into a 3D model that can be driven around a virtual scene, complete with realistic headlights, taillights, and blinkers. The research team brought KITT into the 21st century by combining the new model with NVIDIA Omniverse. The announcement was made on Monday during CEO Jensen Huang's GTC keynote.

To build a training dataset, the researchers harnessed a generative adversarial network (GAN) to synthesize images depicting the same object from multiple viewpoints, much as a photographer walks around a parked vehicle to take shots from different angles. These images were then plugged into a rendering framework for inverse graphics, the process of inferring 3D mesh models from 2D images.
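The two-stage pipeline described above can be sketched in a few lines of Python. Note that every name here (`synthesize_views`, `infer_mesh`, `Mesh`) is an illustrative stand-in, not NVIDIA's actual GANverse3D API; the stubs only mimic the shape of the data flow.

```python
# Hypothetical sketch of the GAN-to-inverse-graphics pipeline.
# All function and class names are illustrative, not NVIDIA's API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Mesh:
    vertices: List[Tuple[int, int, int]]  # 3D points recovered by inverse graphics
    texture: str                          # placeholder for a texture map


def synthesize_views(image: str, n_views: int) -> List[str]:
    """Stand-in for the GAN stage: render the same object from n viewpoints."""
    return [f"{image}@azimuth={i * 360 // n_views}" for i in range(n_views)]


def infer_mesh(views: List[str]) -> Mesh:
    """Stand-in for the inverse-graphics stage: multi-view images -> 3D mesh."""
    # A real system would optimize vertex positions so that rendering the mesh
    # reproduces each view; here we just return a dummy unit cube.
    corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    return Mesh(vertices=corners, texture=f"learned-from-{len(views)}-views")


views = synthesize_views("car.png", n_views=8)
mesh = infer_mesh(views)
print(len(views), len(mesh.vertices))
```

The point of the sketch is the division of labor: the GAN replaces a photographer circling the object, and inverse graphics consumes those synthetic viewpoints to recover geometry.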

After training just once on multi-view images, GANverse3D needs only a single 2D image to predict a 3D mesh model. The mesh can then be paired with a 3D neural renderer, giving developers the control they need to customize objects.

Imported as an extension in the NVIDIA Omniverse platform, GANverse3D can recreate any 2D image in 3D.

This contrasts with previous inverse-graphics models, which relied primarily on 3D shapes as training data. The new system effectively converts a GAN into an efficient data generator.

According to Jean-Francois, a deep learning engineer at NVIDIA:

“Omniverse allows researchers to bring exciting, cutting-edge research directly to creators and end-users. Offering GANverse3D as an extension in Omniverse will help artists create richer virtual worlds for game development, city planning or even training new machine learning models.”
