NVIDIA's new AI model makes it easier to create 3D characters and virtual environments.

NVIDIA’s new AI model is designed to make developing 3D virtual environments easier. The company claims GET3D can produce a wide variety of 3D content, including characters, buildings, cars, and more. Speed matters too: according to NVIDIA, GET3D can generate about 20 objects per second on a single GPU.

Researchers trained the model on computer-generated 2D images of 3D shapes rendered from various angles. According to NVIDIA, using A100 Tensor Core GPUs, feeding about one million images through GET3D took just two days from start to finish.


According to a blog post by NVIDIA’s Isha Salian, the model can generate results with “high-fidelity textures and sophisticated geometric elements.” GET3D’s output “is in the form of a triangle mesh, like a papier-mâché model, covered with a textured material,” Salian explained.

Objects generated by GET3D can be imported into game engines, 3D modeling software, and film renderers for further editing. That could make it much simpler for developers to build elaborate virtual environments for games and the metaverse. NVIDIA also mentioned robotics and building design as additional applications.
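To make the triangle-mesh output format concrete, here is a minimal sketch of serializing a textured triangle mesh to Wavefront OBJ text, a common interchange format that game engines and 3D modelers can import. The mesh data (a single textured quad split into two triangles) is purely illustrative and is not GET3D output; the function name is hypothetical.

```python
def mesh_to_obj(vertices, uvs, faces):
    """Serialize a triangle mesh to OBJ text. Each face corner is a
    (vertex_index, uv_index) pair; OBJ indices are 1-based, so we
    offset on write."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"vt {u} {v}" for u, v in uvs]
    for tri in faces:
        corners = " ".join(f"{vi + 1}/{ti + 1}" for vi, ti in tri)
        lines.append(f"f {corners}")
    return "\n".join(lines) + "\n"

# A unit quad in the XY plane, split into two triangles with UV coordinates.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
uvs = [(0, 0), (1, 0), (1, 1), (0, 1)]
faces = [
    [(0, 0), (1, 1), (2, 2)],
    [(0, 0), (2, 2), (3, 3)],
]

obj_text = mesh_to_obj(vertices, uvs, faces)
print(obj_text)
```

A file written this way (plus an accompanying material/texture, which OBJ references separately) is the kind of asset a tool like Blender or a game engine can load directly, which is what makes a mesh-based output format convenient for downstream editing.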

The company said GET3D could generate sedans, trucks, race cars, and vans after training on a dataset of car images. Trained on animal pictures, it can likewise produce foxes, rhinos, horses, and bears. As NVIDIA puts it, "the larger and more diverse the training set," the "more varied and detailed the output."

Another NVIDIA AI tool, StyleGAN-NADA, lets users give an object a new look simply by typing a text prompt. A demonstration video shows the technology giving an animal tiger stripes or making a car look burned.

The GET3D team at NVIDIA Research expects that future versions could be trained on real-world photos rather than synthetic data. The model also need not be limited to a single class of objects: it could likely be trained on a wide variety of 3D shapes simultaneously.
