Nvidia researchers have made a generative model that can create virtual environments using real-world videos from sources like YouTube, an approach to generating graphics that could have implications for the future of gaming and AI.
“It’s a new kind of rendering technology, where the input is basically just a sketch, a high-level representation of objects and how they are interacting in a virtual environment. Then the model actually takes care of the details, elaborating the textures, and the lighting, and so forth, in order to make a fully rendered image,” Nvidia VP of applied deep learning Bryan Catanzaro told VentureBeat in a phone interview.
The system was trained on video from Apolloscape, Baidu’s autonomous driving dataset. Sketches marking where objects such as trees, buildings, cars, and pedestrians should appear are fed into the model, which fills in the rendered detail.
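To make the pipeline Catanzaro describes concrete, here is a minimal, illustrative sketch of its input-to-output shape: a semantic label map marking where objects sit goes in, and a rendered RGB frame comes out. This is not Nvidia's actual model, which is a far larger network trained on real video; the class count, layer sizes, and class indices below are all assumptions for demonstration.

```python
# Illustrative only: a toy conditional generator, not Nvidia's model.
# The "sketch" input is a one-hot semantic label map marking where each
# object class (road, car, tree, ...) should appear; the network's job
# is to render it into an RGB frame, filling in texture and lighting.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # hypothetical label set size

class SketchToFrame(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Encoder: compress the label map into features.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to a full-resolution RGB image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, label_map):
        return self.decoder(self.encoder(label_map))

# Usage: a 256x256 semantic sketch in, a rendered 256x256 frame out.
sketch = torch.zeros(1, NUM_CLASSES, 256, 256)
sketch[:, 1, 96:192, 64:160] = 1.0  # mark a rectangle as class 1 ("car", say)
frame = SketchToFrame()(sketch)
print(frame.shape)  # torch.Size([1, 3, 256, 256])
```

In Nvidia's system, training against real footage such as the Apolloscape video is what teaches the model to elaborate realistic textures and lighting rather than the flat output a toy network like this would produce.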