Earlier this week, internet users took to Twitter to express their disbelief upon learning that CEO Jensen Huang did not deliver the entire keynote presentation at the NVIDIA GTC conference himself. Some tech enthusiasts tweeted that they had been “fooled” by NVIDIA.
Huang’s virtual replica addressed the audience for a good 14 seconds (from 1:02:41 to 1:02:55) while presenting the company’s CPU, which is designed for accelerated computing in the terabyte range. Last week, the graphics processing company revealed in a blog post that it had harnessed the power of Omniverse to pull off the stunt unnoticed.
Creating a virtual replica
Using Omniverse – a platform for connecting and describing virtual worlds – NVIDIA worked with content creation tools including Autodesk Maya and Substance Painter. These capabilities were further enhanced by technologies such as Universal Scene Description (USD), Material Definition Language (MDL), and NVIDIA RTX real-time ray tracing. Together, they helped NVIDIA create the photorealistic scene, physically accurate materials, and lighting.
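USD, mentioned above, is an open scene description format originally developed by Pixar: content tools export their assets as USD layers, and Omniverse composes those layers into one shared scene. As a rough illustration only (this is not taken from NVIDIA's actual scene files, and the prim and attribute names here are hypothetical), a minimal hand-written USD text layer describing a kitchen prop might look like:

```usda
#usda 1.0
(
    defaultPrim = "Kitchen"
)

def Xform "Kitchen"
{
    # A placeholder prop; real assets would be meshes exported from Maya
    def Sphere "Mug"
    {
        double radius = 4
        color3f[] primvars:displayColor = [(0.8, 0.2, 0.1)]
    }
}
```

Because every tool reads and writes the same format, an artist can edit the Maya export while someone else adjusts materials, and Omniverse resolves both into the composed scene.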
In this video, the NVIDIA team explains that to create a realistic clone of Huang’s background, they took hundreds of photos of the kitchen (a popular setting for the NVIDIA CEO’s talks since the pandemic began) and built a 3D model from them. The engineers went on to explain that they needed to make sure all 16,000 details and elements – including the kitchen’s Easter eggs – were in place.
The deep learning, graphics research, engineering, and creative teams at NVIDIA performed a full face and body scan to create Huang’s 3D model for his virtual replica. They then trained an artificial intelligence model to mimic his gestures and facial expressions, and “used some AI magic to make his clone realistic,” the blog post reads.
While the virtual replica of Huang caused a stir, albeit much later than expected, a closer look reveals the small details that Team NVIDIA overlooked (such as the rendering of his jacket), as shown in the image above.
The capabilities of Omniverse
NVIDIA has shown that its Omniverse can do more. Together with tools such as Foundry Nuke, Adobe Photoshop, Adobe Premiere, Autodesk Maya and Adobe After Effects, Omniverse can render complex machines and create realistic cinema environments.
- During the keynote itself, Huang took the audience inside the NVIDIA DGX Station A100 for a look. According to NVIDIA, the team converted the CAD model into a physically accurate virtual replica using the capabilities of Omniverse.
According to former journalist and NVIDIA chief blogger Brian Caulfield, a project like this typically takes months to complete and weeks to render. However, Omniverse made it possible for an animator to complete the animation and render it in less than a day.
- NVIDIA integrated PhysX, a staple of its gaming ecosystem, into Omniverse as a research effort. The Omniverse engineering and research teams re-rendered older PhysX demos in Omniverse, highlighting key PhysX technologies (think soft-body dynamics, vehicle dynamics, smoke, and fire). The result was realistic-looking effects that obeyed the laws of physics in real time.
- In addition, Omniverse is critical to NVIDIA’s self-driving car initiative, helping create environments for autonomous vehicle training. In collaboration with Mercedes, NVIDIA built DRIVE Sim (its simulation platform for AV development) on Omniverse to show how autonomous functions would work in the real world, enabling the team to test lighting, weather, and traffic conditions.
The icing on the cake, however, was the creation of a virtual replica of Huang’s kitchen during the GTC Conference keynote. The demonstration combines the work of the deep learning, graphics research, creative, and engineering teams at NVIDIA.
Last year, Neon, a project from Samsung Technology & Advanced Research Labs (STAR Labs), debuted its virtual beings. Presented at the Consumer Electronics Show 2020 in Las Vegas, they can be described as digital people capable of displaying intelligence and emotions. Essentially chatbots, these digital humans aim to make video chatbots look real. However, Neon made it clear that, unlike AI assistants, these bots cannot function as smart assistants and do not know everything.
The virtual replica of Huang could be a glimpse of what the future of AI and virtual conferencing looks like, but are we really ready for it? After all, people spend hours, and often money, to watch and hear their favorite speakers, only to have a virtual replica deliver the session.
After immersing herself deeply in the Indian startup ecosystem, Debolina is now a technology journalist. When she’s not writing, she reads or plays with a brush and a spatula. She can be reached at [email protected]