NVIDIA announces platform for creating AI avatars


NVIDIA today announced NVIDIA Omniverse Avatar, a technology platform for creating interactive AI avatars.

Omniverse Avatar connects the company’s speech AI, computer vision, natural language understanding, recommendation engine, and simulation technologies. Avatars created on the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of topics, and naturally understand spoken intent.

Omniverse Avatar opens the door to AI assistants that can be easily customized for almost any industry. These assistants could help with the billions of daily customer service interactions, such as restaurant orders, banking transactions, personal appointments, and reservations, potentially unlocking new business opportunities and improving customer satisfaction.

“Omniverse Avatar combines NVIDIA’s core graphics, simulation, and AI technologies to create some of the most complex real-time applications ever created,” said Jensen Huang, founder and CEO of NVIDIA. “The use cases of collaborative robots and virtual assistants are incredible and far-reaching,” he stressed.

Omniverse Avatar is part of NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows, currently in open beta, with over 70,000 users.

During his keynote at NVIDIA GTC, Huang shared several examples of Omniverse Avatar: Project Tokkio for customer support, NVIDIA DRIVE Concierge for always-on intelligent services in vehicles, and Project Maxine for video conferencing.

In the first Project Tokkio demonstration, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself, conversing on topics such as biology and climate science.

In a second Project Tokkio demo, he highlighted a customer service avatar in a restaurant kiosk that could see, converse with, and understand two customers as they ordered a veggie burger, fries, and a drink. The demonstrations were powered by NVIDIA AI software and Megatron 530B, currently the world’s largest customizable language model.

In a demo of the DRIVE Concierge AI platform, a digital assistant on the center dashboard display helps the driver select the best driving mode to reach his destination on time, and then follows his request to set a reminder once the vehicle’s range drops below 100 miles.