Nvidia’s president and CEO, Jensen Huang, wowed the crowd at this week’s GPU Technology Conference (GTC) with technology advancements and ecosystems whose impact will ripple through many industries. There were the hardware announcements, such as the DGX-1V, which, according to Huang, provides the equivalent of 400 servers in 3 rack units for $149k. Powering this supercomputer in a box is the NVIDIA® Tesla® V100 data center GPU, a mega processor built with 21 billion transistors that uses the newly announced Volta™ GPU computing architecture.
The upshot is that Nvidia claims a 5x performance improvement over Pascal™, its current-generation GPU architecture. Huang suggested that we can expect another 1000x performance improvement by 2025. These performance gains are critical to keep up with the ever-growing demand from applications driven by artificial intelligence, whether on the desktop or in the cloud (all the major cloud service providers offer Nvidia GPUs).
Nvidia is helping to drive demand for this souped-up power with two of its own software applications that Huang announced during his keynote: Holodeck and the Isaac Robot Simulator. Holodeck is a photo-realistic, virtual reality room where people, such as product designers, architects, or content creators, can gather from around the world at a moment’s notice. The Isaac Robot Simulator allows engineers to test and perfect the brains of a robot in an “alternative universe” before deploying it in the real world.
To help data scientists and researchers harness the power of neural networks for deep learning, Nvidia announced the NVIDIA GPU Cloud (NGC), a containerized software stack that bundles the leading deep learning frameworks. This lets a researcher start a project on the desktop and, as the need for GPU power grows, easily migrate it to the cloud.
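To make that desktop-to-cloud portability concrete, here is a minimal sketch, assuming a TensorFlow-based container from NGC is already running (the specific framework and versions are an assumption, not something Nvidia detailed in the keynote). The same few lines run unchanged whether the container sits on a desktop GPU or a cloud GPU instance.

import tensorflow as tf

# Pin a trivial matrix multiply to the first GPU the containerized stack exposes.
with tf.device("/gpu:0"):
    a = tf.random_normal((1024, 1024))
    b = tf.random_normal((1024, 1024))
    total = tf.reduce_sum(tf.matmul(a, b))

# log_device_placement prints where each op ran, confirming the work landed on the GPU.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(total))

Because the framework, drivers, and libraries travel inside the container, the script itself never changes; only the host underneath it does.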
Vehicles are Nvidia’s target for the Xavier SoC (System on Chip), which provides 30 TFlops of performance in a compact package while consuming only 30 Watts. This means it can run the software stack for autonomous driving without draining the battery, while fitting easily into the tight spaces of a vehicle.
As innovative as all these technology announcements are, perhaps the most disruptive announcement of GTC 2017 wasn’t about technology but about the business model for a particular slice of Nvidia’s technology (11:07): the open sourcing of the Xavier DLA (Deep Learning Accelerator). Xavier combines an ARM CPU, a CUDA-capable GPU, and the DLA, and the reason to open source the DLA, as Huang puts it, is
“to expand the reach of deep learning… to democratize the ability of every single IoT device around the planet, the trillions of them that will be able to use deep learning in the future to be able to access a world class design.”
This approach will simplify and lower the cost of entry for any manufacturer or chipmaker to integrate artificial intelligence at the edge. With this strategy, Nvidia could be at the center of an ecosystem that will be an increasingly important part of the burgeoning IoT market, where formerly dumb things become a little smarter, thanks to the Nvidia brains.
Stay tuned to ViodiTV for an example of AI at the edge.