GTC Keynote 2018: From AI to Black Panther, a View into the Future

Tuesday, March 27, 2018, by Rob Enderle

The product announcements aside, what we saw at GTC was the future of much of the world around us.

This week, I’m at GTC (the GPU Technology Conference), NVIDIA’s technology showcase and developers’ event. With the fall of IDF and the decline of Intel’s influence, this is the show to be at this year if you want to see what is coming in the next generation of ever smarter technology. NVIDIA’s CEO, Jen-Hsun Huang, is a consummate showman, and he didn’t disappoint this year. He introduced a future of artificial intelligence (AI): a protector, a healer, a helper, a guardian, a visionary, and just a little slice of amazing.

Even the music we heard in the opening moments of the keynote was AI-composed, though an orchestra performed it. Each element showcased what an AI will, and in many cases already can, accomplish: protecting the planet, discovering new treatments and drugs to heal, powering robots that assist those who can’t help themselves or who just need extra strength, guarding against attack by identifying threats in places like airports, and even creating. We are entering an amazing era where our limitations are increasingly defined not by our own imaginations but by the ever-expanding imaginations of AIs.

Let’s talk about the highlights.

Graphics

Every year, NVIDIA gives us a sense of how close it is to providing high-resolution images at full movie speed. We aren’t there yet, but single-image rendering has reached a point where a rendered image is largely indistinguishable from a high-resolution photograph. We can now fully model light refraction through a wide variety of virtual materials, including glass, which in the demonstration looked just like high-end crystal.

As for real-time rendering, NVIDIA showcased what looked like a scene out of Star Wars using Unreal Engine 4 and ray tracing. The scene was rendered in real time, and it looked nearly perfect; in fact, had we not been told it was rendered in real time, we’d likely have thought it was done with actors and more traditionally rendered CGI. Given that Industrial Light & Magic (ILM) is involved with this effort, it promises an even higher rate of very high-quality pictures. The demonstration ran on a single DGX computer, which apparently significantly outperformed the supercomputer previously used for this work. This was a showcase of NVIDIA RTX technology, which was announced at the show. Netflix and Amazon are going to love this, and the potential for real-time emulation is off the charts.

Supported by Vulkan and Microsoft’s DXR, this should be huge. NVIDIA also announced NVLink 2 and its most powerful workstation GPU solution yet: Volta-based, with 10,000 CUDA cores and a 64GB frame buffer. The cost of a render farm using the technology drops from millions of dollars to hundreds of thousands, and the power required drops to a fraction as well. This could revolutionize a number of industries.
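For readers unfamiliar with the technique, ray tracing works by firing rays from the camera through each pixel and computing what they hit; RTX, DXR, and Vulkan’s support are, at heart, hardware and API acceleration of that loop. Here is a toy sketch of the core geometric test in plain Python — purely illustrative, and nothing to do with NVIDIA’s actual implementation:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the first sphere hit, or None.

    A ray tracer fires a ray like this through every pixel and shades
    whatever surface the ray hits first.
    """
    # Vector from the sphere center back to the ray origin.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c   # direction is assumed unit length, so a == 1
    if disc < 0:
        return None          # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray shot straight down the z-axis toward a unit sphere centered at z=5:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Doing this test (and the shading that follows) billions of times per frame is what makes real-time ray tracing so demanding, and why dedicated hardware matters.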

Supercomputers

Current supercomputer technology is amazing, but it still takes days to analyze major problems like reformulating the battery in your smartphone or PC, and that is expensive time. NVIDIA is now talking about cutting this to hours. Huang argues that Moore’s Law is not dead but outmoded, with GPU-accelerated computing doubling the rate of performance improvement. CPUs just can’t keep up.

Medical imaging is perhaps the use for this technology that resonates most with us. It is this technology that can identify the problems that put us and those we care about at risk and help find a way to cure them. Huang showed two pictures of an unborn child: a traditional ultrasound scan that hardly looks human, and a GPU-enhanced image from current-generation equipment that provides an actual picture. He then showcased a variety of brain and heart scans, moving from old-style scans that are very hard to interpret to photorealistic ones. Given that it would take upward of 15 years to replace the old hardware in hospitals, NVIDIA is announcing a medical supercomputer that can analyze the scans from this aging equipment and turn them into far more photorealistic images, effectively updating these aging scanners without replacing them.

This could make for a rather interesting cloud service at some point.

Deep Learning

With deep learning, you feed a system massive amounts of data to make it intelligent. The number of deep learning network architectures has been growing exponentially, and each one effectively represents a new species of AI. Complexity has increased 500x in five years.
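In the simplest terms, “learning from massive amounts of data” means iteratively adjusting a model’s weights to reduce its prediction error. A toy sketch of that training loop in plain Python/NumPy (a single-neuron model learning the AND function — illustrative only, nothing like NVIDIA’s actual stack, which trains networks with billions of such weights):

```python
import numpy as np

# Toy training data: learn the logical AND function from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, adjusted as the model "learns"
b = 0.0                  # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: each pass nudges w and b to reduce prediction error.
for _ in range(5000):
    pred = sigmoid(X @ w + b)
    grad = pred - y                      # error signal per example
    w -= 0.5 * (X.T @ grad) / len(y)    # update weights against the error
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b)))  # → [0. 0. 0. 1.]
```

Scale those two weights up to millions or billions, and the matrix arithmetic in the loop is exactly the work GPUs excel at — which is why training hardware is the bottleneck Huang keeps attacking.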

This is where Huang launched what he is calling the world’s largest GPU to handle these massive loads: 81,000 CUDA cores delivering 2,000 teraflops (2 petaflops). More movies than have ever been watched could be transferred across this processor in one second. It introduces NVSwitch, an interconnect 20x faster than PCI Express. The system is truly large, pulls 10,000 watts of power, weighs 350 pounds, and is called the NVIDIA DGX-2. (It must be really dense, because it looks smaller than a hotel in-room refrigerator.)

The DGX-2 is 10 times faster than the DGX-1. This class of system is used to train other systems, so this kind of performance increase should massively accelerate the rollout of AI. It is priced at $399K (in a cute touch, Huang first teased a $1.5M price, since it effectively replaces a $3M supercomputer). AlexNet, which established deep learning five years ago, took six days to train; with the DGX-2, it takes 16 minutes, roughly a 500x improvement. It kind of makes you wonder what will happen five years from now.

They also updated the NGC cloud effort, which is now available on most cloud services.

Inference

Once you train a system, you spread that learning out to lesser systems, which can then infer from it and make decisions. This is one of the areas where CPUs traditionally did well, and NVIDIA has been moving on the opportunity aggressively for the last several years. It is announcing four brand-new capabilities to enhance the millions of hyperscale servers, most directly affecting translation and decision-making systems. Image recognition has been sped up 190x, recommender systems like IBM’s 45x, and speech synthesis 36x. NVIDIA also just announced Kubernetes on NVIDIA GPUs, which orchestrates clusters at scale. In a visual comparison between an Intel Skylake CPU and a new NVIDIA GPU, Intel was handling a handful of images a second and NVIDIA hundreds; accelerated with Kubernetes, the rate jumped to thousands.
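For context, inference is the cheap half of the equation: the weights are frozen after training, and serving a request is just a forward pass through the model, which is why the benchmark quoted is images per second. A minimal NumPy sketch (the model, sizes, and batch here are all made up for illustration):

```python
import time
import numpy as np

# Hypothetical frozen weights from a previously trained model. Inference
# is a forward pass only -- no gradients, no weight updates.
rng = np.random.default_rng(1)
W = rng.normal(size=(1024, 10))   # 1024 input features -> 10 classes

def infer(batch):
    """Classify a batch of feature vectors with the frozen model."""
    scores = batch @ W               # one matrix multiply per batch
    return scores.argmax(axis=1)     # predicted class per "image"

batch = rng.normal(size=(256, 1024))  # a batch of 256 fake "images"
start = time.perf_counter()
labels = infer(batch)
elapsed = time.perf_counter() - start
print(f"{len(batch) / elapsed:.0f} images/sec")  # throughput, the metric quoted above
```

Because each request is independent, throughput scales with how many of these forward passes you can batch and parallelize — which is exactly what GPU inference and cluster orchestration with Kubernetes buy you.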

They also demonstrated self-healing using cloud resources: in a series of linked systems, they killed several nodes, the workload failed over to the AWS cloud, and performance resumed automatically within seconds.

Autonomous Cars, Trucks and Tractors

This was the first time I saw autonomous tractors on screen. I grew up working on a farm, and I thought driving a tractor across a field or through a grove was incredibly boring. I often heard stories of folks who dozed off or weren’t paying attention, fell or were tossed off, slid under the equipment, and were horribly mangled. Given how repetitive the job is, I’d have figured farming would be among the first to get this technology, not the last. Well, apparently, it is on the list now.

NVIDIA has the deepest resources when it comes to the intelligence behind autonomous driving. It started well ahead of its peers and has driven its engine to be the leading autonomous car brain in the market that isn’t aligned with any single automaker.

The recent Uber accident showcases the need for whatever is driving the car to be less a point of OEM or service differentiation, which is sadly where most of the effort is now, and more the most advanced and common system in the market. Being different isn’t as important as saving a life. I expect the market, if only for liability and regulatory reasons, will settle on a single technology, and NVIDIA is the firm most likely to supply it. Huang represents that his technology will be shipping, fully certified, by year end. NVIDIA also announced its next advancement, Orin, which is even more powerful.

One of the most interesting parts of the NVIDIA solution is that it is trained with virtual reality, effectively driving a billion virtual miles a year on top of far more limited actual test driving. The overall effort is backed by 370 partners worldwide, including most of the major car companies.

Robotics

As you would expect, once you have cars that drive themselves, creating robots that can self-navigate should be easy. NVIDIA launched Isaac, a platform for robotics. It starts out trained in virtual reality, leapfrogging the development process thanks to the autonomous car efforts. The actual robots use NVIDIA’s Jetson platform.

Black Panther

OK, this was really kind of cool. If you saw the movie Black Panther, there was a scene in which the Black Panther’s sister took over a Lexus and drove it remotely. NVIDIA demonstrated the same thing live with a Ford (though, I should note, the car it rendered was a Lexus).

The lab setup was rather amazing in that it created an environment that let the remote driver feel like he was really in the car. The applications range from flying rescue craft into dangerous situations without putting the crew at risk to simply experiencing someplace remote in real time. It will be interesting to see where this telepresence technology ends up.

Wrapping Up: At GTC, I Saw the Future

The product announcements aside, what we saw at GTC was the future of much of the world around us. Improvements in real-time rendering will vastly advance entertainment and technologies like VR. Advancements in AI will give us ever more intelligent systems far sooner, helping us with our decisions and vastly speeding the time it takes from diagnosis to wellness. Our cars and robots will be getting ever smarter and the next generation of first responders could be responding remotely and from a safe distance, so their lives aren’t also needlessly put at risk. It should not only be a brave new world, but a far safer one as well.


Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm.  With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+
