
    SIGGRAPH 2018: NVIDIA Just Obsolesced Moore’s Law and Intel to Create Something Amazing

    I’m at SIGGRAPH this week, and this is really the era when this show comes into its own. With graphics technology expanding how we view reality through augmented and virtual reality, the lines between what is real and what we can imagine are increasingly blurred. One of the most important events at the show is NVIDIA CEO Jensen Huang’s keynote. NVIDIA has risen to dominate virtually every aspect of GPU computing, from gaming graphics and workstations to VR and, most recently, AI and autonomous cars.

    This is an overview of his keynote.

    Setting the Foundation

    Huang opened by talking about Pixar’s anniversary and then walked through the advancements in computer-animated movies, starting with Toy Story in 1995 and what was then impressive performance: 800,000 CPU hours were used to create that movie. In 2006, with Cars, they used global illumination for the first time, and light simply worked as it should, saving thousands of hours. With Finding Dory, they were able to use AI-based denoising to fill in the spots the renderer hadn’t fully resolved. For Big Hero 6, 200 million CPU core hours were used. Apparently, the audience around me includes a broad cross-section of the luminaries and pioneers who created these technologies.
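    To put those numbers in perspective, 200 million core hours is a staggering amount of compute. A quick back-of-the-envelope conversion (the farm size here is my assumption, not a figure from the keynote):

        # Rough scale of the render jobs Huang cited.
        toy_story_cpu_hours = 800_000          # Toy Story (1995), per the keynote
        big_hero_6_core_hours = 200_000_000    # Big Hero 6, per the keynote

        assumed_farm_cores = 50_000            # hypothetical modern render farm

        # Wall-clock time if the whole farm did nothing else:
        wall_clock_hours = big_hero_6_core_hours / assumed_farm_cores
        print(f"~{wall_clock_hours:,.0f} hours (~{wall_clock_hours / 24:.0f} days) "
              f"on {assumed_farm_cores:,} cores")
        print(f"Growth since Toy Story: {big_hero_6_core_hours / toy_story_cpu_hours:.0f}x")

    Even with 50,000 cores grinding around the clock, that is more than five months of wall-clock time for a single film.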

    The Death of Moore’s Law

    Huang argues that Moore’s law has largely ended: CPU performance has begun to stall as density, energy, and heat create hard barriers to further advancement. He suggests that Moore’s law has been replaced by a GPU computing law, with GPU compute continuing its rapid ramp of advancement even as CPUs hit a wall, because GPUs approach the problem differently. That ramp now drives geometry, photogrammetry, materials, simulated physics (fluids, particles, etc.), character animation (they can now even teach a character to animate itself), and facial animation (using AI rather than markers to capture very realistic facial performances).

    Ray Tracing on Steroids

    This has all been possible because they’ve been able to work as one industry. There has, however, been one huge roadblock: fully accurate lighting. Back in 1979, Turner Whitted introduced multi-bounce recursive ray tracing as a concept. It is very accurate but massively resource-intensive; back then it took 1.2 hours to produce a single 512×512 image with this method (on a $1.5M computer), and Whitted believed at the time that it would take a Cray supercomputer behind every single pixel to do this effectively. NVIDIA created the DGX Station, which effectively puts this power behind every pixel to create photorealistic real-time simulations, and demonstrated it with a Star Wars scene at an industry conference earlier this year.
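    If you’ve never seen it, the idea Whitted described is easy to sketch, even if it was brutal to run in 1979: every pixel fires a ray into the scene, and every reflective surface that ray hits fires another. Here is a toy Python version; the scene, the shading, and the omission of shadows are my own simplifications, not anyone’s production code:

        import numpy as np

        # Toy Whitted-style recursive ray tracer: two spheres, one light,
        # mirror bounces, no shadows. Illustrative only.
        SPHERES = [  # (center, radius, base_color, reflectivity)
            (np.array([0.0, 0.0, -3.0]), 1.0, np.array([0.9, 0.2, 0.2]), 0.5),
            (np.array([0.0, -101.0, -3.0]), 100.0, np.array([0.2, 0.9, 0.2]), 0.2),
        ]
        LIGHT_DIR = np.array([1.0, 1.0, 0.5]) / np.linalg.norm([1.0, 1.0, 0.5])

        def hit_sphere(origin, direction, center, radius):
            """Return the nearest positive ray parameter t, or None."""
            oc = origin - center
            b = np.dot(oc, direction)
            disc = b * b - (np.dot(oc, oc) - radius * radius)
            if disc < 0.0:
                return None
            t = -b - np.sqrt(disc)
            return t if t > 1e-4 else None

        def trace(origin, direction, depth=0):
            """Whitted's key idea: shading recursively spawns more rays."""
            if depth > 3:  # cap the bounces
                return np.zeros(3)
            nearest = None
            for center, radius, color, refl in SPHERES:
                t = hit_sphere(origin, direction, center, radius)
                if t is not None and (nearest is None or t < nearest[0]):
                    nearest = (t, center, radius, color, refl)
            if nearest is None:
                return np.array([0.5, 0.7, 1.0])  # sky color
            t, center, radius, color, refl = nearest
            point = origin + t * direction
            normal = (point - center) / radius
            diffuse = max(np.dot(normal, LIGHT_DIR), 0.0)
            # The recursive bounce: follow the mirror reflection.
            bounce = direction - 2.0 * np.dot(direction, normal) * normal
            return (1 - refl) * diffuse * color + refl * trace(point, bounce, depth + 1)

        W, H = 64, 48  # one primary ray per pixel
        image = np.zeros((H, W, 3))
        for py in range(H):
            for px in range(W):
                d = np.array([(px - W / 2) / H, -(py - H / 2) / H, -1.0])
                image[py, px] = trace(np.zeros(3), d / np.linalg.norm(d))
        print("rendered", image.shape)

    Even this toy version does real work per pixel, and every extra bounce multiplies it, which is exactly why fully accurate lighting needed dedicated hardware to reach real time.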

    The Birth of the Quadro RTX

    They used this to lead into a new card, the first ray-tracing GPU: the Quadro RTX, which can now render a scene like the one shown all by itself. Performance specs are impressive: 10 Giga Rays/sec, up to 16 teraflops, up to 500 trillion tensor ops per second, and up to 100GB/sec over NVLink (their interconnect technology). Built on the new Turing architecture, it is apparently the biggest technological leap they have made since the 2006 CUDA GPU. It creates a step function in realism and supports a new hybrid rendering model in which rasterization, ray tracing, compute, and AI all interoperate. This is one impressive part. (I think he may be overselling this. I say that because I really want this card even though I can’t figure out how I’d use it. I think it may be time to develop a new skill set.) It is the first card in its class to support 8K displays, and it is so much faster than the prior generation of parts that it should force a major workstation upgrade cycle.
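    Those spec numbers are easier to appreciate with a little arithmetic. At 10 billion rays per second, here is roughly the per-pixel ray budget at real-time frame rates (my calculation, with an assumed 60 frames per second, not an NVIDIA figure):

        # Back-of-the-envelope ray budget for a 10 Giga Rays/sec GPU.
        rays_per_second = 10e9
        fps = 60  # assumed real-time target

        resolutions = {
            "1080p": 1920 * 1080,
            "4K":    3840 * 2160,
            "8K":    7680 * 4320,
        }
        for name, pixels in resolutions.items():
            rays_per_pixel = rays_per_second / (pixels * fps)
            print(f"{name}: ~{rays_per_pixel:.1f} rays per pixel per frame")

    A single-digit ray budget per pixel at 8K is exactly why the hybrid model leans on AI denoising rather than brute force.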

    They are also announcing that they are working with Pixar to support a universal, C-like programming language for this pipeline, which should provide far higher code portability as part of an entirely new software stack built to take full advantage of these new cards. They showcased a series of images of three balls in a room, each with a different material applied, then layered the technology on step by step until they got to RTX, which gave the shadows soft edges and produced a vastly more realistic image. (One fascinating thing about Huang is that he often complains that they don’t properly rehearse, which creates some interesting moments during a presentation, and this was no exception.)

    Apparently, and I didn’t know this, the vast majority of catalog images are now rendered rather than photographed. It apparently saves a ton of time and expense in photography and staging.

    The Amazing Porsche 911 Speedster

    Apparently, this is also the 70th anniversary of Porsche, and they showcased a fully rendered 911 Speedster concept. That is one beautiful car, and for now it exists only virtually… It is so amazing that they simply must build it. According to Huang, it was generally believed that this kind of performance was at least 10 years in the future; this level of advancement is unprecedented, a 6X performance increase in computer graphics. Part of how they get there is that they first render at a low resolution and then use AI to massively increase the resolution. I wonder how many older existing images will be run through this AI process to increase their resolution automatically.
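    That render-low-then-upscale trick is worth sketching. The shipping pipeline uses a trained neural network to reconstruct detail; as a stand-in, here is the shape of the idea with a trivial upscaler (everything in this snippet is my simplification):

        import numpy as np

        # Sketch of the render-low/upscale-high idea. The fake renderer and
        # nearest-neighbor upscaler are stand-ins; the real pipeline uses a
        # ray tracer and a trained neural network.

        def render_scene(width, height):
            """Pretend expensive render; cost scales with pixel count."""
            y, x = np.mgrid[0:height, 0:width]
            return np.sin(x / 7.0) * np.cos(y / 5.0)

        def ai_upscale_2x(img):
            """Stand-in for the learned upscaler (nearest-neighbor here).
            The real network reconstructs detail the low-res render
            never computed."""
            return img.repeat(2, axis=0).repeat(2, axis=1)

        W, H = 3840, 2160                   # target 4K output
        low = render_scene(W // 2, H // 2)  # pay for 1/4 of the pixels...
        final = ai_upscale_2x(low)          # ...reconstruct the rest
        print(final.shape, "from", low.shape, "-> 4x fewer pixels rendered")

    Rendering a quarter of the pixels and reconstructing the rest is plausibly where a large chunk of that 6X comes from.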

    SolidWorks and Autodesk

    They demonstrated SolidWorks with this technology, showing that they could, in real time, take an image (in this case, a hotel lobby), alter the lighting, and even place objects realistically in the environment with all of the lighting remaining intact. They then moved to an Autodesk showcase rendering Spider-Man movie images at final film quality on a single card. Next they announced their RTX Server, which uses up to eight RTX cards to take render times down from hours to minutes. They showcased a $2M, 10-rack Intel Skylake render farm that could be replaced by a single rack costing $500K while providing the same level of performance. (He makes a compelling argument that Intel is effectively obsolete in this space.) The new Quadro RTX 5000 starts at $2,300 retail, the RTX 6000 at $6,300, and the RTX 8000 at $10,000. NVIDIA’s new tag line is apparently “the more you buy, the more you save.”
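    The “more you buy, the more you save” line is really just this arithmetic, using the numbers from the keynote:

        # Huang's render-farm comparison, as stated in the keynote.
        cpu_farm_cost, cpu_farm_racks = 2_000_000, 10   # Skylake render farm
        rtx_server_cost, rtx_server_racks = 500_000, 1  # RTX Server, same output

        print(f"Cost savings:  ${cpu_farm_cost - rtx_server_cost:,} "
              f"({cpu_farm_cost / rtx_server_cost:.0f}x cheaper)")
        print(f"Space savings: {cpu_farm_racks // rtx_server_racks}x fewer racks")

    Four times cheaper at a tenth of the floor space, for the same output, is a hard pitch for a CPU vendor to counter.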

    Dell, HP, HPE, and Lenovo are launching solutions based on this new technology. What was particularly ironic, and fascinating, is that they ended with a video reminiscent of the “Bunny People” spots that Intel ran so successfully in the 1990s, but updated for this decade with photorealistic animation in place of actors. They really made Intel look obsolete.

    Wrapping Up: A New Virtual World

    We are clearly moving into a virtual world. I couldn’t help but remember the book and movie Ready Player One and think that this RTX advancement is a major step toward creating a virtual world indistinguishable from the real one, something that was supposed to be decades in the future. Given that what they showcased should have been a decade out, this future is apparently coming at us far more quickly than any of us thought. But, near term, the images our companies produce, the movies and TV shows we see, and the virtual prototypes that are created will improve massively thanks to this technology. And I still want a card… I guess this means I’ll need to learn SolidWorks and Autodesk.

    Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm.  With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+