Nvidia’s Convincing Unreality
When I read the book Ready Player One, I got excited about the future of virtual reality, but the movie didn't do the book justice. Some blockbusters have succeeded in interspersing computer graphics with real actors and making it hard to tell the difference. Still, a fully rendered movie looks like animation, and you just don't get the film quality you get from, well, film.
This was particularly disappointing with the movie Sky Captain and the World of Tomorrow, which was done entirely with actors in front of blue screens, with everything else rendered. I read the book, but the animation quality of the movie really detracted from the story. An earlier film, The Last Starfighter, was even worse. It's another example of a decent story ruined by technology that just wasn't up to the task at the time. (By the way, everyone and their brother apparently wants to remake this film, but getting the rights to it has proven daunting.)
Well, Nvidia last week announced its new Quadro RTX cards, which appear to have the horsepower needed to make movies like Sky Captain and The Last Starfighter look real. In fact, a studio could simply re-render those movies and, for a tiny fraction of the cost of a remake, turn them into Netflix or Amazon Prime stars.
This technology goes beyond that, though. It is so good you may not be able to tell what is real and what is unreal.
This is actually one of the biggest things to happen in tech. I’ll explain and then close with my product of the week: an impressive professional VR headset from StarVR.
Light Is the Problem
The truly annoying problem the industry has been struggling to solve is light. Light refracts, it reflects, it creates shadows, and it does weird things to textures and finishes. Light makes it incredibly difficult and expensive to create animations that look like real objects. You can do it, but it is massively resource-intensive, and when rendered, people tend to have a creepy almost-zombie look to them.
Back in 1979, a researcher named Turner Whitted came up with a solution for this, but it was wicked expensive. When I say wicked expensive, I mean Department of Defense expensive. His concept, called "multi-bounce recursive ray tracing," could produce real-looking objects, but it was so computationally heavy that, practically speaking, you would have needed a supercomputer for every single pixel rendered. Given that a 4K frame holds more than 8 million pixels, rendering a movie this way was hopelessly out of reach.
That would be an impressive effort that likely would eclipse the cost of flying to the moon, and it clearly would be outside the budget of a movie studio. But man, if you could do it, it would be amazing.
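To make the idea concrete, here is a hypothetical, drastically simplified Python sketch of Whitted-style recursion: one sphere, one directional light, mirror reflections only. Every name in it is illustrative, and a production renderer is vastly more involved; the point is just that each ray's brightness depends recursively on the rays it spawns.

```python
import math

# Minimal 3-vector helpers (tuples keep the sketch dependency-free)
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Return distance to the nearest intersection, or None on a miss."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0, max_depth=3):
    """Whitted-style recursion: local shading plus a reflected ray."""
    center, radius = (0.0, 0.0, -3.0), 1.0   # hard-coded scene: one sphere
    light_dir = norm((1.0, 1.0, 0.5))        # one directional light
    t = hit_sphere(origin, direction, center, radius)
    if t is None:
        return 0.1                            # background brightness
    point = add(origin, scale(direction, t))
    n = norm(sub(point, center))
    local = max(0.0, dot(n, light_dir))       # Lambertian diffuse term
    if depth >= max_depth:
        return local
    # Reflect the incoming ray about the surface normal and recurse
    refl = sub(direction, scale(n, 2.0 * dot(direction, n)))
    return 0.7 * local + 0.3 * trace(point, norm(refl), depth + 1, max_depth)

# One ray shot straight down the -z axis into the sphere
brightness = trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```

Run this once per pixel per frame and the "supercomputer per pixel" framing starts to make sense: every bounce multiplies the work.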
Most of us thought the technology was at least a decade out, so man, did we get a surprise at SIGGRAPH 2018 last week.
The SIGGRAPH Show
If you don't know about SIGGRAPH, it is the world's premier computer graphics show. If it has to do with rendering, it is at this show, and there is now a pretty impressive 3D printing showcase there as well.
This year there was a huge spotlight on women in graphics, and major vendors like Apple were recruiting employees at the event. If you are at all interested in computer graphics, this is a must-attend show.
I was incredibly impressed with the advancements in VR hardware, 3D printing, and workstation performance at the event. Both HP and Lenovo had a strong workstation presence (Lenovo had a really impressive mobile workstation).
Intel also had a decent presence, but it was overshadowed by yet another processor vulnerability. That made last week’s AMD Threadripper 2 announcement particularly timely, as AMD is not affected by that vulnerability. AMD now has the fastest CPU in the world. Apparently, Intel just can’t catch a break.
Nvidia CEO Jen-Hsun "Jensen" Huang really stuck it to Intel during his launch talk at SIGGRAPH. He showed that roughly four shelves of Nvidia's new RTX cards easily could outperform around 10 racks of Intel servers for rendering, saving a studio millions of dollars on unneeded servers in a render farm.
He later closed his talk with Nvidia's take on Intel's iconic Bunny People, updated with the new Nvidia RTX cards so that the image was fully rendered (Intel used actors). The demonstration more than implied that Intel, at least with regard to rendering, had become obsolete.
Strangely, I don’t think he went far enough. The capabilities showcased on stage were so far beyond anything Intel technology currently can do, it really did look as though the two companies were in different centuries.
You couldn’t help but wonder what would happen if Intel had a CEO like Huang, and there appears to be growing sentiment in analyst circles that Intel should merge with Nvidia just to get a decent CEO. (Currently Intel has no CEO or CMO.)
These new RTX cards range from US$2,300 at the low end to $10K at the high end, and this means you don’t need a new workstation to get this additional performance — you just need a new graphics card.
It did get me wondering how incredibly powerful an AMD Threadripper box would be with this new line of cards. Coupling the fastest processor with the most powerful rendering graphics card would result in a wicked fast workstation — one that graphics artists and animators likely would kill for.
What also is amazing is that this is the very first version of these revolutionary cards, which makes you wonder what generation No. 3 will look like. (Typically, it takes three generations for a new technology like this to fully align with what the market wants, and given that this first generation is so impressive, the third should be unimaginably fast.)
One of the ways Nvidia gets to this performance, which is a whopping 6x over prior generations, is that the card renders in low resolution first and then uses an AI-based up-converter to get to higher resolutions like 4K and 8K.
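The two-stage idea (render cheap, then up-convert) can be illustrated with plain bilinear interpolation standing in for the up-converter. To be clear, this is an assumption-laden toy, not Nvidia's algorithm: the real system replaces the fixed interpolation formula below with a trained neural network that hallucinates plausible detail.

```python
def bilinear_upscale(image, factor):
    """Upscale a 2-D grid of brightness values by bilinear interpolation.

    A stand-in for the learned up-converter: where this blends the four
    nearest low-res samples, the real system uses a trained network.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back into the low-res grid
        sy = min(y / factor, h - 1)
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

low_res = [[0.0, 1.0], [1.0, 0.0]]       # a tiny 2x2 "frame"
high_res = bilinear_upscale(low_res, 2)  # 4x4 output
```

The economics are the whole point: rendering at quarter resolution and up-converting means ray tracing roughly a quarter of the pixels per frame.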
This up-conversion process can be applied to other media, so you could take films from the 1930s and up-convert them to look like current-generation movies. In theory, you also could take older heavily rendered films like Sky Captain and The Last Starfighter and up-convert them into something that looks real.
Not stopping there, you also could use this technology to replace actors or elements with other actors or elements globally. Think about a Disney option to get a movie with your kids in it, where it looked as though they were in the movie. Granted, there would be an upcharge for the work — but wouldn’t that make an amazing addition to a birthday party?
Another option would be to render extra frames so that rather than having to use a $140K video camera to be able to do super-slow motion, you could use a GoPro and simply render the intermediate frames for the same effect. Once again, this is just generation No. 1 of this technology.
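As a rough illustration of that frame-synthesis idea, the sketch below fabricates intermediate frames by simple linear cross-fading between two captured frames. Treat it as a toy stand-in: real interpolators estimate motion between frames (optical flow) rather than blending pixel values directly, which is why they don't ghost.

```python
def interpolate_frames(frame_a, frame_b, n_between):
    """Synthesize n_between intermediate frames between two real ones.

    Hypothetical stand-in using linear cross-fading; production systems
    estimate per-pixel motion instead of blending values in place.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # 0 < t < 1, evenly spaced in time
        frames.append([(1 - t) * a + t * b
                       for a, b in zip(frame_a, frame_b)])
    return frames

# 240 fps output from 30 fps footage needs 7 new frames per captured pair
slow_mo = interpolate_frames([0.0, 0.0], [8.0, 8.0], n_between=7)
```

The arithmetic behind the camera claim is the same either way: 30 fps footage with seven synthesized frames per pair plays back as 240 fps slow motion.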
Turning our imaginations into reality is what likely will define the coming Ready Player One-like VR world. Last week at SIGGRAPH, one of the few CEOs I'm not worried about, Jensen Huang, launched Nvidia's Quadro RTX cards, which take a massive step in that direction.
In the meantime, this technology could turn some of our old beloved TV shows and movies, and some newer movies that were before their time, into Netflix and Amazon Prime hits. (Some may even qualify for re-release to theaters.)
This is one of those launches that, I expect, we’ll look back on as industry-changing. I think we are going to love our new virtual worlds.
One of the big VR industry problems is that much of the market's success is in the commercial segment, but virtually all VR headsets are consumer grade. In a commercial deployment they wouldn't hold up, and the result would be less than ideal.
Well, a number of companies have started to step up, and I saw an impressive headset at SIGGRAPH that appears to do a nice job of addressing this problem. It is called the StarVR One, and it is a beast. It actually looks pretty damned good, with a kind of edgy, sharp industrial design.
StarVR One Headset
It has a whopping 16 million sub-pixels (roughly two and a half times the sub-pixel count of a Full HD display, to give you an idea of the resolution), a 90Hz low-persistence refresh rate, a 210-degree horizontal and 130-degree vertical field of view, full eye tracking (from Tobii), dynamic foveated rendering (which reduces the workstation's rendering load), and head-mounting hardware that is both easy to replace and robust enough to survive a commercial deployment.
With all of this, it is relatively light, at about one pound.
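That sub-pixel figure is easy to sanity-check with back-of-the-envelope arithmetic. The per-eye resolution used below (1830 x 1464, two AMOLED panels) is an assumption drawn from StarVR's published specifications, so treat the exact numbers accordingly.

```python
def sub_pixels(width, height, panels=1, per_pixel=3):
    """Sub-pixel count: pixels x RGB sub-pixels per pixel x panel count."""
    return width * height * per_pixel * panels

full_hd = sub_pixels(1920, 1080)   # a single Full HD display
uhd_4k = sub_pixels(3840, 2160)    # a single 4K UHD display
# StarVR One: assumed 1830 x 1464 per eye, across two panels
starvr = sub_pixels(1830, 1464, panels=2)
```

The assumed panel dimensions land almost exactly on the advertised 16 million sub-pixels, and at roughly 2.6 times Full HD's count.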
One interesting feature is automatic IPD adjustment (adjusting the display to account for the distance between your pupils is often what makes setting up one of these things a pain in the butt). Further, the display isn't LCD but AMOLED, which provides far brighter and more realistic colors.
Pricing hasn't been announced yet, but in person and on paper this thing is damned close to what the professional market is looking for, so the StarVR One VR headset is my product of the week. (Paired with a Quadro RTX card, it could be truly amazing, because the headset is optimized for Nvidia VRWorks.)
The only negative is that I really want one and I’m betting it won’t be a cheap date.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester. Email Rob.
Source: TechNewsWorld.com