Nvidia may have stumbled onto the answer with its multi-resolution shading (MRS) feature, a new GameWorks VR middleware technology available to developers. MRS takes advantage of a quirk in the way VR headsets render images to drastically reduce the graphics performance needed to create virtual scenes--which could effectively let VR games run on less powerful hardware.
Let's dig in.
Let's do the image warp again
The secret sauce in Nvidia's multi-resolution shading lies in the way virtual reality headsets, by their very nature, warp on-screen imagery.
Normally, graphics cards render full-screen images as a straight-ahead, rectangular scene, applying the same resolution across the entire image--think of how PC games appear when you're playing them. But VR headsets use a pair of over-the-eye lenses to push the focal point of scenes out into the distance.
"If those lenses weren't there, you'd be basically trying to focus on a screen right in front of your face, which causes a lot of fatigue and strain," says Tom Peterson, a distinguished engineer at Nvidia. "So these lenses are actually distorting [the image]."
The Oculus Rift (and other VR headsets) scrunch the edges of rendered environments together into a roughly oval shape to make them appear correctly when viewed through the lenses. You can see the end result on your primary computer screen if you ever use a PC connected to an Oculus Rift. Making images appear correctly with all that distortion requires a lot of graphical trickery.
"GPUs render straight, not distorted," says Peterson. "So what we actually have to do is take the original image, then warp it, to account for the fact that it's going to be re-distorted by those lenses, so that by the end of the day--when you see it--the image is straight again."
But that warping compresses the edges of the images, throwing away a lot of the native imagery produced by the GPU. Your graphics card is essentially working harder than it has to. Enter Nvidia's new multi-resolution shading technology.
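The pre-warp Peterson describes can be sketched as a simple radial distortion applied to each screen coordinate before the lenses re-distort it. This is a minimal illustrative model, not Oculus's actual distortion math: the polynomial form and the coefficient values are assumptions chosen only to show how negative coefficients pull pixels toward the center, compressing the edges.

```python
# Sketch of the pre-warp step: the GPU's flat, rectangular image is
# radially distorted so the headset's pincushion lenses "undo" it.
# The distortion model and coefficients are illustrative assumptions.

def prewarp(x, y, k1=-0.15, k2=-0.05):
    """Map a normalized screen coordinate (origin at image center,
    edges near +/-1) to its pre-warped position. Negative k1/k2 pull
    points inward, and the pull grows with distance from the center."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The center is untouched, while a point near the edge moves inward
# noticeably more than a point near the middle.
print(prewarp(0.0, 0.0))
print(prewarp(0.3, 0.0))
print(prewarp(0.9, 0.0))
```

Because the compression is strongest at the edges, the GPU renders many pixels there that the warp then squeezes together and largely discards--the waste MRS targets.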
Divide and conquer
Rather than rendering the entire image at the same resolution, MRS splits the screen into separate regions. The center of the image--where your eyes primarily focus in a VR headset, and where the image isn't distorted--is rendered at full, native resolution. The edges of the screen, however, are rendered at a reduced quality to take advantage of VR's necessary warping and distortion.
"We're going to cut down the resolution [at those edges], we're going to cut down the scaling, and effectively use fewer pixels," says Peterson.
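A back-of-the-envelope model shows how that split saves pixel work. The 3x3 grid, the 60 percent center tile, the half-resolution edge scale, and the per-eye buffer size below are all illustrative assumptions, not Nvidia's actual parameters:

```python
# Toy model of multi-resolution shading's pixel savings: divide the
# eye buffer into a center tile rendered at full resolution and a
# border region rendered at reduced resolution. All numbers here are
# illustrative assumptions, not Nvidia's real MRS configuration.

def mrs_pixel_count(width, height, center_frac=0.6, edge_scale=0.5):
    """Return (full_pixels, mrs_pixels) for one eye buffer.

    center_frac: fraction of each axis covered by the full-res center tile.
    edge_scale:  resolution multiplier applied to the border region.
    """
    full = width * height
    center = (width * center_frac) * (height * center_frac)
    border = full - center
    return full, center + border * edge_scale * edge_scale

# Arbitrary per-eye buffer size, chosen only for illustration.
full, reduced = mrs_pixel_count(1200, 1440)
print(f"pixel work: {reduced / full:.0%} of full-resolution rendering")
```

With these assumed numbers, the center tile covers 36 percent of the pixels at full cost and the remaining 64 percent is shaded at a quarter of the cost, for roughly half the total pixel work--in the same ballpark as the savings Peterson cites below.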
The compressed image is rendered in parallel with the full-resolution center region on Nvidia's Maxwell GPU architecture--and yes, Nvidia says a recent 900-series GeForce graphics card or GTX 750 Ti is required for multi-resolution shading--and then re-warped to appear through the VR headset's lenses with no apparent loss of image fidelity. The concept is somewhat similar to the "foveated rendering" technique explored by Microsoft Research in recent years, which concentrates on rendering only the part of the screen that you're actively looking at in full resolution.
"It's between 50 percent and 100 percent less pixel work [compared to traditionally rendered VR scenes]," says Peterson.
That's insane. Even more insane: The reduced quality edge regions truly aren't noticeable in the final image unless the compression quality is cranked to extreme levels.
Eyes-on with Nvidia's multi-resolution shading
In a closed-room Nvidia demo on an Oculus Rift, Peterson let me compare a scene rendered with MRS and without MRS, enabling and disabling the feature on the fly. Rather than staring at the center of the image, as everyday users would do, I focused my attention on the edges of the screen, where multi-resolution shading's magic happens.
At a 30 percent reduction in pixel work, there was no visible difference with MRS enabled or disabled. There was no drop in fidelity, no sudden jarring sensation or flickering when turning the tech on or off--nothing. It just looked like it should.
In order to truly make the reduced rendering visible, Peterson had to crank the compression up to 50 percent--half the workload of the same image rendered at full resolution across the board. Only then was the effect noticeable, as a faint shimmering around the very edges of the image. Even so, the effect was minimal, and that was when I was specifically hunting for it at the edges. When staring at the center of the display, which was rendered at full fidelity, the reduced resolution at the edges was barely perceptible, no doubt because the human eye resolves far less detail in peripheral vision than at the point we're directly looking at.
That's big news for VR developers, and for gamers who want to get into the virtual reality experience without spending the equivalent of a college education on a graphics card.
"So if you're a game developer, this means that you can have higher quality games, or that you can have your games run on more GPUs," says Peterson.
And just like that, the Oculus Rift's GTX 970 requirement didn't feel quite as paltry as it did when the headset's specs were released.
There's a potential flaw in this gem, however. Not only is multi-resolution shading restricted to the most recent GeForce graphics cards, but it's also an Nvidia proprietary technology, offered under the company's new GameWorks VR program, which brings Nvidia's former VR Direct initiatives (like VR SLI) under the GameWorks banner.
GameWorks is Nvidia-created middleware that adds features and technologies with performance optimized for GeForce graphics cards--but, naturally, not for AMD Radeon cards. That's been the cause of much recent hand-wringing, most recently when The Witcher 3 launched with Nvidia's HairWorks technology, allegedly--but not really--crippling performance on AMD hardware. (ExtremeTech has a superb overview of all the GameWorks concerns if you're interested.)
While the threat of GameWorks-packing titles that work well on GeForce GPUs--but not Radeons--feels overblown for standard games, the prospect of VR developers specifically targeting GeForce cards seems very real, given the nascent state of virtual reality and the potential performance benefits of multi-resolution shading. AMD can't see GameWorks code, which means it can't optimize its graphics cards for Nvidia's various proprietary technologies (like MRS).
That said, AMD's targeting VR developers with its own "LiquidVR" software development kit for Radeon hardware. While no multi-resolution shading-like feature has been announced for LiquidVR, AMD's been pounding the virtual reality pulpit hard, and it's easy to envision the company rolling out similar technology if MRS starts to gain traction with VR developers--though again, that raises the potential specter of separate, splintered software solutions dependent on the graphics hardware you're running, rather than a universal DirectX-like approach.
But set aside all those worries for now. Virtual reality's one of the most exciting developments in the PC ecosystem in years, and if Nvidia's performance claims for multi-resolution shading prove true, it could genuinely be a killer feature for the fledgling VR field. Color me intrigued--and hopeful.