
Everything you need to know about the Nvidia GeForce RTX GPUs


Nvidia’s new GeForce RTX 2000 series represents a major advancement in graphics technology. It introduces real-time ray tracing, big improvements to AI and deep learning, and a new, cooler design for Nvidia’s own reference Founders Edition cards. But it all comes at a price. Here’s everything you need to know.
After months of speculation, Nvidia finally revealed its next-generation graphics architecture at Gamescom, with CEO Jensen Huang hailing it as the greatest advancement in the company’s GPU technology since CUDA was introduced back in 2006. It adds new technologies that enable lighting techniques thought to be years away from being practical, and it overhauls Nvidia’s now-classic reference cooler design. But all of it comes at a price.
The new generation of graphics cards is substantially more expensive than its predecessors, even factoring in the inflated prices graphics card buyers all over the world have faced recently. Here’s the breakdown of what these cards are and what they do.
Nvidia had three cards to show off at Gamescom: the RTX 2070, RTX 2080, and RTX 2080 Ti. That was something of a surprise, as in past generations Nvidia has staggered the showcase and release of its cards over a longer period of time. Where the 10-series saw the 1080 release first, followed by the 1070 a month later and the 1080 Ti a year after that, the 2080 and 2080 Ti were initially slated to go on sale on (“or around”) September 20, with the 2070 expected to follow in October.
We have a bit of bad news regarding those release dates. The RTX 2080 has been confirmed to ship on September 20, but the 2080 Ti has been delayed a week to September 27. In a forum post, an Nvidia representative said the company expects pre-orders to ship between September 20 and 27.
Each card comes at two different price points, with Huang claiming that third-party reference cards (those that aren’t overclocked or given custom cooling solutions) will be cheaper and slightly lower-clocked than the Founders Edition cards. The Nvidia FE GPUs will launch at $1,200 for the RTX 2080 Ti, $800 for the RTX 2080, and $600 for the RTX 2070. Reference designs will be noticeably cheaper at $1,000, $700, and $500 respectively, but that is still far more expensive than previous-generation cards: equivalent 10-series GPUs debuted at $200-$300 less.
Typically we would expect more mid-range cards at more reasonable prices to debut a couple of months after the flagship GPUs. A hypothetical RTX 2060 or RTX 2050 could show up before the end of the year, but Nvidia has yet to make any official announcement to that effect.
After sticking purely to ray tracing performance comparisons in its Gamescom reveal, Nvidia soon followed up with some more real-world performance numbers in various games. It shoehorned DLSS into those results too, meaning they needed to be taken with a grain of salt, but they gave us something to go on.
Since then, however, we’ve been able to get our hands on the cards for our own testing. Though they do offer noticeable improvements over the last generation of Pascal GPUs, they aren’t anywhere near as powerful as Nvidia’s marketing would suggest.
There’s no denying that the 2080 is more powerful than the 1080, and the 2080 Ti more powerful than the 1080 Ti, but that’s far from the whole story. The 2080 falls behind the 1080 Ti in some tests, perhaps most notably our 4K ultra gaming test. If the 2080 were priced more conservatively, this wouldn’t be so noteworthy, but when you can currently buy a 1080 Ti for $100 less than a 2080, it makes the 2080 a much less attractive purchase.
Ultimately, the 2080 and 2080 Ti are around 20-30 percent faster than their last-generation counterparts in the best cases, but in many tests that gap shrinks to single-digit percentages, so the performance improvement you see will depend on the software being run.
These results seem to corroborate earlier suggestions that Nvidia has altered its naming conventions with the new Turing graphics cards. The 2080 Ti is effectively the new name for the first iteration of this generation’s Titan GPUs, while the 2080 is a true 1080 replacement. With this in mind, we would expect Nvidia to release its typical enhanced “Ti” Turing card sometime next year, though it may well call that card a Titan instead, unless it reserves that moniker purely for prosumer and enterprise GPUs like the Titan V.
Our hands-on performance numbers are around what we expected to see based on the raw specifications of each card.
Note: Nvidia’s Founders Edition models will launch with slightly higher price tags, power requirements, and clock speeds than the reference models, which will, in turn, be overclocked and tweaked by third parties.
There are a number of interesting inter-generational changes at play here. CUDA core counts have increased by a similar amount (though not a similar percentage) as they did between the 900-series and 10-series graphics cards, which is why we see a noticeable, if not dramatic, increase in general performance. Clock speeds have actually come down, which isn’t wholly surprising on its own, but paired with an increase in power draw it is a little more so. It could be that those RT and Tensor cores require some juice of their own.
GDDR6 memory provides a solid bump in speed and bandwidth for the 2000-series, bringing both the 2080 and 2070 almost in line with the GTX 1080 Ti, though not quite. We now know that whether the 2080 can compete with the 1080 Ti really depends on the software being run and at what settings. The 1080 Ti is going to fall behind when ray tracing is involved, but in more traditional lighting scenarios it should be competitive with, if not still more powerful than, the RTX 2080.
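For a rough sense of where those bandwidth comparisons come from, the quick sketch below works out theoretical peak memory bandwidth from memory speed and bus width. The speeds and bus widths are the publicly listed launch specifications as we understand them; treat the exact figures, and the snippet itself, as a back-of-the-envelope illustration rather than anything official from Nvidia.

```python
# Back-of-the-envelope theoretical memory bandwidth: data rate (Gbps per pin) x bus width.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8  # divide by 8 to convert bits to bytes

# Publicly listed memory speeds (Gbps) and bus widths (bits) at launch -- treat as assumptions.
cards = {
    "RTX 2080 Ti (GDDR6)":  (14, 352),
    "RTX 2080 (GDDR6)":     (14, 256),
    "RTX 2070 (GDDR6)":     (14, 256),
    "GTX 1080 Ti (GDDR5X)": (11, 352),
}

for name, (rate, bus) in cards.items():
    print(f"{name}: {peak_bandwidth_gb_s(rate, bus):.0f} GB/s")
```

Run as written, that works out to roughly 448 GB/s for the 2080 and 2070 against roughly 484 GB/s for the 1080 Ti, which is why the older card still edges ahead on raw bandwidth despite the newer memory standard.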
While the number of traditional CUDA cores in the new graphics cards has increased across the board, the more exciting achievement of this new generation, we’re told, is the addition of dedicated hardware for ray tracing and AI. The Turing architecture includes RT cores, which accelerate the ray tracing calculations needed to produce realistic lighting and reflections within games without an enormous performance overhead.
Those RT cores run alongside Turing’s Tensor cores, which utilize AI “trained by supercomputers” to fill in the blanks using a technique known as denoising, effectively a new form of advanced anti-aliasing. Huang also discussed the possibility of foveated rendering, which could make virtual reality titles much less hardware-intensive by focusing processing power where the gamer is looking and rendering everything in their peripheral vision at a lower level of detail.
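To give a sense of the work those RT cores are built to offload, here is a minimal, purely illustrative sketch of the fundamental operation in any ray tracer: testing whether a single ray hits a piece of geometry, in this case a sphere. This is our own simplified example, not Nvidia’s implementation; a real-time ray tracer fires millions of such rays every frame against far more complex scenes, which is precisely the workload the dedicated hardware is designed to accelerate.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss.
    `direction` is assumed to be a normalized 3D vector."""
    # Vector from the sphere center to the ray origin.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4.0 * c   # the quadratic's 'a' term is 1 for a normalized direction
    if discriminant < 0:
        return None                  # the ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t > 0 else None      # ignore intersections behind the camera

# One camera ray fired straight down the z-axis at a unit sphere 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

Shadows, reflections, and refractions are all built from repeated intersection tests like this one, which is why accelerating them in hardware has such an outsized effect.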
These new technologies at the heart of the 2000-series architecture mean that certain games will be able to leverage real-time reflections and advanced anti-aliasing techniques like never before. Demonstrations at Gamescom showed explosions that would typically be outside the player’s view being rendered and reflected in visible surfaces like a car door or a character’s eyeball.
It’s beautiful stuff. Ray tracing has long been considered the end goal of digital visuals, effectively simulating real light rays within a scene, and Nvidia made it clear during its demonstration that its new cards handle that sort of rendering far better than anything that has come before them.
