
Nvidia GeForce RTX 3090 review

The Nvidia GeForce RTX 3090 is being pushed as the most powerful consumer graphics card ever made, and it comes in at more than twice the price of the RTX 3080 – but is it worth that astronomical price tag? Read our review to find out.
When the Nvidia GeForce RTX 3090 became the sole replacement for the previous generation’s two most powerful GPUs, the Nvidia Titan RTX and the RTX 2080 Ti, it was clear it had a high bar to meet. Thankfully, this massive GPU more than meets that standard, handling heavy gaming and 3D rendering alike. With all that power, plus the 24GB of GDDR6X memory hiding behind its huge heatsink, the Nvidia GeForce RTX 3090 can tackle 8K gaming at 60 fps, though with some hiccups. That also means you’ll get fantastic 4K performance in the newest AAA games.

And, while that’s great for those diving into the best PC games, this graphics card is more likely to attract those who need every last ounce of graphics power to render video and 3D animation. As Nvidia’s flagship, the GeForce RTX 3090 simply can’t be beaten on performance. That power comes with a price to match, though, which makes the RTX 3080 – as well as AMD’s alternative, the Radeon RX 6900 XT – a much more attractive proposition for most mainstream users. That’s why this card is really only recommended for those who need hardware-accelerated rendering and for hardcore enthusiasts who want the absolute best graphics card, price be damned.

The Nvidia GeForce RTX 3090 is available right now, starting at $1,499 (£1,399, around AU$2,030) for Nvidia’s own Founders Edition. However, this is the first time Nvidia has opened a Titan-level card up to third-party graphics card manufacturers like MSI, Asus and Zotac, which means you can expect some versions of the RTX 3090 to be significantly more expensive.

It’s hard to pin down whether this is a price increase or a price cut over the previous generation. Compared to the Titan RTX, it’s a massive price cut: that card cost an outrageous $2,499 (£2,399, AU$3,999) for similar, albeit last-generation, specs. However, the RTX 2080 Ti, which in some ways still doesn’t have a direct successor, launched at $1,199 (£1,099, AU$1,899). The RTX 3090, then, exists in a kind of middle ground. The GeForce name suggests this graphics card is aimed at gamers, but the specs and pricing suggest it’s geared more towards prosumers who need raw rendering power but aren’t quite ready to jump into the Nvidia Quadro and Tesla worlds.

Just like its little sibling, the RTX 3080, the RTX 3090 is built on the Nvidia Ampere architecture, using the full-fat GA102 GPU. This time around, we’re getting 82 Streaming Multiprocessors (SMs), making for a total of 10,496 CUDA cores, along with 328 Tensor cores and 82 RT cores. At first glance, the small bump up from the 72 SMs on the Turing-based Titan RTX seems like a minor improvement, but one of the most significant changes in the Ampere architecture is that both datapaths in each SM can now handle FP32 workloads. That effectively doubles the CUDA core count per SM, which is why the RTX 3090 is such a rendering behemoth.

The RTX 3090 is also rocking 24GB of GDDR6X video memory on a 384-bit bus, which makes for 936 GB/s of memory bandwidth – nearly a terabyte of data every second. Having such a huge allocation of VRAM that is this fast means that anyone who does heavy 3D rendering work in applications like DaVinci Resolve and Blender will see a huge benefit. And when your work involves these applications, anything that shaves time off a project saves you money in the long term.
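If you want to see where those headline numbers come from, the quick Python sanity check below reproduces them. The 128 FP32 cores per SM and the 19.5 Gbps per-pin GDDR6X data rate aren’t quoted directly above – they’re the commonly published Ampere figures, and they fall straight out of the core count and bandwidth the card advertises.

```python
# Back-of-the-envelope check of the RTX 3090's headline specs.
# Assumed (published figures, not measured here): 128 FP32 CUDA cores per
# Ampere SM, and a 19.5 Gbps per-pin GDDR6X data rate.

sms = 82
fp32_cores_per_sm = 2 * 64                   # both datapaths can now issue FP32
print(sms * fp32_cores_per_sm)               # 10496 CUDA cores

bus_width_bits = 384
data_rate_gbps = 19.5                        # per pin
print(bus_width_bits * data_rate_gbps / 8)   # 936.0 GB/s of memory bandwidth
```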
Combined with its comparatively low cost – at least next to the Titan RTX – that raw specification makes the RTX 3090 straight up a bargain.

As we mentioned in our RTX 3080 review, both the Tensor cores and the RT cores that Nvidia has made such a huge deal of over the past couple of graphics card generations see big improvements, too. Namely, the throughput of the RT cores has doubled with the second-generation units on RTX 3000 series cards. In ray tracing workloads, the SM essentially casts a light ray, then offloads the ray tracing work to the RT cores, which calculate where in the scene it bounces and report that data back to the SM. In the past, real-time ray tracing was basically impossible, because the SM would be responsible for that whole calculation on its own, on top of any rasterization it had to do at the same time.

Even with the RT core taking on the bulk of that workload, though, ray tracing remains a very computationally expensive technique with a heavy performance cost, which is why DLSS is becoming more and more important, both in gaming and in programs like D5 Render. The third-generation Tensor cores in Nvidia Ampere graphics cards have also seen a massive improvement, doubling in speed over Turing’s Tensor cores.
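To make that division of labour a bit more concrete, here is a deliberately toy-sized Python sketch of the hand-off described above. Nothing in it maps to a real GPU API – the “RT core” here is just an ordinary function doing a ray/sphere intersection test – it only illustrates which side decides to cast rays and which side answers what they hit.

```python
# Toy illustration only: shader code (the "SM") decides which rays to cast,
# a stand-in "RT core" answers the intersection query, and the SM shades
# whatever comes back. No real GPU API is involved.

from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float
    color: tuple

def rt_core_intersect(origin, direction, spheres):
    """Stand-in for the RT core: return the nearest sphere the ray hits."""
    nearest, nearest_t = None, float("inf")
    for s in spheres:
        # Standard quadratic ray/sphere test (direction assumed unit length).
        oc = tuple(o - c for o, c in zip(origin, s.center))
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - s.radius ** 2
        disc = b * b - 4 * c
        if disc >= 0:
            t = (-b - disc ** 0.5) / 2
            if 0 < t < nearest_t:
                nearest, nearest_t = s, t
    return nearest

def shade_pixel(ray_direction, scene):
    """The 'SM' side: cast a ray, offload the intersection, shade the result."""
    hit = rt_core_intersect((0.0, 0.0, 0.0), ray_direction, scene)
    return hit.color if hit else (0, 0, 0)   # background if nothing was hit

scene = [Sphere(center=(0.0, 0.0, -5.0), radius=1.0, color=(255, 0, 0))]
print(shade_pixel((0.0, 0.0, -1.0), scene))  # -> (255, 0, 0)
```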
