Whether it’s called the GTX 1180 or the GTX 2080, Nvidia’s next-gen graphics card is likely coming in August.
This is photoshopped, in case you couldn’t tell.
It’s been a while since Nvidia introduced its last new graphics architecture for gaming GPUs—more than two years, to be precise. That last architecture was Pascal, and it has powered everything from the top-tier GTX 1080 and GTX 1080 Ti to the entry-level GTX 1050 and GT 1030. The next generation of Nvidia graphics cards is finally approaching, using the Turing architecture. Here’s what we know about the GTX 1180, what we expect in terms of price, specs, and release date, and the winding path we’ve traveled between Pascal and Turing.

The things we ‘know’ about GTX 1180
The list of things that we know—that we’re absolutely certain are correct—can basically be summarized in a single word: nothing. Nvidia has been extremely tight-lipped about its future GPUs this round, and we’re not even sure about the name. Rumors of GTX 1180 and GTX 2080 have been swirling for months, though it looks like 1180 is going to win out as the official name. We’re going to stick with 1180 for the remainder of this piece and are confident enough in the name that it’s ensconced in a cheap photoshop above. (Expect a hasty update if the winds of change start gusting.) We’re also not sure what the codename for these new chips will be—GT104 would be an easy choice, but Nvidia already used GT part names with the Tesla architecture back in the GTX 280 days (2008-2009). Those were all GT200-series labels, though, so GT100-series names could still happen.
While Nvidia hasn’t officially revealed anything, we’re 99 percent certain of three things. First, the next-generation architecture is codenamed Turing. Second, it will be manufactured using TSMC’s 12nm FinFET process. (We may see some Turing GPUs manufactured by Samsung later, as was the case with the GTX 1050/1050 Ti and GT 1030 Pascal parts, but the initial parts will come from TSMC.) Third, the first Turing graphics cards will use GDDR6 memory—not HBM2, due to costs and other factors, but GDDR6 will deliver higher performance than the current GDDR5X. Let’s hit those last two in a bit more detail.
What does the move to 12nm from 16nm mean in practice? Various sources indicate TSMC’s 12nm is more of a refinement and tweak to the existing 16nm rather than a true reduction in feature sizes. In that sense, 12nm is more of a marketing term than a true die shrink, but optimizations to the process technology over the past two years should help improve clockspeeds, chip density, and power use—the holy trinity of faster, smaller, and cooler running chips.
GDDR6 continues down the path graphics memory has traveled from GDDR5 and GDDR5X. Over its lifetime, GDDR5 has gone from 3.6 GT/s (that’s giga-transfers per second, though in practice it’s almost the same as Gbit/s) with AMD’s HD 4870 back in 2008, to 9 GT/s with the GTX 1060 6GB. GDDR5X reaches a range of 10-14 GT/s by sending more data per clock rather than running higher clockspeeds. Where the GDDR5 in the GTX 1070 has a base clock of 2002MHz (8,008 MT/s effective), the GDDR5X in the GTX 1080 has a base clock of just 1251MHz but sends twice as much data per clock (10,008 MT/s effective). Micron ended up being the only company to produce GDDR5X, with Nvidia as its only customer, running the memory at up to 11 GT/s. GDDR6 will see far broader support, with Micron, Samsung, and SK-Hynix all participating. GDDR6 has an official target range of 14-16 GT/s, and Micron is already showing 18 GT/s modules. GTX 1180 cards are likely to use faster GDDR6, but the exact clockspeeds remain a question mark.
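If you want to sanity-check those numbers yourself, the effective data rate is simply the base memory clock multiplied by the transfers per clock: 4 for GDDR5, 8 for GDDR5X. A minimal sketch of the arithmetic:

```python
def effective_rate_mts(base_clock_mhz: float, transfers_per_clock: int) -> float:
    """Effective memory data rate in MT/s: base clock times transfers per clock."""
    return base_clock_mhz * transfers_per_clock

# GTX 1070's GDDR5 moves four transfers per clock cycle
print(effective_rate_mts(2002, 4))  # 8008 MT/s
# GTX 1080's GDDR5X doubles the data per clock to eight transfers
print(effective_rate_mts(1251, 8))  # 10008 MT/s
```

That’s why the GTX 1080’s memory can run at a much lower base clock and still come out ahead on effective bandwidth.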
The Volta GV100 block diagram is a monster of cores.

Expectations for GTX 1180
Moving on to what we expect from Turing and the GTX 1180, the list grows substantially. Obviously, performance needs to be better than the existing GPUs, and prices need to be lower for the same level of performance. That doesn’t mean we’ll see insane performance at low prices, but at the very least we should see GTX 1080 Ti levels of performance fall into the $500-$600 range. Nvidia has multiple paths to delivering higher performance than the GTX 1080 Ti, and which one the GTX 1180 takes isn’t yet known, so here are the options.
First, Nvidia can go with a larger chip and more cores. Originally slated to arrive last year, Volta morphed into a product that will only see the light of day in supercomputing, machine learning, and professional markets. Volta is incredibly potent, with the Titan V besting the GTX 1080 Ti by up to 30 percent, but it also includes a lot of technology that is of marginal use for gamers—specifically, games don’t need double-precision FP64, and they don’t need the Tensor cores. The easiest solution to envision is that Turing initially ships with a design similar to GV100, but without the Tensor cores or FP64 units—up to 5,376 CUDA cores would certainly give Turing GPUs a shot in the arm.
More likely is that Turing will be a similar rollout to Pascal. The first GTX 1180 cards will launch this year, but they won’t be the full-fat version of Turing. Instead, we’ll get GPUs that look a lot like the current GP102, meaning up to 3840 CUDA cores, only with improved efficiency and features and slightly higher clockspeeds. Then in another 9-12 months, we’ll get Big Turing and GTX 1180 Ti, with more cores, more memory, and more performance.
The first Turing cards will likely look a lot like the full GP102 block diagram, only with a narrower memory interface.
But Nvidia isn’t locked into any specific core count. If Turing sticks with 128 CUDA cores per SM, which seems likely, the balance between core count and clockspeed becomes the key question—more cores within the same power budget generally means lower clocks. The Titan V runs 5,120 cores at up to 1455MHz, and with a refined 12nm process and changes to the underlying architecture, Turing could run 5,120 CUDA cores at 1.5-1.7GHz. Or Nvidia could go with fewer cores and higher clocks, with the GTX 1180 potentially being the first Nvidia GPU to ship with stock clocks above 2GHz. Regardless of how Nvidia gets there, we expect performance around 25 percent better than the GTX 1080 Ti FE, just as the GTX 1080 was around 25 percent faster than the GTX 980 Ti.
What about the GDDR6—how much VRAM will GTX 1180 have, and how fast will it clock? The safe bet is 8GB, though 12GB and 16GB are also possible. GDDR6 is officially set to run at 14-16 GT/s, but Micron has already talked about 18 GT/s as well. With a 256-bit interface, that would give GTX 1180 anywhere from 448GB/s to 576GB/s of bandwidth, and improvements in the architecture could allow Turing to make better use of the available bandwidth. My bet would be for 16 GT/s GDDR6, with 512GB/s, since that should be available from multiple manufacturers.
More VRAM is an outside possibility, but having 16GB of VRAM on a graphics card is a lot like having 32GB of system memory: only professional applications are likely to use it. Even 8GB of VRAM is mostly overkill for games right now, and with consoles continuing to ship with 8GB, that capacity will remain a major target for many years. Plus, going with 8GB on the GTX 1180 leaves the door open for a 12-24GB 1180 Ti and/or Titan card in the future. 12GB on a 384-bit interface could deliver 672-864GB/s of bandwidth, depending on where it falls in the 14-18 GT/s spectrum. 24GB would be almost purely for content creators, professionals, and supercomputing—something the Tesla V100 already addresses better.
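The bandwidth figures above fall straight out of the bus width and transfer rate: bytes per transfer (bus width divided by 8) times giga-transfers per second. A quick sketch of the math, using the rumored configurations (the 384-bit card here is hypothetical):

```python
def bandwidth_gbs(bus_width_bits: int, rate_gts: float) -> float:
    """Memory bandwidth in GB/s: (bus width in bytes) x (giga-transfers per second)."""
    return bus_width_bits / 8 * rate_gts

# Rumored GTX 1180: 256-bit GDDR6 at 14-18 GT/s
for rate in (14, 16, 18):
    print(f"256-bit @ {rate} GT/s: {bandwidth_gbs(256, rate):.0f} GB/s")  # 448, 512, 576

# A hypothetical 1180 Ti with a 384-bit interface
for rate in (14, 18):
    print(f"384-bit @ {rate} GT/s: {bandwidth_gbs(384, rate):.0f} GB/s")  # 672, 864
```

For reference, the same formula gives the GTX 1080 Ti’s 484GB/s: a 352-bit bus at 11 GT/s.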
Power requirements are almost certainly going to be higher this round than with the GTX 1070/1080, mostly because the process technology hasn’t changed enough during the past two years. 250W cards are relatively common these days, and with the GTX 1080 using 180W, the GTX 1180 will probably land in the 200-220W range. Expect 8-pin plus 6-pin PCIe power connections on the reference (aka Founders Edition) models, with dual 8-pin connectors on enthusiast cards.
And then there’s the price, where there’s plenty of uncertainty.
Everything we know about the GTX 1180, Nvidia's next graphics card