The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream
by Nate Oh on January 7, 2019 9:00 AM EST

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of the design untouched.
So far we’ve looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we’ve seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the xx70 part has traditionally been the value-oriented enthusiast offering, NVIDIA's higher price tags this time around mean that the $500 RTX 2070 is not especially value-oriented. Instead, the role of the enthusiast value offering falls to the next member in line of the GeForce RTX 20 family – and that part is coming next week.
Launching next Tuesday, January 15th is the fourth member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process. And as with the other RTX launches last year, NVIDIA is taking part by releasing their own GeForce RTX 2060 Founders Edition card, which we are taking a look at today.
NVIDIA GeForce Specification Comparison

| | RTX 2060 Founders Edition | GTX 1060 6GB (GDDR5) | GTX 1070 (GDDR5) | RTX 2070 |
|---|---|---|---|---|
| CUDA Cores | 1920 | 1280 | 1920 | 2304 |
| ROPs | 48 | 48 | 64 | 64 |
| Core Clock | 1365MHz | 1506MHz | 1506MHz | 1410MHz |
| Boost Clock | 1680MHz | 1709MHz | 1683MHz | 1620MHz (FE: 1710MHz) |
| Memory Clock | 14Gbps GDDR6 | 8Gbps GDDR5 | 8Gbps GDDR5 | 14Gbps GDDR6 |
| Memory Bus Width | 192-bit | 192-bit | 256-bit | 256-bit |
| VRAM | 6GB | 6GB | 8GB | 8GB |
| Single Precision Perf. | 6.5 TFLOPS | 4.4 TFLOPS | 6.5 TFLOPS | 7.5 TFLOPS (FE: 7.9 TFLOPS) |
| "RTX-OPS" | 37T | N/A | N/A | 45T |
| SLI Support | No | No | Yes | No |
| TDP | 160W | 120W | 150W | 175W (FE: 185W) |
| GPU | TU106 | GP106 | GP104 | TU106 |
| Transistor Count | 10.8B | 4.4B | 7.2B | 10.8B |
| Architecture | Turing | Pascal | Pascal | Turing |
| Manufacturing Process | TSMC 12nm "FFN" | TSMC 16nm | TSMC 16nm | TSMC 12nm "FFN" |
| Launch Date | 1/15/2019 | 7/19/2016 | 6/10/2016 | 10/17/2018 |
| Launch Price | $349 | MSRP: $249, FE: $299 | MSRP: $379, FE: $449 | MSRP: $499, FE: $599 |
Like its older siblings, the GeForce RTX 2060 (6GB) comes in at a higher price-point relative to previous generations, and at $349 the cost is quite unlike the GeForce GTX 1060 6GB’s $299 Founders Edition and $249 MSRP split, let alone the GeForce GTX 960’s $199. At the same time, it still features Turing RT cores and tensor cores, bringing a new entry point for those interested in utilizing GeForce RTX platform features such as realtime raytracing.
Diving into the specs and numbers, the GeForce RTX 2060 sports 1920 CUDA cores, meaning we’re looking at a 30 SM configuration, versus RTX 2070’s 36 SMs. As the core architecture of Turing is designed to scale with the number of SMs, this means that all of the core compute features are being scaled down similarly, so the 17% drop in SMs means a 17% drop in the RT Core count, a 17% drop in the tensor core count, a 17% drop in the texture unit count, a 17% drop in L0/L1 caches, etc.
Unsurprisingly, clockspeeds are going to be very close to NVIDIA’s other TU106 card, RTX 2070. The base clockspeed is down a bit to 1365MHz, but the boost clock is up a bit to 1680MHz. So on the whole, RTX 2060 is poised to deliver around 87% of the RTX 2070’s compute/RT/texture performance, which is an uncharacteristically small gap between an xx70 card and an xx60 card. In other words, the RTX 2060 is in a good position to punch above its weight in compute/shading performance.
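To make that scaling concrete, here's a quick back-of-the-envelope sketch (our own illustrative arithmetic in Python, using only the figures from the specification table above) that reproduces the quoted single precision numbers and the roughly 87% ratio; FP32 throughput is simply CUDA cores × 2 FLOPs per clock (FMA) × boost clock.

```python
# Illustrative napkin math only, using the spec table's figures.

def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    # 2 FLOPs per CUDA core per clock (fused multiply-add)
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

sm_ratio = 30 / 36                      # RTX 2060 vs. RTX 2070 SM count: ~83%, i.e. the 17% cut
rtx_2060 = fp32_tflops(1920, 1680)      # ~6.5 TFLOPS
rtx_2070 = fp32_tflops(2304, 1620)      # ~7.5 TFLOPS (reference boost clock)

print(f"SM ratio: {sm_ratio:.0%}")
print(f"RTX 2060: {rtx_2060:.1f} TFLOPS")
print(f"RTX 2070: {rtx_2070:.1f} TFLOPS")
# ~86-87%: the higher boost clock claws back part of the SM deficit
print(f"Throughput ratio: {rtx_2060 / rtx_2070:.1%}")
```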
However, TU106 has taken a bigger trim on the backend, and in workloads that aren’t pure compute, the drop will be a bit harder. The card is shipping with just 6GB of GDDR6 VRAM, as opposed to 8GB on its bigger brother, as NVIDIA is leaving 2 of TU106’s 8 memory controllers unpopulated. The result is a 192-bit memory bus, meaning that even with the same 14Gbps GDDR6, the RTX 2060 offers only 75% of the memory bandwidth of the RTX 2070. Or to put this in numbers, the RTX 2060 will offer 336GB/sec of bandwidth to the RTX 2070’s 448GB/sec.
And since the memory controllers, ROPs, and L2 cache are all tied together very closely in NVIDIA’s architecture, this means that ROP throughput and the amount of L2 cache are also being shaved by 25%. So for graphics workloads the practical performance drop is going to be greater than the 13% mark for compute throughput, but also generally less than the 25% mark for ROP/memory throughput.
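The backend haircut can be sanity-checked the same way; a small sketch (again, just illustrative arithmetic taken from the table above) shows where the 336GB/sec, 448GB/sec, and 75% figures come from.

```python
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbps), divided by 8 for bytes.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

rtx_2060_bw = bandwidth_gb_s(192, 14)   # 6 of 8 32-bit controllers populated -> 336 GB/s
rtx_2070_bw = bandwidth_gb_s(256, 14)   # full 256-bit bus -> 448 GB/s

print(f"RTX 2060: {rtx_2060_bw:.0f} GB/s")
print(f"RTX 2070: {rtx_2070_bw:.0f} GB/s")
print(f"Bandwidth ratio: {rtx_2060_bw / rtx_2070_bw:.0%}")  # 75%
print(f"ROP ratio: {48 / 64:.0%}")  # ROPs and L2 scale with the memory partitions -> also 75%
```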
Speaking of video memory, NVIDIA has dubbed this card the RTX 2060, but early indications are that there will be other RTX 2060 configurations with less VRAM, and possibly fewer CUDA cores and other hardware resources. Hence it seems forward-looking to refer to the product in this article as the RTX 2060 (6GB). As you might recall, the GTX 1060 6GB was launched as simply the ‘GTX 1060’ and appeared as such in our launch review, up until the release of the ‘GTX 1060 3GB’ a month later – a name that signaled only the smaller frame buffer, and not the card's cut-down GPU configuration. Combined with the ongoing GTX 1060 naming shenanigans, as well as the GTX 1050 variants (and AMD’s own Polaris naming shenanigans, also of note), it seems prudent to make this clarification now in the interest of future accuracy and consumer awareness.
NVIDIA GTX 1060 Variants Specification Comparison

| | GTX 1060 6GB | GTX 1060 6GB (9 Gbps) | GTX 1060 6GB (GDDR5X) | GTX 1060 5GB (Regional) | GTX 1060 3GB |
|---|---|---|---|---|---|
| CUDA Cores | 1280 | 1280 | 1280 | 1280 | 1152 |
| Texture Units | 80 | 80 | 80 | 80 | 72 |
| ROPs | 48 | 48 | 48 | 40 | 48 |
| Core Clock | 1506MHz | 1506MHz | 1506MHz | 1506MHz | 1506MHz |
| Boost Clock | 1708MHz | 1708MHz | 1708MHz | 1708MHz | 1708MHz |
| Memory Clock | 8Gbps GDDR5 | 9Gbps GDDR5 | 8Gbps GDDR5X | 8Gbps GDDR5 | 8Gbps GDDR5 |
| Memory Bus Width | 192-bit | 192-bit | 192-bit | 160-bit | 192-bit |
| VRAM | 6GB | 6GB | 6GB | 5GB | 3GB |
| TDP | 120W | 120W | 120W | 120W | 120W |
| GPU | GP106 | GP106 | GP104* | GP106 | GP106 |
| Launch Date | 7/19/2016 | Q2 2017 | Q3 2018 | Q3 2018 | 8/18/2016 |
Moving on, NVIDIA is rating the RTX 2060 for a TDP of 160W. This is down from the RTX 2070, but only slightly, as those cards are rated for 175W. Cut-down GPUs have limited options for reducing their power consumption, so it’s not unusual to see a card like this rated to draw almost as much power as its full-fledged counterpart.
All in all, the GeForce RTX 2060 (6GB) is quite the interesting card, as the value-enthusiast segment tends to be more attuned to price and power consumption than the performance-enthusiast segment. Additionally, as a value-enthusiast card and potential upgrade option it will also need to perform well on a wide range of older and newer games – in other words, traditional rasterization performance rather than hybrid rendering performance.
Meanwhile, when it comes to evaluating the RTX 2060 itself, it remains unclear how to measure generalizable hybrid rendering performance. DXR was rolled out fairly recently, tied to the Windows 10 October 2018 Update (1809). 3DMark’s DXR benchmark, Port Royal, is due on January 8th, while Battlefield V remains the sole title with realtime raytracing for the moment, and optimization efforts there are ongoing, as seen in recent driver releases. Meanwhile, it seems that some of Turing's other advanced shader features (Variable Rate Shading) are currently only available in Wolfenstein II.
Of course, RTX support for a number of titles has been announced, and many of those games are due this year, but there is no centralized resource to keep track of availability. It’s true that developers are ultimately responsible for this information and their games, but on the flipside, RTX has required very close cooperation between NVIDIA and developers for quite some time. In the end, RTX is a technology platform spearheaded by NVIDIA and inextricably linked to their hardware, so it’s to the detriment of potential RTX 20 series owners that they must research and collate for themselves which current games can make use of the specialized hardware features they purchased.
Planned NVIDIA Turing Feature Support for Games

| Game | Real Time Raytracing | Deep Learning Supersampling (DLSS) | Turing Advanced Shading |
|---|---|---|---|
| Anthem | | Yes | |
| Ark: Survival Evolved | | Yes | |
| Assetto Corsa Competizione | Yes | | |
| Atomic Heart | Yes | Yes | |
| Battlefield V | Yes (available) | Yes | |
| Control | Yes | | |
| Dauntless | | Yes | |
| Darksiders III | | Yes | |
| Deliver Us The Moon: Fortuna | | Yes | |
| Enlisted | Yes | | |
| Fear The Wolves | | Yes | |
| Final Fantasy XV | | Yes (available in standalone benchmark) | |
| Fractured Lands | | Yes | |
| Hellblade: Senua's Sacrifice | | Yes | |
| Hitman 2 | | Yes | |
| In Death | | | Yes |
| Islands of Nyne | | Yes | |
| Justice | Yes | Yes | |
| JX3 | Yes | Yes | |
| KINETIK | | Yes | |
| MechWarrior 5: Mercenaries | Yes | Yes | |
| Metro Exodus | Yes | | |
| Outpost Zero | | Yes | |
| Overkill's The Walking Dead | | Yes | |
| PlayerUnknown's Battlegrounds | | Yes | |
| ProjectDH | Yes | | |
| Remnant: From the Ashes | | Yes | |
| SCUM | | Yes | |
| Serious Sam 4: Planet Badass | | Yes | |
| Shadow of the Tomb Raider | Yes | | |
| Stormdivers | | Yes | |
| The Forge Arena | | Yes | |
| We Happy Few | | Yes | |
| Wolfenstein II | | | Yes, Variable Shading (available) |
So the RTX 2060 (6GB) is in a better situation than the RTX 2070. With comparative GTX 10 series products either very low on stock (GTX 1080, GTX 1070) or at higher prices (GTX 1070 Ti), there’s less potential for sales cannibalization. And as Ryan mentioned in the AnandTech 2018 retrospective on GPUs, with leftover Pascal inventory due to the cryptocurrency bubble, there’s much less pressure to sell Turing GPUs at lower prices. So the RTX 2060 leaves the existing GTX 1060 6GB (1280 cores) and 3GB (1152 cores) with breathing room. That being said, $350 is far from the usual ‘mainstream’ price-point, and even more expensive than the popular $329 enthusiast-class GTX 970.
Across the aisle, the recent Radeon RX 590 is in the mix, though its direct competition is the GTX 1060 6GB. Otherwise, the Radeon RX Vega 56 is likely the closer matchup in terms of performance. Even then, AMD and its partners are going to have little choice here: either they drop prices to accommodate the introduction of the RTX 2060, or they essentially wind down Vega sales.
Unfortunately we've not had the card in for testing as long as we would've liked, but regardless, RTX platform performance testing is in the same situation as it was during the RTX 2070 launch. Because the technology is still in its early days, we can’t accurately determine the performance suitability of the RTX 2060 (6GB) as an entry point for the RTX platform. So the same caveats apply to gamers considering taking the plunge.
Q1 2019 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon RX Vega 56 | $499 | GeForce RTX 2070 |
| | $449 | GeForce GTX 1070 Ti |
| | $349 | GeForce RTX 2060 (6GB) |
| | $335 | GeForce GTX 1070 |
| Radeon RX 590 | $279 | |
| | $249 | GeForce GTX 1060 6GB (1280 cores) |
| Radeon RX 580 (8GB) | $200/$209 | GeForce GTX 1060 3GB (1152 cores) |
Comments
sing_electric - Monday, January 7, 2019
It's likely that Nvidia has actually done something to restrict the 2060s to 6GB - either through its agreements with board makers or by physically disabling some of the RAM channels on the chip (or both). I agree, it'd be interesting to see how it performs, since I'd suspect it'd be at a decent price/perf point compared to the 2070, but that's also exactly why we're not likely to see it happen.

CiccioB - Monday, January 7, 2019
You can't add memory at will. You need to take into consideration the available bus, and as this is a 192-bit bus, you can install 3, 6 or 12 GB of memory, unless you cope with a hybrid configuration through heavily optimized drivers (as nvidia did with the 970).

nevcairiel - Monday, January 7, 2019
Even if they wanted to increase it, just adding 2GB more is hard to impossible. The chip has a certain memory interface, in this case 192-bit. That's 6x 32-bit memory controllers, for 6 1GB chips. You cannot just add 2 more without getting into trouble - like the 970, which had unbalanced memory speeds, which was terrible.

mkaibear - Tuesday, January 8, 2019
"terrible" in this case defined as "unnoticeable to anyone not obsessed with benchmark scores"Retycint - Tuesday, January 8, 2019 - link
It was unnoticeable back then, because even the most intensive game/benchmark rarely utilized more than 3.5GB of VRAM. The issue, however, comes when newer games inevitably start to consume more and more VRAM - at which point the "terrible" 0.5GB of VRAM will become painfully apparent.

mkaibear - Wednesday, January 9, 2019
So, you agree with my original comment, which was that it was not terrible at the time? Four years from launch and it's not yet "painfully apparent"? That's not a bad lifespan for a graphics card. Or if you disagree, can you tell me which games, now, have noticeable performance issues from using a 970?
FWIW my 970 has been great at 1440p for me for the last 4 years. No performance issues at all.
atragorn - Monday, January 7, 2019
I am more interested in that comment: "yesterday’s announcement of game bundles for RTX cards, as well as ‘G-Sync Compatibility’, where NVIDIA cards will support VESA Adaptive Sync. That driver is due on the same day of the RTX 2060 (6GB) launch, and it could mean the eventual negation of AMD’s FreeSync ecosystem advantage." Will ALL nvidia cards support Freesync/Freesync2, or only the RTX series?

A5 - Monday, January 7, 2019
Important to remember that VESA ASync and FreeSync aren't exactly the same. I don't *think* it will be instant compatibility with the whole FreeSync range, but it would be nice. The G-Sync hardware is too expensive for its marginal benefits - this capitulation has been a loooooong time coming.
Devo2007 - Monday, January 7, 2019
Anandtech's article about this last night mentioned support will be limited to Pascal & Turing cards.

Ryan Smith - Monday, January 7, 2019
https://www.anandtech.com/show/13797/nvidia-to-sup...