Being an NVidia user for 3 generations, I'm finding it hard to ignore this card's value, especially since I've invested $100 each on my last two NVidia cards (including my SLI setup) adding liquid cooling. The brackets alone are $30.
Even if this card is less efficient per watt than NVidia's, the difference is negligible once you work out what the extra power actually costs in dollars. It's like comparing different brands of LED bulbs: some use 10-20% less energy, but the overall value isn't as good because the more efficient ones cost more, don't dim, have a slight buzzing noise, etc.
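To put rough numbers on it (my own assumptions, not anything from the review): say the less efficient card pulls an extra 50W while gaming, you game 3 hours a day, and power costs $0.12/kWh. That's 0.05 kW x 3 h x 365 = ~55 kWh a year, or about $6.50 annually - it would take the better part of a decade just to recoup a $50 price premium on the "efficient" card.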
After reading this review I find the Fury X more impressive than I otherwise would have.
Yeah a lot of reviews painted doom and gloom but the watercooler has to be factored into that price. Noise and system heat removal of the closed loop cooler are really nice. I still think they should launch the vanilla Fury at $499 - if it gets close to the performance of the Fury X they'll have a decent card on their hands. To me though the one I'll be keeping an eye out for is Nano. If they can get something like 80% of the performance at roughly half the power, that would make a lot of sense for more moderately spec'd systems. Regardless of what flavor, I'll be interested to see if third parties will soon launch tools to bump the voltage up and tinker with HBM clocks.
Water cooling, if anything, has proven to be a negative so far for Fury X, with all the concerns about pump whine. And in the end, where is the actual benefit of water cooling when the card still ends up slower than the 980Ti with virtually no overclocking headroom?
Based on Ryan's review, with Fury Air we'll most likely see the downsides of leakage on TDP, and it's also expected to be 7/8ths of the SPs/TMUs. Fury Nano also appears to be positioned as a niche part that will cost as much as, if not more than, Fury X, which is amazing because at 80-85% of Fury X it won't be any faster than the GTX 980 at 1440p and below, and it's right in that same TDP range too. It will have the benefit of form factor, but will that be enough to justify a massive premium?
You can get a bad batch of pumps in any CLC. Cooler Master screwed up (and not for the first time!) but the fixed units seem to be fine and for the units out there with a whine just RMA them. I'm certainly not going to buy one, but I know people that love water cooled components and like the simplicity and warranty of a CL system.
Nobody knows the price of the Nano, nor final performance. I think they'd be crazy to price it over $550 even factoring in the form factor - unless someone releases a low-profile model, then they can charge whatever they want for it. We also don't know final performance of Fury compared to Fury X, though I already said they should price it more aggressively. I don't think leakage will be that big of an issue as they'll probably cap thermals. Clocks will vary depending on load but they do on Maxwell too - it's the new norm for stock aircooled graphics cards.
As for overclocking, yeah that was really terrible. Until people are able to tinker with voltage controls and the memory, there's little point. Even then, set some good fan profiles.
I'm wondering the same thing. When Hector Ruiz left Motorola, they fell apart, and when he joined AMD, they out-engineered and out-manufactured Intel with quality control parity. I guess the fiasco would be when Hector Ruiz left AMD, because then they fell apart.
Oh, and I also forgot: his biggest mistake was vastly overpaying for ATI, leading both companies on this downward spiral of crippling debt and unrealized potential.
Uh... Bulldozer happened on Ruiz's watch, and he also wasn't able to capitalize on K8's early performance leadership. Beyond that, he orchestrated the sale of their fabs to ATIC, culminating in the usurious take-or-pay WSA with GloFo that still cripples them to this day. But of course, it was no surprise why he did this: he traded AMD's fabs for the chairmanship of GloFo, which he was forced to resign from in shame due to insider trading allegations. Yep, Ruiz was truly a crook, but AMD fanboys love to throw stones at Huang. :D
It's silly to paint AMD as the underdog. It was not that long ago that they were able to buy ATI (a company that was bigger than NVIDIA). I remember at the time a lot of people were saying that NVIDIA was doomed and could never stand up to the might of a combined AMD + ATI. AMD is not the underdog, AMD got beat by the underdog.
I mean, AMD has a market cap of ~2B, compared to 11B of Nvidia and ~140B of Intel. They also have only ~25% of the dGPU market I believe. While I don't know a lot about stocks and I'm sure this doesn't tell the whole story, I'm not sure you could ever sell Nvidia as the underdog here.
Sorry, but that is plain wrong: nVidia wasn't just bigger than ATI, they were bigger than AMD. Their market cap in Q2 2006 was $9.06 billion; on the purchase date AMD was worth $8.84 billion and ATI $4.2 billion. It took a massive cash/stock deal worth $5.6 billion to buy ATI, including over $2 billion in loans. AMD stretched itself to the limit to make this happen; three days later Intel introduced the Core 2 processor, and it all went downhill from there as AMD couldn't invest more and struggled to pay interest on falling sales. And AMD made an enemy of nVidia, which Intel could then use to boot nVidia out of the chipset/integrated graphics market by refusing to license QPI/DMI, leaving nVidia with nowhere to go. That cost Intel $1.5 billion in the settlement, but they have made it back several times over since.
@Wreckage Not quite. Cash reserves play a role in evaluating a company's net worth. When AMD acquired ATI, they spent considerable money to do so and plunged themselves into debt. The resulting valuation of AMD was not simply the combined valuations of AMD and ATI pre-acquisition. Far from it.
AMD is the undisputed underdog in 2015, and has been for many years before that. That is why Ryan gave so much praise to AMD in the article. For them to even be competitive at the high end, given their resources and competition, is nothing short of impressive.
If you cannot at least acknowledge that, then your view on this product and the GPU market is completely warped. As consumers we are all better off with a Fury X in the market.
Yes, NVIDIA was definitely the underdog at the time of the AMD purchase of ATI. Many people were leaving NVIDIA for dead. NVIDIA had recently lost its ability to make chipsets for Intel processors, and after AMD bought ATI it was presumed (rightly so) that NVIDIA would no longer be able to make chipsets for AMD processors. It was thought that the discrete GPU market might dry up with fusion CPU/GPU chips taking over the market.
Yep, I remember after the merger happened most AMD/ATI fans were rejoicing, as they felt it would spell the end of both Nvidia and Intel - "the Future is Fusion" and all that promise, lol. Many like myself were pointing out the fact that AMD overpaid for ATI and that they would collapse under the weight of all that debt, given ATI's revenue and profits didn't come close to justifying the purchase price.
My, how things have played out completely differently! It's like the incredible shrinking company. At this point it really is in AMD's and their fans' best interest if they are just bought out and broken up for scraps; at least someone with deep pockets might be able to revive some of their core products and turn things around.
Well done Mr Smith. I would go so far as to say THE best Fury X review on the internet bar none. The most important ingredient is BALANCE. Something that other reviews sorely lack.
In particular the PCPer and HardOCP articles read like they were written by the green goblin himself and consequently suffer a MASSIVE credibility failure.
Yes Nvidia has a better performing card in the 980TI but it was refreshing to see credit
given to AMD where it was due. Only dolts and fanatical AMD haters (I'm not quite sure what category chizow falls into, probably both and a third "Nvidia shill") would deny that we need AMD AND Nvidia for the consumer to win.
"What's Left of AMD" can keep making SoCs and console APUs or whatever other widgets under the umbrella of some monster conglomerate like Samsung, Qualcomm or Microsoft and I'm perfectly OK with that. Maybe I'll even buy an AMD product again.
Yet, still 3rd rate. The overwhelming majority of the market has gone on just fine without AMD being relevant in the CPU market, and recently, the same has happened in the GPU market. AMD going away won't matter to anyone but their few remaining devout fanboys like Ranger101.
@piiman - I guess we'll see soon enough, I'm confident it won't make any difference given GPU prices have gone up and up anyways. If anything we may see price stabilization as we've seen in the CPU industry.
@medi03 AMD was up to 30% a few times, and they certainly did have performance leadership at the time of K8, but of course they wanted to charge everyone for the privilege. Higher prices? No, of course not - just $450 for an entry-level Athlon 64, much more than what they charged in the past and certainly much more than Intel was charging at the time, going up to $1500 on the high end with their FX chips.
Best interest? Broken up for scraps? You do realize how important AMD is to people who are Intel/NVidia fans, right?
Without AMD, Intel and NVidia are unchallenged, and we'll be back to paying $250 for a low-end video card and $300 for a mid-range CPU. There would be no GTX 750's or Pentium G3258's in the <$100 tier.
@Samus, they're irrelevant in the CPU market and have been for years, and yet amazingly, prices are as low as ever since Intel began dominating AMD in performance when they launched Core 2. Since then I've upgraded 5x and have not paid more than $300 for a high-end Intel CPU. How does this happen without competition from AMD as you claim? Oh right, because Intel is still competing with itself and needs to provide enough improvement in order to entice me to buy another one of their products and "upgrade".
The exact same thing will happen in the GPU sector, with or without AMD. Not worried at all, in fact I'm looking forward to the day a company with deep pockets buys out AMD and reinvigorates their products, I may actually have a reason to buy AMD (or whatever it is called after being bought out) again!
You overestimate the human drive... if another company isn't pushing you, you get lazy. And that isn't much of an argument: what they'll do instead to make people upgrade is release products in steps planned out much further into the future, steps even smaller than what Intel is releasing now.
ATi were ahead for the 9xxx series, and that's it. Moreover, NVIDIA's chipset struggles with Intel only began in 2009 and were settled in early 2011, so that settlement would've benefited NVIDIA far more than Intel's settlement with AMD benefited AMD, since the dispute did far less damage to NVIDIA's financials over a much shorter period of time.
The lack of higher end APUs hasn't helped, nor has the issue with actually trying to get a GPU onto a CPU die in the first place. Remember that when Intel tried it with Clarkdale/Arrandale, the graphics and IMC were 45nm, sitting alongside everything else which was 32nm.
I think you have to look at a bigger sample than that, riding on the 9000 series momentum, AMD was competitive for years with a near 50/50 share through the X800/X1900 series. And then G80/R600 happened and they never really recovered. There was a minor blip with Cypress vs. Fermi where AMD got close again but Nvidia quickly righted things with GF106 and GF110 (GTX 570/580).
nVidia wasn't the underdog in terms of technology. nVidia was the choice of gamers. ATi was big because they had been around since the early days of CGA and Hercules, and had lots of OEM contracts. In terms of technology and performance, ATi was always struggling to keep up with nVidia, and they didn't reach parity until the Radeon 8500/9700-era, even though nVidia was the newcomer and ATi had been active in the PC market since the mid-80s.
Well-done analysis, though the kick in the head was Bulldozer and its utter failure. Core 2 wasn't really AMD's downfall so much as Core/Sandy Bridge, which arrived at exactly the wrong time for Bulldozer's utter failure. This, combined with AMD's dismal failure to market its graphics cards, has cost them billions. Even this article calls the 290X problematic, a card that offered the same performance as the original Titan at a fraction of the price. Based on the empirical data, the 290/290X should have sold almost continuously until the introduction of Nvidia's Maxwell architecture.
Instead, people continued to buy the much less performant-per-dollar Nvidia cards and/or waited for "the good GPU company" to put out their new architecture. AMD's marketing has been utterly appalling at the same time that Nvidia's has been extremely tight. Whether that will, or even can, change next year remains to be seen.
Except efficiency was not good enough across the generations of 28nm GCN, in an era where efficiency and thermal/power limits constrain performance; look at what Nvidia did over the same era, from Fermi (which was on the market when GCN 1.0 was released) to Kepler to Maxwell. Plus, efficiency is kind of the ultimate marketing buzzword in all areas of tech, and having no ability to mention it (plus having generally inferior products) hamstrung their marketing all along.
1. If your TDP is through the roof, you'll have issues with your cooling setup. Any time you introduce a bigger cooling setup because your cards run that hot, you're going to be mocked for it and people are going to be wary of it. With 22nm or 20nm nowhere in sight for GPUs, efficiency had to be a priority; otherwise you're going to ship cards that take up three slots or ship with water coolers.
2. You also can't just play to the desktop market. Laptops are still the preferred computing platform, and even when people do go for a desktop, AIOs are looking much more appealing than a monitor/tower combo. So if you want any shot in either market, you have to build an efficient chip. And you have to convince people they "need" this chip, because Intel's iGPUs do what most people want just fine anyway.
3. Businesses and others with "always on" computers would like it if their computers ate less power. Even if you only save a handful of watts per machine, multiply that by thousands of machines and it adds up to an appreciable amount of savings.
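Quick back-of-the-envelope on that last point (illustrative numbers of my own): 10W saved per box across 1,000 always-on machines is 10 kW continuous, which works out to roughly 87,600 kWh over a year - call it $8,700 at $0.10/kWh - and that's before counting the reduced air-conditioning load to pump all that heat back out of the building.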
ATI wasn't bigger; AMD just paid a preposterous and entirely unrealistic amount of money for it. Soon after the merger, AMD + ATI was worth less than what they paid for the latter, ultimately leading to the loss of its foundries and putting it in an even worse position. Let's face it, AMD was, and historically always has been, betrayed; its sole purpose is to create the illusion of competition so that the big boys don't look bad for running unopposed, even if that is what happens in practice.
Just when AMD got lucky with Athlon a mole was sent to make sure AMD stays down.
While Intel wasn't the underdog in terms of marketshare, they were in terms of technology and performance with the Athlon 64 vs. the Pentium 4. Intel had a dog on their hands, but they managed to weather the storm until AMD got Conroe'd in 2006. Now AMD is down and most likely out, as Zen, even if it delivers as promised (a 40% IPC increase just isn't enough), will take years to gain traction in the CPU market. Time AMD simply does not have, especially given how far behind they've fallen in the dGPU market.
That's 40% over Excavator, so maybe 60% over Vishera? If they manage to get good enough IPC on 8 cores, at a good price, they may really make a comeback.
Well during the P4 era Intel bribed OEMs to not use Athlon chips, which they later had to pay $1.25bn to AMD for. While one could argue the monetary losses may have been partially made up for, the settlement came at the end of 2009, so too little too late. Intel bought themselves time with their bribes, and that's what really enabled them to weather the storms.
No, if you read the actual results and AMD's own testimony, they couldn't produce enough chips and offer them at a low enough price to OEMs compared to what Intel was just giving away as subsidies.
"Which is not say I’m looking to paint a poor picture of the company – AMD Is nothing if not the perineal underdog who constantly manages to surprise us with what they can do with less"
I think the word you were looking for is perennial. Unless you truly meant to refer to AMD as the taint.
Here we go, Chizoo once again attempting to align himself with the reviewer, as he does at PCPER no doubt attempting to gain much needed credibility. I'm afraid it's too late bro as you are universally recognised as a psychotic AMD hater. But feigning concern for the reviewer's health is a laughable cheap shot, even by your bottom feeding standards.
Good to see you charging in to defend Chizoo there Michael, he does need help. The point is, dearest Chizoo, You can't on the one hand @ PCPER claim that Ryan Smith conveniently got sick to avoid raining on the fury like other websites and then pretend that you are concerned about his health @ Anandtech. It's too late for damage control bro, people are now well aware of your mentality or lack thereof.
They literally hit the size limit interposers can scale up to with this chip, so they can't make it any bigger to pack in more transistors for more ROPs until a die shrink. So they decided on a tradeoff, favouring other things over ROPs.
They had a monster shader count and likely would have been fine going down to 3840 max to make room for more ROPs. 96 or 128 ROPs would have been impressive and really made this chip push lots of pixels. With HBM and the new delta color compression algorithm, there should be enough bandwidth to support these additional ROPs without bottlenecking them.
AMD also scaled the number of TMUs with the shaders, but it likely wouldn't have hurt to have increased them by 50% too. Alternatively, AMD could have redesigned the TMUs for better 16-bit-per-channel texture support. Either of these changes would have put the texel throughput well beyond GM200's theoretical throughput. I have a feeling that this is one of the bottlenecks that helps GM200 pull ahead of Fiji.
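Rough ROP math to back that up (my own numbers, ignoring blending, Z traffic and cache effects): 64 ROPs x 1.05GHz is about 67 Gpixels/s, or roughly 270GB/s of raw write traffic at 4 bytes per pixel, against HBM's 512GB/s. Doubling to 128 ROPs lands around 540GB/s, so delta color compression would have to cover the shortfall, but on paper it's at least in the right ballpark.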
No, that implies the shaders are the bottleneck at higher resolutions while ROP/fillrate/geometry throughput remains constant. At lower resolutions Nvidia isn't shader-bound either, but their higher ROP/fillrate lets them turn that headroom into actual FPS, whereas AMD's ROPs are saturated and simply can't produce more frames.
Right now there's not a lot of evidence for R9 Fury X being ROP limited. The performance we're seeing does not have any tell-tale signs of being ROP-bound, only hints here and there that may be the ROPs, or could just as well be the front-end.
While Hawaii was due for the update, I'm not so sure we need to jump up in ROPs again so soon.
What about geometry, Ryan? ROPs are often lumped in with the geometry/setup engine, and there is definitely something going on with Fury X at lower resolutions: in instances where SP performance is no problem, it just can't draw/fill pixels fast enough and performs VERY similarly to previous-gen or weaker cards (290X/390X and 980). TechReport actually has quite a few theoreticals that show this, where its pixel fill is way behind GM200 and much closer to Hawaii/GM204.
Yeah, my bet is on geometry. Check out the synthetics page: it owns the pixel and texel fillrate tests, but loses on the tessellation test, which has a large dependency on geometry. nVidia has also been historically very strong with geometry.
Thanks for the review! While the conclusions aren't really any different than all the other reputable review sites on the Interwebs, you were very thorough and brought an interesting perspective to the table too. Better late than never!
You must use the latest nightly build of LAV Filters in order to use the 4K H.264 DXVA decoder on AMD cards. All previous builds fall back to SW mode.
Ryan, regarding Mantle performance: back in the R9 285 review (http://www.anandtech.com/show/8460/amd-radeon-r9-2... you wrote that AMD stated the issue with the performance regression was that developers had not yet optimized for Tonga's newer architecture, while here you state that the performance regression is due to AMD not having optimized on the driver side. What is the actual case? What is the actual weighting given these three categories? - hardware / driver / API / software (game)
What I'm wondering is: if we assume that upcoming low-level APIs will behave similarly to Mantle, what will happen going forward as more GPU architectures and newer games are introduced? If the onus shifts especially heavily towards the software side, it seems realistic that in practice developers will have a much narrower scope of hardware to optimize for.
I'm wondering if Anandtech could possibly look more in depth into this issue as it pertains to the move towards low-level APIs, as it could have large implications for the software/hardware support relationship going forward.
"What is the actual case? What is the actual weighting given these three categories? -"
Right now the ball appears to be solidly in AMD's court. They are taking responsibility for the poor performance of certain Mantle titles on R9 Fury X.
As it stands I hesitate to read into this too much for DX12/Vulkan. Those are going to be finalized, widely supported APIs, unlike Mantle which has gone from production to retirement in the span of just over a year.
Thanks for the response. I guess we will see more for certain as time moves on.
My concern is that if lower-level APIs require more architecture-specific optimizations and the burden is shifted to developers, in practice that will have some rather "interesting" implications.
Also of interest is how many reviewers' test suites will still look at DX11 performance as a possible fallback, should this become an issue.
You don't need architecture-specific optimizations to use DX12/Vulkan/etc.; the APIs merely allow you to implement them, where DX11 didn't, if you choose to. You can write a DX12 game without optimizing for any particular GPU (although not doing so for GCN, given the consoles are GCN, would be a tad silly).
If developers are aiming to put low-level stuff in whenever they can, then the issue becomes that, due to AMD's "GCN everywhere" approach, developers may just start coding for PS4, porting that code to Xbox DX12 and then porting that to PC with higher-res textures/better shadows/effects. In which case Nvidia could take massive performance deficits to AMD due to not getting the same amount of extra performance from DX12.
Don't see that happening in the next 5 years. At least, not with most games that are console+PC and need huge performance. You may see it in a lot of Indie/small studio cross platform games however.
AMD is getting there, but they still have a little way to go to bring us a new "9700 Pro". That card devastated all Nvidia cards back then. That's what I'm waiting for from AMD before I switch back.
Everyone who bought a Geforce FX card should feel bad, because the AMD offerings were massively better. But now AMD is close to NVIDIA, it's still time to rag on AMD, huh?
That said, of course if I had $650 to spend, you bet your ass I'd buy a 980 Ti.
C'mon, Fury isn't even close to the Geforce FX level of fail. It's really hard to overstate how bad the FX5800 was, compared to the Radeon 9700 and even the Geforce 4600Ti.
The Fury X wins some 4K benchmarks, the 980Ti wins some. The 980Ti uses a bit less power but the Fury X is cooler and quieter.
Geforce FX level of fail would be if the Fury X was released 3 months from now to go up against the 980Ti with 390X levels of performance and an air cooler.
Furmark power load means nothing, it is just a good way to stress test and see how much power the GPU is capable of pulling in a worst-case scenario and how it behaves in that scenario.
While gaming, the difference is miniscule and no one will care one bit.
Also, they didn't win 90% of the benchmarks at 4K, though they certainly did at 1440. However, the real world isn't that simple. A 10% performance difference in GPUs may as well be zero difference, there are pretty much no game features which only require a 10% higher performance GPU to use... or even 15%.
As for the value argument, I'd say they are about even. The Fury X will run cooler and quieter, take up less space, and will undoubtedly improve to parity or beyond the 980Ti in performance with driver updates. For a number of reasons, the Fury X should actually age better, as well. But that really only matters for people who keep their cards for three years or more (which most people usually do). The 980Ti has a RAM capacity advantage and an excellent - and known - overclocking capacity and currently performs unnoticeably better.
I'd also expect two Fury X cards to outperform two 980Ti cards with XFire currently having better scaling than SLI.
The differences in minimums aren't miniscule at all, and you also seem to be discounting the fact that the 980Ti overclocks much better than Fury X. Sure, XDMA CF scales better when it works, but AMD has shown time and again that they're completely unreliable at delivering timely CF fixes for popular games, to the point that CF is clearly a negative for them right now.
We don't yet know how the Fury X will overclock with unlocked voltages.
SLI is almost just as unreliable as CF, ever peruse the forums? That, and quite often you can get profiles from the wild wired web well before the companies release their support - especially on AMD's side.
We do know Fury X is an exceptionally poor overclocker at stock and already uses more power than the competition. Whose fault is it that we don't have proper overclocking capabilities when AMD was the one who publicly claimed this card was an "Overclocker's Dream"? Maybe they meant you could overclock it, in your dreams?
SLI is not as unreliable as CF, Nvidia actually offers timely updates on Day 1 and works with the developers to implement SLI support. In cases where there isn't a Day 1 profile, SLI has always provided more granular control over SLI profile bits vs. AMD's black box approach of a loadable binary, or wholesale game profile copies (which can break other things, like AA compatibility bits).
No, he did actually mention the 980Ti's excellent overclocking ability. Conversely, at no point did he mention Fury X's overclocking ability, presumably because there isn't any.
First off, it's 81W, not 120W (467-386). Second, unless you are running FurMark as your screensaver, it's pretty irrelevant. It merely serves to demonstrate the maximum amount of power the GPU is allowed to use (and given that the 980 Ti's is 1W less than in gaming, it indicates it is being artificially limited because it knows it's running FurMark).
The important power number is the in game power usage, where the gap is 20W.
There is no "artificial" limiting on the GTX 980 Ti in FurMark. The card has a 250W limit, and it tends to hit it in both games and FurMark. Unlike the R9 Fury X, NVIDIA did not build in a bunch of thermal/electrical headroom in to the reference design.
You do realize HBM was designed by AMD with Hynix, right? That is why AMD got first dibs.
Want to see that kind of innovation again in the future? You best hope AMD sticks around, because they're the only ones innovating at all.
nVidia is like Apple, they're good at making pretty looking products and throwing the best of what others created into making it work well, then they throw their software into the mix and call it a premium product.
Intel hasn't innovated on the CPU front since the advent of the Pentium 4. Core * CPUs are derived from the Pentium M, which was derived from the Pentium Pro.
Man you are pegging the hipster meter BIG TIME. Get serious. "Intel hasn't innovated on the CPU front since the advent of the Pentium 4..." That has to be THE dumbest shit i've read in a long time.
Say what you will about nvidia, but maxwell is a pristinely engineered chip.
While i agree with you that AMD sticking around is good, you can't be pissed at nvidia if they become a monopoly because AMD just can't resist buying tickets on the fail train...
Pretty much. AMD supporters/fans/apologists love to parrot the meme that Intel hasn't innovated since the original i7 or whatever, and while development there has certainly slowed, we have a number of 18-core E5-2699v3 servers in my data center at work, Broadwell Iris Pro iGPs that handily beat AMD APUs and approach low-end dGPU perf, and ultrabooks and tablets that run on fanless 5W Core M CPUs. Oh, and I've also managed to find meaningful desktop upgrades every few years for no more than $300 since Core 2 put me back in Intel's camp for the first time in nearly a decade.
None of what you stated is innovation, merely minor evolution. The core design is the same, gaining only ~5% or so IPC per generation, same basic layouts, same basic tech. Are you sure you know what "innovation" means?
Bulldozer modules were an innovative design. A failure, but still very innovative. Pentium Pro and Pentium 4 were both innovative designs, both seeking performance in very different ways.
Multi-core CPUs were innovative (AMD), HBM is innovative (AMD+Hynix), multi-GPU was innovative (3dfx), SMT was innovative (IBM, Alpha), CPU+GPU was innovative (Cyrix, IIRC)... you get the idea.
Doing the exact same thing, more or less the exact same way, but slightly better, is not innovation.
Huh? So putting Core level performance in a passive design that is as thin as a legal pad and has 10 hours of battery life isn't innovation?
Increasing iGPU performance to the point where the chip provides not only top-end CPU performance but close-to-dGPU graphics performance, convincingly beating AMD's entire reason for buying ATI, their Fusion APUs, isn't innovation?
And how about the data center where Intel's *18* core CPUs are using the same TDP and sockets, in the same U rack units as their 4 and 6 core equivalents of just a few years ago?
Intel is still innovating in different ways, that may not directly impact the desktop CPU market but it would be extremely ignorant to claim they aren't addressing their core growth and risk areas with new and innovative products.
I've bought more Intel products in recent years vs. prior strictly because of these new innovations that are allowing me to have high performance computing in different form factors and use cases, beyond being tethered to my desktop PC.
Show me Intel CPU innovations since the Pentium 4.
Mind you, innovations can be failures, they can be great successes, or they can be ho-hum.
P6->Core->Nehalem->Sandy Bridge->Haswell->Skylake
The only changes are evolutionary or as a result of process changes (which I don't consider CPU innovations).
This is not to say that they aren't fantastic products - I'm rocking an i7-2600K for a reason - they just aren't innovative products. Indeed, nVidia's Maxwell is a wonderfully designed and engineered GPU, and products based on it are of the highest quality and performance. That doesn't make them innovative in any way. Nothing technically wrong with that, but I wonder how long it would have been before someone else came up with a suitable RAM just for GPUs if AMD hadn't done it?
I've listed them above and despite slowing the pace of improvements on the desktop CPU side you are still looking at 30-45% improvement clock for clock between Nehalem and Haswell, along with pretty massive improvements in stock clock speed. Not bad given they've had literally zero pressure from AMD. If anything, Intel dominating in a virtual monopoly has afforded me much cheaper and consistent CPU upgrades, all of which provided significant improvements over the previous platform:
All cheaper than the $450 AMD wanted for their ENTRY level Athlon 64 when they finally got the lead over Intel, which made it an easy choice to go to Intel for the first time in nearly a decade after AMD got Conroe'd in 2006.
Of course I've posted it elsewhere because it bears repeating, the nonsensical meme AMD fanboys love to parrot about AMD being necessary for low prices and strong competition is a farce. I've enjoyed unparalleled stability at a similar or higher level of relative performance in the years that AMD has become UNCOMPETITIVE in the CPU market. There is no reason to expect otherwise in the dGPU market.
Let's also not discount the fact that these are just stock comparisons; once you overclock the cards, as many are interested in doing in this $650 bracket, especially given AMD's claims that Fury X is an "Overclocker's Dream", we quickly see the 980Ti cannot be touched by Fury X, water cooler or not.
Fury X wouldn't have been the failure it is today if not for AMD setting unrealistic and, ultimately, unattained expectations. As a 390X WCE at $550-$600 it would be a solid alternative. As a $650 new "Premium" brand that doesn't OC at all, has only 4GB, has pump whine issues and is slower than Nvidia's same-priced $650 980Ti that launched 3 weeks before it, it just doesn't get the job done after AMD hyped it from the top brass down.
75MHz on a GPU that's low-volted from the factory is actually to be expected. If the voltage scaled automatically, like nVidia's, there is no telling where it would go. Hopefully someone cracks the voltage lock and gets to cranking up the hertz.
North of 400W is probably where we'll go, but I look forward to AMD exposing these voltage controls, it makes you wonder why they didn't release them from the outset given they made the claims the card was an "Overclocker's Dream" despite the fact it is anything but this.
That wasn't so much due to ATI's excellence. It had a lot to do with NVIDIA dropping the ball horribly, off a cliff, into a black hole.
They learned their lessons and turned it around. I don't think either company "lost" necessarily, but I will say NVIDIA won. They do more with less: more performance with less power, fewer transistors, fewer SPs, and less bandwidth. Both cards perform admirably, but we all know the Fury X would've been more expensive had the 980 Ti not launched where it did. So, to perform arguably on par, AMD is living with smaller margins on probably smaller volume, while Nvidia has plenty of volume with the 980 Ti and their base cost is less as they're essentially using Titan X throwaway chips.
They still had to pay for those "Titan X throwaway chips", and they cost more per chip to produce than AMD's Fiji GPU. Also, nVidia apparently ended up cutting the GPU down less than they were planning in response to AMD's expected performance. Consumers win, of course, but it isn't like nVidia did something magical; they simply bit the bullet and undercut their own offerings by barely cutting down the Titan X to make the 980Ti.
That said, it is very telling that the AMD GCN architecture is less balanced in relation to modern games than the nVidia architecture, however the GCN architecture has far more features that are going unused. That is one long-standing habit ATi and, now, AMD engineers have had: plan for the future in their current chips. It's actually a bad habit as it uses silicon and transistors just sitting around sucking up power and wasting space for, usually, years before the features finally become useful... and then, by that time, the performance level delivered by those dormant bits is intentionally outdone by the competition to make AMD look inferior.
AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them.
AMD's only possible upcoming saving grace could be that they might be on Samsung's 14nm LPP FinFet tech at GloFo and nVidia will be on TSMC's 16nm FinFet tech. If AMD plays it right they can keep this advantage for a couple generations and maximize the benefits that could bring.
Why is it dubious? What's the biggest chip Samsung has fabbed? If they start producing chips bigger than the ~100mm^2 chips for Apple, then we can talk, but as much flak as TSMC gets over delays/problems, they still produce what are arguably the world's most advanced semiconductors, right up there with Intel's biggest chips in size and complexity.
"AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them."
AMD is often first at announcing features. Nvidia is often first at implementing them properly. It is clever marketing vs clever engineering. At the end of the day, one gets more customers than the other.
While you're right that Nvidia paid for the chips used in 980 Tis, they're still most likely not fit for Titan X use and are cut down to remove the underperforming sections. Without really knowing what their GM200 yields are like, I'd be willing to bet the $1000 price of the Titan X was already paying for the 980 Ti chips. So Nvidia gets to play with binned chips to sell at $650, while AMD has to rely on fully enabled chips added to an expensive interposer, with more expensive memory and a more expensive cooling solution, to meet the same price point for performance. Nvidia definitely forced AMD into a corner here, so as I said, I would say they won.
Though I don't necessarily say that AMD lost; they just had to work much harder to do what Nvidia was already doing while making beaucoup cash at it. This only makes AMD's problems worse, as they won't get the volume to gain marketshare and they're not hitting the margins needed to heavily reinvest in R&D for the next round.
"AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them."
AMD fanboys make it sound like AMD can actually walk on water. AMD did work with Hynix, but the magic of HBM comes from the density of die stacking, which AMD had nothing to do with (they are no longer the actual chipmaker, as you probably know). As for interposers, this is not new technology; interposers are a well-established technique for condensing an array of devices into one package.
AMD deserves credit for bringing the technology to market, no doubt, but their actual IP contribution is quite small.
Good that you are feeling better, Ryan, and thanks for the review :) That being said, Anandtech needs to keep us better informed when things come up... The way this site handled it, though, is gonna lose this site readers...
Ryan tweeted about the Fiji schedule several times and we were also open about it in the comments whenever someone asked, even though it wasn't relevant to the article in question. It's not like we were secretive about it, and I think a full article about an article delay would be a little overkill.
A pipeline story... dunno what the title would be, but in the text, explain it there. Have a link to THG (as it's owned by the same company now) if readers want to read a review immediately.
The problem isn't only the delays, it's that since Ryan took over as Editor-in-Chief I suspect his workload is too large, because this also happened with the Nvidia GTX 960 review. He told 5-6 people (including me) for 5 weeks that it would come, and then it didn't, and he stopped responding to inquiries about it. Now in what way is that a good way to build a good relationship and trust between you and your readers? I love Ryan's writing; this article was one of the best I've read in a long time. But not everyone is good at everything; maybe Ryan needs to focus only on GPU reviews and not on running the site or whatever his other responsibilities are as Editor-in-Chief.
Because the reviews are what most ppl. come here for and what built this site. You guys are amazing, but AT never used to miss releasing articles the same day the NDA was lifted, as far back as I can remember. And promising things and then not delivering, sticking your head in the sand and not even apologizing, isn't a way to build up trust and uphold and strengthen the large following this site has.
I love this site, been reading it since the 1st year it came out, and that's why I care and want you to continue and prosper. Since a lot of ppl. can't read the twitter feed, what you did here: http://www.anandtech.com/show/8923/nvidia-launches... is the way to go if something comes up, but then you have to deliver on your promises.
Just to add, if there are any ideas of keeping you guys better informed, please fire away. In the meantime, Twitter is probably the best way to stay updated on whatever each one of us is doing :)
I think a pipeline story would've been good there. I mean, using social media to convey the message to readership that may not even pay attention to it (I don't even think the twitter sidebar shows on my iPhone 6 Plus) is not a great way to do things.
A few words saying the review would be late for XYZ reasons, with a teaser to the effect of "but you can see the results here at 2015 AT Bench", would've sufficed for most and also given assurance that the bulk of the testing and work was done, rather than leaving people guessing why AT was late.
Where did you read this news though? Some forum thread? The Twitter sidebar? I mean, I guess when everyone is looking for a front-page story for the review, something in the actual front-page content might have been the best way to get the message across. Even something as simple as a link to the Bench results would've gone a long way to help educate/inform prospective buyers in a timely manner, don't you think? Because at the end of the day, that's what these reviews are meant for, right?
Yeah, all non-ideal for anyone actually looking for the review; those just make it seem more like AT was trying to hide the fact their Fury X review wasn't ready, when in fact there was nothing to hide and a legitimate reason for the delay.
Again, even a pipeline story with Ryan being sick and a link to the bench results would've been tons better in informing AT's readership, but I guess we'll be sure to comb through half conversations and completely unrelated comments in the future to stay informed.
Thank you for the good review. The only thing I'm missing is a synthetic benchmark of the polygon output rate, because this seems to be the bottleneck of Fury X.
We do have TessMark results in the synthetics section. You would be hard pressed to hit the polygon limit without using tessellation to generate those polygons.
I don't understand the difference between tessellation and polygon output. I thought there are two kinds of polygon output:
1. Tessellation, where the GPU subdivides a big triangle into smaller triangles. 2. Polygon output, which I thought is the rate of triangles the GPU can handle when it gets them from the CPU.
The CPU being a bottleneck with processing triangles is why nVidia introduced the first 'GPU' with the GeForce256: the geometry is uploaded only once to VRAM, and the GPU will perform T&L for each frame, to avoid the CPU bottleneck.
Tessellation is the next step here: Very detailed geometry takes a lot of VRAM and a lot of bandwidth, causing a bottleneck trying to feed the rasterizers. So instead of storing all geometry in VRAM, you store a low-res mesh together with extra information (eg displacement maps) so the GPU can generate the geometry on-the-fly, to feed the backend more efficiently.
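Some illustrative numbers (my own, just to show the scale): a fully detailed 2M-triangle mesh stored outright is on the order of 1M vertices x 32 bytes plus roughly 24MB of index data, so 50MB or more that has to be fetched from VRAM every frame. A 30K-triangle base mesh plus a 1024x1024 16-bit displacement map is around 3MB, and the tessellator expands it on-chip, so both the VRAM footprint and the geometry bandwidth drop by more than an order of magnitude while the rasterizer still sees full-detail triangles.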
Thanks for the review - that must have taken quite a bit to write.
I hope you are feeling better now and back to normal.
As far as the GPU, yeah, it's a disappointment that they were not able to beat the 980Ti. They needed 96 ROPs (I'd trade down to 3840 SPs for that, even given the die cost), 8GB of VRAM (you might run out of VRAM before you run out of core), and probably 12 ACEs as well. Maybe 1/32 FP64 like on GM200 would have helped too.
This thing needs to go down to $550 USD now. That and custom PCBs.
No, 96 ROPs would not have saved it. The card has PLENTY of pixel power; it's GEOMETRY that is the issue, which is something that AMD/ATI has always lagged nVidia on. Kind of unfortunate here... :(
As a long-time AMD fan I'm disappointed: project Quantum uses an Intel CPU? Instead of sad, it would have been funny if they had used a VIA CPU. And yes, there are those of us who use their PC with a TV from the couch and want HDMI 2.0 for 4K; still waiting. BTW, great review; it's just AMD that's still disappointing.
+1. AMD was quite clear: they listened to the community, who wanted flexibility; the cream of the crop for Z97 is without a doubt the 4790K; however, that won't be the only CPU choice.
Would you rather have a high end PC with an AMD CPU, with much lower per-core performance? I can see the irony, sure, but it's still the better choice right now.
Ryan, one other question: the VRMs - how hot do they get? Reportedly, some reviews are seeing pretty high VRM temperatures. The 6-phase DirectFET design does allow a lot of margin for error, but it's still best to have cool-running VRMs.
Some reviews are showing temps of 100C and are making a huge deal about it. But these same reviewers have shown a past nVidia card that had them up close to 120C and thought nothing of it. The fact is the VRMs are rated for 150C and do not lose any efficiency until they hit ~130C.
There is NOTHING wrong with the Fury having them at 100C.
The current tools do not report VRM temperatures for the card (AFAIK). I've taken an IR thermometer to the card as well, though there's nothing terribly interesting to report there.
Civilization: Beyond Earth - "The bigger advantage of Mantle is really the minimum framerates, and here the R9 Fury X soars. At 1440p the R9 Fury X delivers a minimum framerate of 50.5fps" - "1440p" should be changed to "4K".
Fair review, Ryan. Unfortunately for AMD, Fury X will go down as an underwhelming product that failed to meet the overhyped build-up from AMD and their fans. It's not a terrible product by itself, as it does perform quite well, but it simply didn't live up to its billing, much of which came directly from AMD themselves when they made very public claims like:
1) HBM enables them to make the world's fastest GPU. This didn't happen.
2) Easily beats the 980Ti, based on their internal benchmarks. This didn't happen either.
3) Fury X is an "Overclocker's Dream." We've seen anything but this being the case.
4) Water cooling allows this to be a cool and quiet part. Except for that pump whine, which AMD said was fixed in shipping samples, but wasn't.
5) 4GB is enough. Doesn't look like it, especially at the resolutions and settings a card like this is expected to run.
Add to that the very limited supply at launch, and the Fury X launch will ultimately be viewed as a flop. I just don't know where AMD is going to go from here. The R9 300 Rebrandeon happened (told you AMD fanboys this months ago) and those parts still aren't selling. R9 Fury X, while still AMD's best-performing part, is still 3rd fastest overall, at the same price point as the faster 980Ti, and in extremely limited supply. Will this be enough to sustain AMD into 2016, when the hopes of Zen and Arctic Islands turning around their fortunes loom on the horizon? We'll see, but until then it will be a bumpy road for AMD with some cloudy skies ahead!
The pump whine was fixed. Only very early cards have the old pump; later cards do not. And even with the louder pump, it's STILL quieter than a reference 980Ti.
Even if that is the case, that's not what AMD was telling the press when it was brought to their attention during the review phase. Obviously it would be difficult, if not impossible for AMD to correct the problem in shipping samples given how rushed they were just getting review samples out to the press.
AMD was dishonest about the pump issue, plain and simple, and is just hoping the pump whine falls below any individual's noise tolerance threshold.
As for comparisons to the 980Ti, the Fury X will certainly be quieter in terms of pure dB under load, but the noise profile of that pump whine is going to be far more bothersome the rest of the time.
Beats me why nobody makes more of the practicality issues of trying to fit such a card in a case which in all likelihood (for this class of GPU) _already has_ a water cooler for the CPU, and don't get me started on how one is supposed to cram in two of these things for CF (not that I'd bother given the state of the drivers; any DX9 fix yet? It's been over a year).
Without a clear performance advantage, which it doesn't have, it needed to be usefully cheaper, which it's not. Add in the lesser VRAM and no HDMI 2.0, and IMO AMD has blown this one. It wouldn't be so bad except it was AMD that chucked out much of the prelaunch hype. Other sites show differences to the 980 Ti of a lot more than 10% at 1440p (less so at 4K of course, though with only 4GB and CF build issues I don't see 4K as being a fit for this card anyway). Factory-OC'd 980 Tis are only a little more expensive, but significantly quicker even at 4K.
Yeah, Fury X is not really a smaller form factor, it's just different. Fitting that double-thick rad is going to pose a much bigger problem for most cases vs. a full-sized 9.5" blower, given Nvidia's NVTTM reference fits almost any mini-ITX case that can take 2 slots.
As for Fury X price and perf, I think the 980Ti preemptively cut into AMD's plans, and they just didn't want to take another cut in price when they had their sights set on that $800+ range. But yeah, Fury X and, by proxy, Fury Air and Fury Nano will be extremely vulnerable at 1080p and 1440p, given the latter two will be slower than Fury X, which already has slower and last-gen cards like the 290X/390X/780Ti and GTX 980 on its heels.
I don't think AMD could've afforded more price compression, or there would simply be no spots that make any sense for Fury Air and Fury Nano, which again goes to my point that they should've just launched these parts as the top end of their new R9 300 series stack instead of the Rebrandeon + Fury strategy.
Between now and 2016 (preferably before the holiday season) I see AMD dropping the Fury X price and churning out better drivers, so it's not all too bleak. But it's still annoying that both of these could have been fixed before launch.
Yep, the Fury X is essentially vaporware at this point. It basically doesn't exist. Some tech journalists with inside information have estimated that fewer than 1000 were available for NA at launch. Definitely some supply issues to say the least, which I suspect is mostly due to the HBM.
I have no idea why AMD hyped up Fiji so much prior to launch. In a sense they just made it that much more difficult for themselves. What kind of reaction were they expecting with rhetoric like "HBM has allowed us to create the fastest GPU in the world", along with some of the most cherry picked pre-launch internal benchmarks ever conceived? It just seems like they've given up and are only trying to engage their most zealous fanboys at this point.
All that being said, I don't think Fury X is a terrible card. In fact I think it's the only card in AMDs current lineup even worth considering. But unfortunately for AMD, the 980Ti is the superior card right now in practically every way.
Yep, it is almost as if they set themselves up to fail, but now it makes more sense in terms of their timing and delivery. They basically used Fury X to prop up their Rebrandeon stack of 300 series cards, as they needed a flagship launch with Fury X in the hopes it would lift all sails in the eyes of the public. We know the Rebrandeon 300 series was set in stone and ready to go as far back as Financial Analysts Day (hi again, all you AMD fanboys who said I was wrong), with early image leaks and drivers confirming this as well.
But Fury X wasn't ready. Not enough chips and cards ready, cooler still showing problems, limited worldwide launch (I think 10K max globally). I think AMD wanted to say and show something at Computex but quickly changed course once it was known Nvidia would be HARD launching the 980Ti at Computex.
980Ti launch changed the narrative completely, and while AMD couldn't change course on what they planned to do with the R9 Rebrandeon 300 series and a new "Ultra premium" label Fury X using Fiji, they were forced to cut prices significantly.
In reality, at these price points and with Fury X's relative performance, they really should've just named it the R9 390X WCE and called it a day, but I think they were just blindsided by the 980Ti, not only in its performance being so close to Titan X, but also in its price. No way they would've thought Nvidia would ask just $650 for 97% of Titan X's performance.
So yeah, brilliant moves by Nvidia, they've done just about everything right and executed flawlessly with Maxwell Mk2 since they surprised everyone with the 970/980 launch last year. All the song and dance by AMD leading up to Fury X was just that, an attempt to impress investors, tech press, loyal fans, but wow that must have been hard for them to get up on stage and say and do the things they did knowing they didn't have the card in hand to back up those claims.
Do you want a Nobel Prize after all that multi-post gloating? You're not the one leaking; we already knew Fiji was the only new GPU. I never saw any 'fanboys', as you call them, saying the 300 series would be new & awesome... it's like you're talking to an empty room & patting yourself on the back.
Guess who is employed at AMD? The guy that did marketing at Nvidia for a few years. Why do you think Fury X is called Fury X?
FLAWLESS Maxwell, hahahaha... 970 memory aside, how about all the TDR crashes in recent drivers? They even had to put out a hotfix after the WHQL release (are we also going to ignore the Kepler driver regression?).
Yes, AMD has to impress everyone; that is the job of marketing, and the reality of depending on TSMC with its cancelled 32nm and delayed/unusable 20nm. Every company needs to hype so they don't implode. All these employees have families, but you're probably not thinking of them.
How the heck is near-equal performance with cool & quiet operation a flop!? There are still 2 more air-cooled Fiji releases, including a 175-watt one.
'4GB isn't enough'? Did you even look at the review? This isn't GeForce FX or 2900 XT. Talk about a reverse fanboy...
Wow, awesome. Where were all these nuggets of wisdom and warnings of caution tempering the expectations of AMD fans in the last few months? Oh right, nowhere to be found! Yep, plenty were insisting with high conviction and authority that R9 300 wouldn't be a rebrand, and that Fiji and HBM would lead AMD to the promised land and be faster than the overpriced Nvidia offerings of 980, Titan X, etc.
No Nobel Prize needed; the ability to gloat and say I told you so to all the AMD fanboys/apologists/supporters is plenty! Funny how none of them bothered to show up and say they were wrong, today!
And yes, the 970: they stumbled with the memory configuration mistake, but did it matter? No, not at all. Why? Because the error was insignificant and did not diminish its value, price or performance AT ALL. No one cared about the 3.5GB snafu outside of AMD fanboys, because the 970 delivered where it mattered: in games!
Let's completely ignore the fact 970/980 have led Nvidia to 10 months of dominance at 77.5% market share, or the fact the 970 by itself has TRIPLED the sales of AMD's entire R9 200 series on Steam! So yes, Nvidia has executed flawlessly and as a result, they have pushed AMD to the brink in the dGPU market.
And no, 4GB isn't enough. Did YOU read the review? Ryan voiced concern throughout the entire 4GB discussion, saying that while it took some effort, he was able to "break" the Fury X and force a 4GB limit. That's only going to become a BIGGER problem once you CF these cards and start cranking up settings. So yeah, if you are plunking down $650 on a flagship card today, why would you bother with that concern hanging over your head when for the same price you can buy yourself 50% more headroom? Talk about reverse fanboyism: 3.5GB isn't enough on a perf-midrange card, but it's jolly good A-OK for a flagship card "optimized for 4K", huh?
And speaking of those employees and families: you don't think it's in their best interest, and that they aren't secretly hoping, that AMD folds and gets bought out, or that they get severance packages to find another job? LOL. It's a sinking ship; if they aren't getting laid off, they're leaving for greener pastures. Everyone there is just trying to stay afloat, hoping the rumors are true that a company with deep pockets will come and save them from the sinking dead weight that ATI/AMD has become.
My concern is, the longer AMD's current situation lingers, the higher the chance that the new buyers would simply cannibalize AMD's tech and IPs and permanently put down the brand "AMD", due to the amount of negative public opinion attached to it.
@D. Lister sorry, missed this. I think AMD as a brand/trademark will be dead regardless. It has carried a value-brand connotation for some time, and there was even some concern about it when AMD chose to drop the ATI name from their graphics cards a few years back. Radeon, however, I think will live on with whoever buys them up, as it still carries good marketplace brand recognition.
Dude, what's the deal? Did an AMD-logoed truck run over your dog or something?
Seems like every article regarding AMD has you spewing out hate against them. I think we all realize Nvidia is in the lead. Why exert so much energy to put down a company that you have no intention of ever buying from?
AMD wasn't even competing in the high end prior to the Fury X release. So any sales they get are sales that would have gone to the 980 by default. So they have improved their position. A home run? No.
Take pleasure in knowing you are a member of the winning team. Take a chill pill and maybe the comments sections can be more informative for the rest of us.
I, for one, would prefer not to have to skip over three long-winded tirades on each page that start with Chizow.
@Intel999, if you want to digest your news in a vacuum, stick your head in the sand and ignore the comments section as you've already self-prescribed!
For others, a FORUM is a place to discuss ideas, exchange points of view, provide perspective and to keep both companies and fans/proponents ACCOUNTABLE and honest. If you have a problem, maybe the internet isn't a place for you!
Do you go around in every Nvidia or Intel thread or news article and ask yourself the same anytime AMD is mentioned or brought up? What does this tell you about your own posting tendencies???
Again, if you, for one, would prefer to skip over my posts, feel free to do so! lol.
I think you need to blame sites such as WCCFTech rather than fanboys/enthusiasts in general for the "Fury X will trounce 980 Ti/Titan X" rumours.
Also, if the 970 memory fiasco didn't matter, why was there a spate of returns? It's obvious that the users weren't big enough NVIDIA fanboys to work around the issue... going by your logic, that is.
The 970 isn't a mid-range card to anybody who isn't already rocking a 980 or above. 960, sure.
Fury X is an experiment, one that could've done with more memory of course, and I usually don't buy into the idea of experiments, but at least it wasn't a 5800/Parhelia/2900 - it's still the third best card out there with little breathing space between all three (depending on game, of course), not quite what AMD promised unless they plan to fix everything with a killer driver set (unlikely). The vanilla Fury may yet stand to outperform it, albeit at a slightly higher power level.
No silverblue, you contributed just as much to the unrealistic expectations during the Rebrandeon run-up, and to the ones for HBM and Fury X. But in the end it doesn't really matter; AMD failed to meet their goal even though Nvidia handed it to them on a silver platter by launching the 980 Ti 3 weeks ahead of AMD.
And spate of returns for the 970 memory fiasco? Have any proof of that? Because I have plenty of proof that shows Nvidia rode the strength of the 970 to record revenues, near-record market share, and a 3:1 ownership ratio on Steam compared to the entire R9 200 series.
If Fury X is an experiment as you claim, it was certainly a bigger failure than what was documented here at a time AMD could least afford it, being the only new GPU they will be launching in 2015 to combat Nvidia's onslaught of Maxwell chips.
A lot of the 970 hate reminded me of the way some people carried on dumping on OCZ long after any trace of their old issues were remotely relevant. Sites did say that the 970 RAM issue made no difference to how it behaved in games, but of course people choose to believe what suits them; I even read comments from some saying they wanted it to be all deliberate as that would more closely match their existing biased opinions of NVIDIA.
I would have loved to have seen the Fury X be a proper rival to the 980 Ti, the market needs the competition, but AMD has goofed on this one. It's not as big a fiasco as BD, but it's bad enough given the end goal is to make money and further the tech.
Fan boys will buy the card of course, but they'll never post honestly about CF issues, build issues, VRAM limits, etc.
It's not as if AMD didn't know NV could chuck out a 6GB card; remember NV was originally going to do that with the 780 Ti but didn't bother in the end because they didn't have to. Releasing the 980 Ti before the Fury X was very clever, it completely took the wind out of AMD's sails. I was expecting it to be at least level with a 980 Ti if it didn't have a price advantage, but it loses on all counts (for all the 4K hype, 1440 is far more relevant atm).
How about you present proof of such indiscretions? I believe my words contained a heavy dose of IF and WAIT AND SEE - speculation rather than statements of fact, since no facts existed at the time. Didn't you say Tahiti was going to be a part of the 300 series when in fact it never was? I also don't recall saying Fury X would do this or do that, so the burden of proof is indeed upon you.
I can provide more if you like. The number of returns wasn't exactly a big issue for NVIDIA, but it still happened. A minor factor which may have resulted in a low number of returns was the readiness of firms such as Amazon and NewEgg to offer 20-30% rebates, though I imagine that wasn't a common occurrence.
Fury X isn't a failure as an experiment, the idea was to integrate a brand new memory architecture into a GPU and that worked, thus paving the way for more cards to incorporate it or something similar in the near future (and showing NVIDIA that they can go ahead with their plans to do the exact same thing). The only failure is marketing it as a 4K card when it clearly isn't. An 8GB card would've been ideal and I'd imagine that the next flagship will correct that, but once the cost drops, throwing 2GB HBM at a mid-range card or an APU could be feasible.
I've already posted the links, and you clearly state you don't think AMD would Rebrandeon their entire 300 desktop retail series when that is exactly what they did. I'm sure I didn't say anything about Tahiti being rebranded either, since it was obvious Tonga was being rebranded and is basically the same thing as Tahiti, but you were clearly skeptical the x90 part would just be a Hawaii rebrand when indeed that became the case.
And lmao at your links, you do realize they just corroborate my point that the "spate of 970 returns" you claimed was actually a non-issue, right? 5% is within the range of typical RMA rates, so to claim Nvidia experienced higher than normal return rates due to the 3.5GB memory fiasco is nonsense plain and simple.
And how isn't Fury X a failed experiment when AMD clearly had to make a number of concessions to accommodate HBM, which ultimately led to a 4GB limitation on their flagship part that is meant to go up against 6GB and 12GB competitors and even falls short of its own 8GB rebranded siblings?
You: "And what if the desktop line-up follows suit? We can ignore all of them too? No, not a fanboy at all, defend/deflect at all costs!" Myself: "What if?
Nobody knows yet. Patience, grasshopper."
Dated 15th May. You'll note that this was a month prior to the launch date of the 300 series. Now, unless you had insider information, there wasn't actually any proof of what the 300 series was at that time. You'll also note the "Nobody knows yet." in my post in response to yours. That is an accurate reflection of the situation at that time. I think you're going to need to point out the exact statement that I made. I did say that I expected the 380 to be the 290, which was indeed incorrect, but again without inside information, and without me stating that these would indeed be the retail products, there was no instance of me stating my opinions as fact. I think that should be clear.
Fury X may or may not seem like a failed experiment to you - I'm unsure as to what classifies as such in your eyes - but even with the extra RAM on its competitors, the gap between them and Fury X at 4K isn't exactly large, so... does Titan X need 12GB? I doubt it very much, and in my opinion it wouldn't have the horsepower to drive playable performance at that level.
There are plenty of other posts from you stating similar things, Silverblue, hinting at tweaks at the silicon and GCN level when none of that actually happened. And there was actually plenty of additional proof besides what AMD already provided with their OEM and mobile rebrand stacks. The driver INFs I mentioned have always been a solid indicator of upcoming GPUs and they clearly pointed to a full stack of R300 Rebrandeons.
As for RMA rates, lol, yep, 5% is well within expected RMA return rates, so "spate" is not only overstated, it's an inaccurate characterization when most 970 users would not notice, or would not care to return, a card that still functions without issue to this day.
And how do you know the gap between them isn't large? We've already seen numerous reports of lower min FPS, massive frame drops/stutters, and hitching on Fury X as it hits its VRAM limit. It's a gap that will only grow in newer games that use more VRAM, or in multi-GPU solutions that haven't been tested yet that allow the end-user to crank up settings even higher. How do you know 12GB is or isn't needed if you haven't tested the hardware yourself? While 1xTitan X isn't enough to drive the settings that will exceed 6GB, 2x in SLI certainly is, and I've already seen a number of games such as AC: Unity, GTA5, and SoM use more than 6GB at just 1440p. I fully expect "next-gen" games to pressure VRAM even further.
If you visit the Anandtech forums, there's still a few AMD hardcore fanboys like Silverforce and RussianSensation making up excuses for Fury X and AMD. Those guys live in a fantasy land and honestly, the impact of Fury X's failure wouldn't have been as significant if stupid fanboys like the two I mentioned hadn't hyped Fury X so much.
To AMD's credit, they did force NVIDIA to price 980 Ti at $650 and release it earlier, I guess that means something to those that wanted Titan X performance for $350 less. Unfortunately for them, their fanboys are more of a cancer than help.
Hahah yeah I don't visit the forums much anymore, mods tried getting all heavy-handed in moderation a few years back with some of the mods being the biggest AMD fanboys/trolls around. They also allowed daily random new accounts to accuse people like myself of shilling and when I retaliated, they again threatened action so yeah, simpler this way. :)
I've seen some of RS's postings in the article comments sections though; he used to be a lot more even-keeled back then, but at some point his mindset turned into best bang for the buck (basically devolving into 2-gen-old FS/FT prices) trumping anything new, without considering the reality that what he advocates just isn't fast enough for those looking for an UPGRADE. I also got a big chuckle out of his claims that the 7970 is some kind of god card when it was literally the worst price:performance increase in the history of GPUs, kicking off this entire 28nm price escalation to begin with.
But yeah, can't say I remember Silverforce, not surprising though they overhyped Fury X and the benefits of HBM to the moon, there's a handful of those out there and then they wonder why everyone is down on AMD after none of what they hoped/hyped for actually happens.
I eventually obtained a couple of 7970s to bench; sure they were quick, but I was shocked at how loud the cards were (despite having big aftermarket coolers, really no better than the equivalent 3GB 580s), and the CF issues were a nightmare.
Personally I think the reason behind the current FX shortages is that Fury X was originally meant to be air-cooled, trouncing the 980 by 5-10% and priced at $650 - but then NV rather sneakily launched the Ti, a much more potent GPU than an air-cooled FX, at the same price, screwing up AMD's plan to launch at Computex. So to reach some performance parity at the given price point, AMD had to hurriedly put CLCs on some of the FXs and then OC the heck out of them (that's why the original "overclockers' dream" is now an OC nightmare - no more headroom left) and push their launch to E3.
So I guess once AMD finishes respeccing their original air-cooled stock, supplies should gradually improve.
It's possible; a WCE Fury was rumored even last year I believe, so I don't think it was a last-ditch effort by AMD. I do think, however, it was meant to challenge Titan, especially with the new premium branding, and it just fell way short of not only Titan X but also the 980 Ti.
I do fully agree though that they've basically eaten up their entire OC headroom in a full-on attempt to beat the 980 Ti, and they still didn't make it. Keep in mind we're dealing with a 275W rated part that has the benefit of no additional leakage from temperature. The air-cooled version, rated at the same 275W, is no doubt going to be clocked slower and/or have functional units disabled, as it won't benefit from the lower leakage that comes with lower operating temps.
I think the delayed/staggered approach to launch is because they are still binning and finalizing the specs of every chip, Fury X is easy, full ASIC, but after that they're having a harder time filling or determining the specs on the Nano and Air Fury. Meanwhile, Nvidia has had months to stockpile not only Titan X chips, but anything that was cut down for the 980Ti and we've seen much better stock levels despite strong sell outs on the marketplace.
It's very difficult to find, but I've managed to locate one here in the UK from CCL Online... a PowerColor model (AX R9 FURY X 4GBHBM-DH) for £550. Shame about the price, though it isn't any more expensive than the 980 Ti.
In which case I'd get the 980 Ti every time. Less hassle and fewer issues all round. Think ahead to adding a 2nd card - the build issues with a 980 Ti are nil.
Despite losing to the 980ti, I still think this card offers a good value proposition. I would gladly give up a small amount of performance for better acoustics and more flexible packaging with the CLC (assuming AMD fixes the noise issue other sites are reporting).
I never thought I'd say this, but drivers have to be causing some of the issues.
Fury X isn't such a bad deal; the quality of the product along with noise and temperatures (or the lack of them) can land it in smaller cases than the 980Ti. I do have to wonder why it performs comparatively better at 4K than at 1440p (I've seen weird regressions at 720p in the past), and why delta compression isn't helping as much as I expected. Overclockers may even find Fury (air-cooled) to be faster.
If you overclock at all, it's not a small amount of performance you'd be giving up. The 980 Ti regularly gets a 20% performance gain from OCing, while Fury X is currently only getting 5-7%. At 1440p, you're looking at roughly a 25% performance difference.
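A rough back-of-the-envelope check on that figure, in Python (a sketch with assumed numbers - the ~8% stock lead at 1440p and the 1:1 clock-to-FPS scaling are assumptions, not benchmark data):

# Back-of-the-envelope 1440p comparison (illustrative numbers only).
# Assumptions: the 980 Ti leads by ~8% at stock 1440p, and a ~20% overclock
# on the 980 Ti and ~6% on the Fury X both scale 1:1 into frame rate.
fury_stock_fps = 60.0                 # hypothetical Fury X stock frame rate
ti_stock_fps = fury_stock_fps * 1.08  # assumed stock lead for the 980 Ti

ti_oc_fps = ti_stock_fps * 1.20       # ~20% typical 980 Ti overclock
fury_oc_fps = fury_stock_fps * 1.06   # ~5-7% observed Fury X overclock

gap = (ti_oc_fps / fury_oc_fps - 1) * 100
print(f"OC vs OC gap at 1440p: ~{gap:.0f}%")   # works out to roughly 22%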
We don't know Fury X's full overclocking abilities yet, no one has unlocked the voltage. It might take a bios mod (shame on you AMD for locking the voltage at all, though!), but it will happen, and then we'll get an idea of what can be done.
Or more likely, AMD already "overclocked" Fury X at the factory and squeezed out all but the last 5-10% in a full-on attempt to match the 980 Ti, and they still didn't make it.
This makes no sense at all. AMD clearly claimed you could overclock this, but they lock the voltage? The only explanation is that the chip is already clocked at the highest level. I never believed I would agree with that troll chizow, but here we are.
They also claimed in their own pre-release benchmarks that Fury X beat the 980 Ti and Titan X in every single benchmark. Now that independent testers have them, it turns out to be 180° different: the Fury X gets beaten in every single benchmark. So they are probably lying about the overclocking also...
Great review Ryan. It was worth the wait and full of the deep analysis that we have come to expect and which make you the best in the business. Glad you are feeling better, but do try to keep us better updated on the status should something like this happen again.
Why does Frame Rate Target Control have a max cap of 90FPS when 120Hz and 144Hz displays are the new big thing? I would think that a 144FPS cap would be great for running older titles on a new 144Hz screen.
FRTC is probably still young and needs to be vetted. Aside from the eSports extreme-frame-rate case, AMD is touting it more as an energy-saving technology for mobile devices. With any luck, the range will increase over time, but don't forget that AMD is putting investment into FreeSync, and FreeSync-over-HDMI (we reported on it a while back), hoping that it becomes the norm in the future.
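For anyone wondering what a frame rate target actually does under the hood, here's a minimal generic sketch of the idea (not AMD's FRTC implementation, just a plain render-loop limiter): the loop refuses to start the next frame until the frame budget has elapsed, and that idle time is where the power saving comes from.

import time

# Generic frame-rate-target sketch; not AMD's FRTC code.
TARGET_FPS = 90
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_frame():
    pass  # stand-in for the actual draw calls

for _ in range(1000):
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        # Sleep off the remainder instead of rendering ahead; the GPU can
        # clock down during this idle time, which is the power saving.
        time.sleep(FRAME_BUDGET - elapsed)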
We already know the outcome but yes it would be nice to see nonetheless. Fury "OC'd" can't even beat non-OC'd 980Ti in most of those results, so add another 15-20% lead to 980 Ti and call it a day.
"Curious as to why you would not test Fury OC's against the 980TI's OC?"
As a matter of policy we never do that. While it's one thing to draw conclusions about reference performance with a single card, drawing conclusions about overclocking performance with a single card is a far trickier proposition. Depending on how good/bad each card is, one could get wildly different outcomes.
If we had a few cards of each, it would be a start for getting enough data points to cancel out the variance. But 1 card isn't enough.
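A toy illustration of that sample-variance point (the 15% mean and 5% spread of maximum stable overclocks are made-up numbers, purely to show how far a single card can sit from the average):

import random

# Toy model: pretend max stable overclocks are normally distributed
# with a hypothetical mean of 15% and a 5% standard deviation.
random.seed(42)
population = [random.gauss(0.15, 0.05) for _ in range(10_000)]

one_card = population[0]
five_cards = population[:5]

print(f"Single review sample: {one_card:.1%}")
print(f"Mean of five cards:   {sum(five_cards) / 5:.1%}")
print(f"Population mean:      {sum(population) / len(population):.1%}")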
You could test the same cards at multiple frequencies, Ryan; that way you're not trying to give an impression of "max OC" performance, but rather a range of performance you might expect IF you were able to OC that much on either card.
Ryan, to us, the readers, AT is just one of several sources of information, and to us, the result of your review sample is just one of the results of many other review samples. As a journalist, one would expect you to have done at least some investigation regarding the "overclockers' dream" claim, posted your numbers and left the conclusion making to those whose own money is actually going to be spent on this product - us, the customers.
I totally understand if you couldn't because of ill health, but, with all due respect, saying that you couldn't review a review sample because there weren't enough review samples to find some scientifically accurate mean performance number, at least to me appears as a reason with less than stellar validity.
I can understand some of the criticisms posted here, but let's remember that this is a free site. Additionally, I doubt there were many Fury X samples sent out. KitGuru certainly didn't get one (*titter*). Finally, we've already established that Fury X has practically sold out everywhere, so AT would have needed to purchase a Fury X AFTER release and BEFORE they went out of stock in order to satisfy the questions about sample quality and pump whine.
"if you absolutely must have the lowest load noise possible from a reference card, the R9 Fury X should easily impress you." Or, you know, mod the hell out of your card. I have a 290X in a very quiet room, and can't hear it, thanks to the Accelero Xtreme IV I bolted onto it. It does look monstrously big, but still, not even the Fury X can touch that lack of system noise.
The 5870 was the fastest GPU when it was released, and the 290X was the fastest GPU when it was released. This article makes it sound like AMD has been unable to keep up at all, but they've been trading blows. nVidia simply has had the means to counter effectively.
The 290X beat nVidia's $1,000 Titan. nVidia had to quickly respond with a 780Ti which undercut their top dog. nVidia then had to release the 980Ti at a seriously low price in order to compete with the, then unreleased, Fury X and had to give the GPU 95% of the performance of their $1,000 Titan X.
nVidia is barely keeping ahead of AMD in performance, but was well ahead in efficiency. AMD just about brought that to parity with THEIR HBM tech, which nVidia will also be using.
Oh, anyone know the last time nVidia actually innovated with their GPUs? GSync doesn't count, that is an ages-old idea they simply had enough clout to see implemented, and PhysX doesn't count, since they simply purchased the company who created it.
The 5870 was the fastest for 7 months, but it wasn't because it beat Nvidia's competition against it. Nvidia's competition against it was many months late, and when it finally came out was clearly faster. The 7970 was the fastest for 10 weeks, then was either slower or traded blows with the GTX 680. The 290x traded blows with Titan but was not clearly faster and was then eclipsed by the 780 TI 5 days later.
All in all, since GTX 480 came out in March of 2010, Nvidia has solidly held the single GPU performance crown. Sometimes by a small margin (GTX 680 launch vs. HD 7970), sometimes by a massive margin (GTX Titan vs. 7970Ghz), but besides a 10 week stint, Nvidia has been in the lead for over the past 5 years.
Check reviews with newer drivers: the 7970 has gained more than the 680, and it's sometimes a similar story with the 290X vs the 780/780 Ti depending on the game (it's a mess to dig up info, some of it coming from Kepler complaints).
Speaking of drivers, the 390X is using a different driver set than the 290X in reviews, which sure makes launch reviews pointless...
I see AMD fanboys/proponents say this often, so I'll ask you.
Is performance at the time you purchase and in the near future more important to you? Or are you buying for unrealized potential that may only be unlocked when you are ready to upgrade those cards again?
But I guess that is a fundamental difference and one of the main reasons I prefer Nvidia. I'd much rather buy something knowing I'm going to get Day 1 drivers, timely updates, feature support as advertised when I buy, over the constant promise and long delays between significant updates and feature gaps.
Good point, however NVIDIA has made large gains in drivers in the past, so there is definitely performance left on the table for them as well. I think the issue here is that NVIDIA has seemed - to the casual observer - to be less interested in delivering performance improvements for anything prior to Maxwell, perhaps as a method of pushing people to buy their new products. Of course, this wouldn't cause you any issues considering you're already on Maxwell 2.0, but what about the guy who bought a 680 which hasn't aged so well? Not everybody can afford a new card every generation, let alone two top end cards.
Again, it fundamentally speaks to Nvidia designing hardware and using their transistor budget to meet the demands of games that will be relevant during the course of that card's useful life.
Meanwhile, AMD may focus on archs that provide greater longevity, but really, who cares if it was always running a deficit for most of its useful life just to catch up and take the lead when you're running settings in new games that are borderline unplayable to begin with?
Some examples for GCN vs. Kepler would be AMD's focus on compute, where they always had a lead over Nvidia in games like Dirt that started using Global Illumination, while Kepler focused on geometry and tessellation, which allowed it to beat AMD in most relevant games of the DX9 to DX11 transition era.
Now, Nvidia presses its advantage as its compute has caught up with and exceeded GCN in Maxwell, while maintaining their advantage in geometry and tessellation, so we see in these games that GCN and Kepler both fall behind. That's just called progress. The guy who thinks his 680 should still keep pace with a new-gen architecture meant to take advantage of features in new-gen games probably just needs to look back at history to understand: new-gen archs are always going to run new-gen games better than older archs.
+1, exactly. Except for a few momentary anomalies, Nvidia has held the single GPU performance crown and won every generation since G80. AMD did their best with the small-die strategy for as long as they could, but they quickly learned they'd never get there against Nvidia's monster 500+mm^2 chips, so they went big die as well. Fiji was a good effort, but as we can see, it fell short and may be the last grand effort we see from AMD.
I think it is also very important to note that the 980TI is an excellent overclocker. That was the main reason why I chose it over the Fury X. A 980TI is practically guaranteed to get a 20%+ overclock, while the Fury X barely puts out a 7% increase. That sealed the deal for me.
I've said before and I'll say it again, cheaper, more efficient cards are the way forwards. The GTX 750 and 750Ti were important in hammering this point home.
I just wish developers would try to get titles working at 60fps at high details on these sorts of cards instead of expecting us to pay for their poor coding (I'm looking at you, Ubisoft Montréal/Rocksteady).
I really doubt the overall gaming experience of 4K 30-50 FPS is better than 1440p 50-100 FPS. Targeting 4K is silly. 4K is still SLI territory even for 980 Ti.
It's still playable, it beats consoles, and it beats past generations of cards; it would be silly if a 7970 were doing it (actually I found it silly back in the 5870 days when Eyefinity was pushed - most everything had to have some reduced settings).
But consider this... push 4K, get people on 4K, get game devs to think about image/texture quality, get monitor prices to fall, get DisplayPort everywhere, and be ready early so that it becomes the standard.
Plus Fury gets relatively worse at lower resolutions, so what can they do other than optimize the driver's CPU load?
Nano is the card to wait for. It will sell millions and millions and millions. And AMD is a fool to offer it for anything over $300. Despite the 4 GB RAM limitation, it will run every game currently on the market and in the next 4 years fine on average-to-high systems, which are the absolute, dictatorial majority of systems kids all over the world play on. The Enthuuuuusiasts can philosophize all they want, but it does not change anything. The size and power requirements of the Nano make it the card of choice for every tier of computer user short of the enthusiast, from home to academia to industry. Well done AMD.
What is the difference between eDRAM and HBM? Do you think we'll ever see HBM on a CPU? Do you think better AMD drivers will close the performance gap at lower resolutions? Do you think Nvidia pays companies to optimize for their GPUs and put less focus on AMD GPUs, especially in 'the way it is meant to be played' sponsored games?
eDRAM uses a dedicated die or is integrated on-die with the logic. HBM is composed of several DRAM dies stacked on top of each other. eDRAM tends to be connected via a proprietary link, making each implementation unique, whereas HBM has a JEDEC standard interface.
HBM on a CPU is only a matter of time. Next year HBM2 arrives and will bring capacities that a complete consumer system can utilize.
Fiji seemingly does have some driver issues, judging by some weird frame time spikes. Fixing these will result in a smoother experience but likely won't increase the raw FPS by much.
With DX12 and Vulkan coming, I'd expect titles just going into development will focus on those new APIs than any vendor specific technology. This does mean that the importance of drivers will only increase.
"HBM on a CPU is only a matter of time." That is actually one of the more interesting and exciting things coming out of the Fiji launch. The effect of slower system memory on AMD APUs has been pretty well documented. It will be interesting to see if we get socket AM4 motherboards with built in HBM2 memory for APUs to use as opposed to using system ram at all. It's also exciting to see that Nvidia is adopting this memory the next go around and who knows how small and powerful they can get their own GPUs to be. Definitely a great time for the industry! Since the Fury X is reasonably close to the 980 Ti, I would love to pick one up. AMD put a lot of the legwork in developing HBM, and without the Fury X, Nvidia likely wouldn't have even created the $649 variant that essentially obsoleted the Titan X. For those reasons feel like they deserve my money. And also I do want to play around with custom BIOS on this card a bit. Now...if only there were any available. Newegg? Tiger? Amazon? Anyone? If they can't keep the supply chains full, impatience might drive me to team green after all.
Nah, just HBM for graphics memory. As HSA APUs shouldn't require the memory to be in two places at the same time, this will alleviate the latency of copying data from system memory to graphics memory. What's more, they don't really need more than 2GB for an APU.
I'm not sure, however, that such bandwidth will make a massive difference. The difference in performance between 2133MHz and 2400MHz DDR3 is far smaller than that between 1866 and 2133 in general. You'd need to beef up the APU to take advantage of the bandwidth, which in turn makes for a larger chip. 2GB would have 250GB/s+ bandwidth with HBM1 at 500MHz, nearly ten times what is currently available, and it would seem a huge waste without more ROPs at the very least. At 14nm, sure, but not until then.
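To put that "nearly ten times" figure into rough numbers (theoretical peak bandwidth only, ignoring real-world efficiency; the two-stack configuration for a 2GB setup is an assumption):

# Theoretical peak bandwidth in GB/s (decimal GB).
def gb_per_sec(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Two 1GB HBM1 stacks: 1024-bit each, 500 MHz DDR -> 1 GT/s per pin.
hbm1_2gb = 2 * gb_per_sec(1024, 1e9)        # ~256 GB/s
# Dual-channel DDR3-2133: two 64-bit channels at 2133 MT/s.
ddr3_2133 = 2 * gb_per_sec(64, 2133e6)      # ~34 GB/s

print(f"HBM1, 2 stacks:     {hbm1_2gb:.0f} GB/s")
print(f"DDR3-2133 dual ch.: {ddr3_2133:.0f} GB/s")
print(f"Ratio: ~{hbm1_2gb / ddr3_2133:.1f}x")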
Fixing the peaks and troughs would improve the average frame rates a little, I imagine, but not by a large amount.
Drivers are a sore point, especially considering the CPU load in GRID Autosport for the Fury X. Could they not just contract some of this out? VCE was unsupported for a while, TrueAudio looks to be going the same way, and if NVIDIA's drivers are less demanding than AMD's, surely there must be something that can be done to improve the situation?
"do you think nvidia pays companies to optimize for their gpus and put less focus on amd gpus? especially in 'the way it is meant to be played' sponsored games?"
I have noticed quite a few people spitting fire about this all over the interwebs these days. The state of PC ports in general is likely more to blame than anything NVidia is doing to sabotage AMD. To differentiate the PC versions from their console counterparts and get people like us to buy $600 video cards, the PC versions need some visual upgrades. That can include anything from high-res textures to physics, particle, and hair effects. That is what NVidia's GameWorks is all about.
Most of the rumors surrounding NVidia deliberately de-optimizing a game at the expense of AMD revolve around HairWorks and The Witcher 3. HairWorks is based on tessellation, which NVidia GPUs excel at compared to their AMD counterparts. Now why didn't NVidia just employ TressFX, a similar hair rendering technology used in Tomb Raider that performed well on both vendors' cards? TressFX is actually a DirectCompute-based technology co-developed by AMD. NVidia scaled back much of the DirectCompute functionality in their Maxwell 2 GPUs to avoid cannibalizing their own workstation GPU business. Workstation GPU margins tend to be extremely high, as businesses can afford to shell out more dough for hardware. The Titan Black was such a DirectCompute beast that many workstation users purchased it over much higher priced workstation cards. The Titan X and GTX 980 are now far less attractive options for workstations, but also unable to perform as well using TressFX. The solution is to develop a technology using what your GPU does do well - tessellation - and get developers to use it. The decision was likely made purely for business reasons and only hurt AMD because tessellation was a weak point for their cards, although less so for the R9 Fury X.
The real problem here is likely shoddy PC ports in general. Big studio titles generally are developed for console first and ported to PC later. In the previous console generation that meant having a group develop a title for the PowerPC-based Xbox 360 and the Cell-based PS3, and then finally porting to x86-based PC systems, often after the console titles had already launched. With the shift to the new generation of consoles, both the Xbox One and Sony PS4 are AMD x86 based, meaning it should be extremely easy to port these games to similarly x86-based PCs. However, Mortal Kombat X and Batman: Arkham Knight are two titles that recently had horrendous PC launches. In both cases the port was farmed out to a studio other than the primary studio working on the console version. The interesting part is that MKX was not a GameWorks title, while Arkham Knight was offered for free with 980 Ti video cards. I highly doubt NVidia would add GameWorks purely to screw over AMD when the result was a major title they promoted with their flagship card that doesn't even work. It is actually a huge embarrassment, for NVidia, but more so for the studio handling the PC port. The new console era was supposed to be a golden age for PC ports, but instead it seems like an excuse for studios to farm the work out and devote even less time to the PC platform. A trend I hope doesn't continue.
1. eDRAM takes up more space, uses more energy, and is slower than HBM. 2. HBM will make sense for GPUs/APUs, but not for use as system RAM. 3. Yes, almost a guarantee - but how long that will take is anybody's guess. 4. They don't "pay them" so to speak; they have contractual restrictions during the game development phase that prevent AMD from getting an early enough and frequent enough snapshot of games to optimize their drivers in anticipation of the game's release. This makes nVidia look better in those games. The next hurdle is that GameWorks is intentionally designed to exploit nVidia's strengths over AMD and even over their own older-generation cards. Crysis 2's tessellation is the most blatant example.
"Crysis 2's tessellation is the most blatant example"
No it wasn't. What you see in the 2D wireframe mode is completely different to what the 3D mode has to draw as it doesn't do the same culling. The whole thing was just another meaningless conspiracy theory.
> do you think nvidia pays companies to optimize for their gpus and put less focus on amd gpus? especially in 'the way it is meant to be played' sponsored games
I don't think too many game developers check the device ID and lower the game performance when it's not the sponsor's card. However, I think that through the developer relations program (or something like that), games carrying those logos tend to perform better on the respective GPU vendor's hardware, as the game was developed with that vendor in mind, and with support from the vendor.
The game would be tested against the other vendors as well, but perhaps not as much as with the sponsor.
Hi, I'd like to point out why the Mantle performance was low. It was due to memory overcommitment lowering performance because of the 4GB of VRAM on the Fury X, not a driver bug (it's a low-level API anyway, there is not much for the driver to do). BF4's Mantle renderer needs a surprisingly high amount of VRAM for optimal performance. This is also why the 2GB Tonga struggles at 1080p.
Curious as to why both Tom's Hardware and AnandTech were kind to the Fury X. Is AnandTech changing review style based on editorial requests from Tom's now? Because an extremely important point is overclocking performance on stock cards. And while the article mentions the tiny 4% performance boost the Fury X gets from overclocking, it doesn't show what the 980 Ti can do. Or even more importantly... that the standard GTX 980 can overclock 30-40% and come out ahead of the Fury X, leaving the Fury X a rather expensive, limited hardware concept at best. Also important to mention that the video encoding test was pointless, as Nvidia has moved away from CUDA-accelerated video encoding in favour of NVENC hardware encoding. In fact, a few drivers ago they fully disabled CUDA-accelerated encoding to promote the switchover.
Then you must have selective reading, because they do mention it. In particular, they say that if they only got a 7% OC, then the card would perform basically the same, and it did.
No need to do an OC to the 980ti in that scenario.
Plus, they also mention the Fury X is still locked for OC. Give MSI and Sapphire (and maybe AMD as well) time to deliver on their promise of the Fury having better overclocking controls.
Again, going back to the problem of "missing test data" in this review: under an 11% GPU clock OC (the highest possible), which resulted in a net 5% FPS gain, the card hit nearly 400W (just the card, not total system) and 65C after a long session. Which means any more heat than this, and throttling comes into play even harder. This package is thermally restricted. That's why AMD went with an all-in-one cooler in the first place... because it wanted to clock it up as high as possible for consumers, but knew the design was a heat monster.
Outside of full custom loops, you won't be able to get much more out of this design even with fully unlocked voltage, your issue is still heat first. This is why it's important to show the standard GTX 980 OC'd compared to the Fury X OC'd. Because that completely changes the value proposition for the Fury X. But both Tom's and Anandtech have been afraid to be harsh on AMD in their reviews.
Great point, it further backs the point that I and others have made that AMD already "overclocked" Fury X in an all-out attempt to beat the 980 Ti and came close to hitting the chip's thermal limits, necessitating the water cooler. We've seen in the past, especially with Hawaii, that lower operating temps = lower leakage = lower power draw under load, so it's a very real possibility they could not have hit these frequencies stably with the full chip without WC.
When you look at what we know about the Air-Cooled Fury, this is even more likely, as AMD's slides listed it as a 275W part, but it is cut down and/or clocked lower.
What people also overlook is the fact that the Fury X is using more power than the 980ti, for example, while benefiting from a reduction of 20w-30w from using HBM. So the actual power efficiency of the GPU is even lower than it shows.
The Fury X does not throttle at 65C. 65C is when the card begins ramping up the fan speed. It would need to hit 75C to start throttling, which given the enormous capacity of the cooler would be very hard to achieve.
Well that's just wrong, the chart clearly shows "Total System Power", which has a 415W - 388W = 27W power difference between the 980 Ti and Fury X. Reading is not your strong point.
275w card vs. a 250w card... 25w difference, who knew math was so easy.
Are you supposed to beat it up? Last time I checked, water cooling is rather expensive and limited; are you forgetting there are 2 more air-cooled, lower-priced launches coming?
I wasn't saying to put it up against a water cooled gtx 980. I was saying a stock reference design gtx 980 when overclocked to the max, will be competitive with and possibly even beat the fury x when overclocked to the max.
Also the 2 air cooled versions coming out will be using a cut down die.
To be clear here, we do not coordinate with Tom's in any way. I have not talked to anyone at Tom's about their review, and honestly I haven't even had a chance to read it yet. The opinions you see here reflect those opinions of AnandTech and only AnandTech.
As for overclocking, I already answered this elsewhere, but we never do direct cross-vendor comparisons of OC'd cards. It's not scientifically sound, due to variance.
Finally, for video encoding, that test is meant to be representative of a high-end NLE, which will use GPU abilities to accelerate compositing and other effects, but will not use a fixed-function encoder like NVENC. When you're putting together broadcast quality video, you do not want to force encoding to take place in real time, as it may produce results worse than offline encoding.
The one plus side of doing roundup reviews of oc'd cards is that very quickly just about all available models *are* oc'd versions, and reference cards cost more, so such roundups are very useful for judging the performance, etc. of real shipping products as opposed to reference cards which nobody in their right mind would ever buy.
From this review I get a feeling of something like: they are trying, they are poor, we have to be kind to them. Ryan wrote that the cooling is great; I read a few other reviews where the cooling was criticized as noisy and inconsistent. I don't know if there is also a bit of good heart involved instead of facts, or whether it differs card by card, but review samples are usually more polished than what you'd get from some local IT shop.
AMD is a very big company with lots of people on huge salaries; for the sake of our society we should be harsh about such big companies' failures, but if they die, they will free up the space and other companies can emerge.
Do you think that there are more optimizations in the drivers that would increase the performance of the Fury X vs the 980 Ti? I understand that it's a bit like peering into a crystal ball to know what kind of performance driver updates will bring, but I'm thinking your guesses would be more educated than most people's.
I still don't understand how these numbers are correct. The 290X performance is completely wrong on the Beyond Earth test. Where you are pulling only 86.3 avg fps on a standard 290X I am pulling 110.44. I noticed the same when you showed Star Swarm test before. I can supply proof if need be.
The 290X by its very nature throttles, which is why we also have the "uber" results in there. Those aren't throttling, and are consistent from run-to-run.
So you guys are using a stock 290x? that would make sense then. Third party cooled models work much better and more consistently, and many have overclocks.
Yes, this goes back to AT's long-standing policy of using the reference cooler and stock clocks provided by the IHVs, which dates to a dust-up over AnandTech using non-reference-cooled and overclocked EVGA cards. AMD fanboys got super butthurt over it, and now they reap the policy that they sowed.
The solution for AMD is to design a better cooler or a chip that is adequately cooled by their reference coolers; it looks like they got the memo. Expect air cooled Fury and all the 300 series Rebrandeons to not run into these problems as all will be custom/air-cooled from the outset.
It's not really realistic though; I can't get a reference AMD 280, 280X, 290, or 290X anywhere in Europe except maybe used, and I couldn't get those reference cards even a month after launch. Same pretty much goes for Nvidia, but a lot of high-end Nvidia cards still use the blower design for some reason, so I can at least get a reference lookalike there - not so with AMD cards.
Which again makes the results a bit skewed if the performance of said cards really is that dependent on better cooling solutions, since better cooling solutions are all people can buy.
"Depending on what you want to believe the name is either a throwback to AMD’s pre-Radeon (late 1990s) video card lineup, the ATI Rage Fury family. Alternatively, in Greek mythology the Furies were deities of vengeance who had an interesting relationship with the Greek Titans that is completely unsuitable for publication in an all-ages technical magazine, and as such the Fury name may be a dig at NVIDIA’s Titan branding."
Took you guys a while to get this review out, but now I can see why. It's very detailed. Though I am not impressed with the card itself; it is a decent effort by AMD, but the product feels unfinished, almost beta quality. No HDMI 2.0 is also a big downer, along with "only" 4GB of RAM.
Great review :) Been waiting eagerly for it, Anandtech as always is so much more detailed than other reviews.
In case you're interested though, I did pick out a few minor typos ;) Page numbers going by the URLs...
Page 11: "while they do not have a high-profile supercomputer to draw a name form"
Page 14: "and still holds “most punishing shooter” title in our benchmark suite" probably should be "holds the"
Page 19: The middle chart is labeled "Medum Quality"
Page 25: "AMD is either not exposing voltages in their drivers or our existing tools [...] does not know how to read the data"
Page 26: "True overclocking is going to have to involve BIOS modding, a risker and warranty-voiding strategy"
Page 27: "the R9 Fury X does have some advantages, that at least in comparing reference cards to reference cards, NVIDIA cannot touch" - very minor, but I believe the comma should be after "that" rather than "advantages"
Page 27: "it is also $100 cheaper, and a more traditional air-cooled card design as well" - I believe should be either "and has a [...] card design" or "and is a [...] card"
Cheers :) Sorry if I seem nitpicky. If anything, it means I read every word :P
"Mantle is essentially depreciated at this point, and while AMD isn’t going out of their way to break backwards compatibility they aren’t going to put resources into helping it either. The experiment that is Mantle has come to an end."
"Depreciated" should probably be deprecated when referring to software that still technically works but is no longer supported or recommended. Good discussion nonetheless, that paints a clearer picture of the current status of Mantle as well as some of the fine tuning that needed to be done per GPU/arch.
Also, just wondering whether you still feel AMD launched Fury X and performed its run-up exactly as they planned; it does sound like you're more in agreement with me now that the 980 Ti really caught them off guard and changed their plans for the worse.
Good review, and for those that called [H]'s review somehow biased, well here's AT, where the Fury X again gets its ass handed to it by the 980 Ti and Titan X.
The problem is that a ton of those transistors are used for the memory with the Fury. With the same amount of ROPs the Fury X is just really held back.
I think once they have access to a die shrink and can increase the ROPs and tweak some things, what they have with Fiji will be a monster. Because of their lack of resources compared to NVidia, they couldn't sit around and do what NVidia is doing, and instead had to spend their money designing something that will work better once the smaller process is available.
You need to be mad to think HardOCP is not a legit review site; they show you all the numbers you need to know and the time frame. I must admit I've added a lot of people on the forums to my 'biased and not worth paying attention to' list because of their bashing of that site.
Thanks, Ryan. I wonder if there is any chance of doing a multi-GPU shootout between the Fury X and 980 Ti, since as you alluded to in your closing remarks, "4K is arguably still the domain of multi-GPU setups." There would be two countervailing effects in such a test: close proximity of the cards will not adversely affect the Fury X, but CPU bottlenecking could be exacerbated by having two GPUs to feed.
With the proper case setup, the close proximity of air-cooled cards like the 980 Ti isn't as big of a deal as it may seem, due to their cooler design drawing in air from the end of the card. If you have a fan blowing fresh air straight into the GPU's intake on the end of the card, you alleviate a lot of the proximity issues. Slanting the cards rather than keeping them at a 90 degree angle helps as well (see cases like the Alienware Area 51).
I'm not sure how relevant a CrossFire test of the Fury X is to most people, as many will not be able to use more than one Fury X due to the 120mm fan requirement.
I feel like a lot of people don't account for this issue as much as they should. Those running multi-GPU builds will naturally gravitate more towards NVidia now simply because they do not have room for more than one Fury X in their case.
This is one of the biggest problems with Fury X, yet a lot of reviews gloss over it way too much. Just fitting it in a normal case which will likely already have a water cooler for the CPU will be a problem for many.
I just don't understand the issue of finding a place for the rad. I'm using a Phantom 410 - a case that has been around for years. It has no less than 5 mounting locations for a 120mm fan/rad. I find it hard to believe that current cases are no longer able to accommodate 2 120mm rads. Even a lot of mini ITX cases can do this like the BitFenix Phenom.
At the moment the answer is no, simply due to a lack of time. There's too much other stuff we need to look at first, and I don't have a second Fury X card right now anyhow.
Ryan, any reason you didn't include OC results for 980 Ti? I think that's a pretty big oversight because it would have shown just how far 980 Ti gets ahead of Fury X with them both OC'd.
Never mind, saw your reply to chizow. I still think you guys should include OC vs OC with a disclaimer that performance may vary a bit due to sample quality.
While 4GB is "enough" for a single Furry X, as it lacks raw performance to really go beyond that, do you think it will be the same for a 2x4GB crossfire build? It seems we would need a little more to fully enjoy the performance gain provided by the crossfire.
Yep, also more likely to use up GPU overhead to enable any sliders that were disabled to run adequately with just 1xGPU, crank up AA modes etc. Quite easy to surpass 4GB and even 6GB under these conditions, even at 1440p.
Just to throw this out here, I believe the current cooler might be used on the Fury X2 (MAXX?). Why? Both the power delivery and the cooling are over-engineered, so that might point to it being used on another card in the future. Why? Economy of scale: if AMD gets a lot of 500W coolers from Cooler Master then they get them cheaper...
I think the cooler is over-engineered because AMD didn't want a repeat of the hot and loud 290/290X stock coolers, because they imagined someone with a crap case would buy one, and because even the weakest CLLC Cooler Master had was capable of handling a Fury X.
These 120mm CLLC are far superior to heatpipe heatsinks on GPUs. It's the equivalent of slapping a 120mm CPU tower cooler onto your GPU. Standard GPU heatsinks, on the other hand, are on a similar performance level to low profile CPU coolers.
You can probably imagine the issue with putting on a 120mm tower cooler onto a GPU.
Even at 1440p with all of the settings maxed (8xMSAA, post-processing on, Map set 4, x64 tessellation) I can get the 980 Ti to score below 300fps in TessMark. The Fury X numbers are underwhelming by comparison.
I've always rooted for AMD in the CPU and GPU arenas, so I'm sad to see the writing so clearly etched on the wall now. They've all but surrendered in the CPU arena, and now a sub-par flagship GPU marketed at 4K gaming BUT without HDMI 2.0 support - really? I just hope whoever buys them can keep them intact and we don't end up with a sell-off resulting in an Intel CPU monopoly and an Nvidia GPU monopoly.
Seems like a decent card - but not one that (as somebody who is brand-agnostic) I could justifiably buy. Hopefully it comes in with a price drop or we see factory OC versions which bump performance up another 10%.
I would rather buy an AMD card right now, purely because Freesync monitors are totally happening and they aren't overly expensive compared to regular monitors.
On the other hand, even if this matched the GTX 980 Ti precisely in performance, the fact that it has only 4GB of VRAM is certainly an issue.
So whilst this product is impressive compared to its predecessors, overall, it's (IMO) mostly a disappointment.
Hopefully next year (I'm gunna need a new GFX card pretty badly by then) brings some more impressive cards.
There, there. You did alright champ, you did alright. *pats on back* You went against the reigning world champion, and still survived. Hopefully some korean cellphone tycoons saw how hard you fought and may sponsor you for another shot at the title.
It's been a major issue for a while, and one that doesn't look to be solved anytime soon. Perhaps it's one of the reasons for developing Mantle in the first place...?
Kind of disappointed in all honesty. No real OC capability despite the water cooler? So basically you have a card that costs the same as a 980 Ti but with worse performance, less vRAM, no HDMI 2.0, and a broken 4k decoder? Not really a very good prospect for AMD unless they're willing to cut prices again.
The R9 Nano will suffer the most from the lack of HDMI 2.0 as it's the card that could be used in a gaming HTPC. It's going to be competing directly with the mini GTX 970 and I really don't know if it can win that battle.
Whew! Now that's a video card and architecture review. Mad props for all the work that involved, Ryan. The current Fury X is a fine card and highlights a lot of the technology we are going to see on the next-gen 14/16nm cards. CLLC and HBM are just great, and it will be very interesting to see how slick Nvidia can make their upcoming Pascal cards. +1 to AT for this solid review.
Have to say though, Fury X disappointed me. I am not an AMD/ATI fan (ever since I spent two dreadful months forcing my DirectX 9 semester project to run on an ATI card), but I had hoped for quite a bit more, if only for the guts to be an early adopter of HBM and for heating up the GPU arena a bit so nVidia has to try harder.
The noise from the pump is not something I'd classify as a "whine", or annoying for that matter. It is a pump that rocks under load, but it doesn't have much room to ramp down at idle, so it's still putting out a bit of noise at idle relative to air cooled cards.
Could you please add min, average and max FPS to all game benchmarks? I love AnandTech, and as you showed in Shadow of Mordor, the average FPS is a bad way to look at performance when the averages were the same but the min FPS was worlds apart. That is why I only care about HardOCP and HardwareCanucks reviews, since they post that information. This just isn't good enough for me; I hope you can accommodate those of us who want the whole picture.
Also, why didn't you include the 700 and 900 series in the tessellation benchmarks? At least the 900 series should have shamed the inefficient GCN series and shown that it is at best at Kepler-level performance. Don't get me wrong, the review was okay, but it was like an AMD card: just 90% there, but at the same price point.
You know, I think you could fit a few more ads, as some content is still visible. That said, the interposer could be made in two or four smaller dies and just fab the PHY on the interposer.
If you have multiple GPUs on a card, you still have to have a PCIe switch in order to distribute the PCIe 3.0 x16 input into the card. You can see that the dual-GPU card has a PLX8747 on it, which splits the PCIe 3.0 x16 lanes from the CPU into two lots of x16, one for each GPU (PLX chips do multiplexing with a FIFO buffer). Having three or four GPUs means you have to look into more expensive PCIe switches, like the PLX8764 or the PLX8780, and then provide sufficient routing, which most likely adds PCB layers.
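As a rough picture of the topology being described (purely an illustrative sketch of the lane split, not taken from any board schematic):

# Illustrative sketch of the dual-GPU lane topology described above;
# not board documentation, just the general idea.
dual_gpu_card = {
    "upstream": "CPU PCIe 3.0 x16",
    "switch": "PLX PEX8747 (48 lanes: 16 up, 32 down)",
    "downstream": {
        "GPU 0": "PCIe 3.0 x16",
        "GPU 1": "PCIe 3.0 x16",
    },
}

# Both GPUs see a full x16 link, but they share the single x16 uplink
# to the CPU, buffered through the switch.
for gpu, link in dual_gpu_card["downstream"].items():
    print(f"{gpu}: {link} behind {dual_gpu_card['switch']}")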
You really should have made the point that, in order to get similar load noise level out of the 980 Ti, you're looking at greater expense because the cooler will need to be upgraded. That extra expense could be quite significant, especially if buyers opt for a closed loop cooler.
The other point is that AMD had better offer this card without the CLC so people who run water loops won't have to buy a fancy CLC that they're not going to use.
But you would just get an air cooler that you're not going to use. And I doubt there would be any price difference, because they'll get beefy air coolers.
No, it sounds efficient. There is no sound reason to manufacture and sell a complex part that people aren't going to use. There is no sound reason to buy said part if one plans to not use it.
Only landfill owners and misanthropes would cheer this sort of business practice. GPUs should be offered bare as an option for watercooling folk.
I think more waste would be generated from cards bought by 'tards who try running them with no heatsink than would be saved by omitting the cooler. Don't underestimate the number of people who know absolutely nothing about computers and who will make absolutely idiotic mistakes like that.
False dilemma fallacy, really. People buy OEM CPUs and install them in their machines all the time. Do they not put a cooler on the CPU because they're stupid? You'll need to come up with a better reason why my idea isn't a good one.
Hard to tell... when I moved to water, overall power consumption went down (water-cooled CPU and GPU)... but I run with a big radiator and pump (passive cooling, no fan on the radiator). As for the entry price, water cooling is more expensive in general... I doubt a 980 Ti could hit the same price level with a water loop, even a small one like the Fury X has...
Power consumption would not change much, because you're adding in a pump and swapping fans.
Pump alone can be 3W and higher, although most are around 18W at full speed. 18W is good enough for a CPU+GPU+2xRAD loop. (someone might want to correct me on this)
The pumps you find in these CLLCs are around 3W to 4W, with the radiator fan being 2W on average. But those numbers only apply if they are running at full speed, which usually isn't necessary.
Love the work here! Thanks! The R9 Nano is what Fiji truly is, but due to the costs of HBM, AMD wants to recover those costs quickly by overclocking the Nano and fitting it with a liquid cooler so that it competes with the 980 Ti.
I believe the R9 Nano will be cheaper, as it will have noticeably lower performance (especially outside 4K resolutions) and it doesn't have the liquid cooler. It is the Fury cards that are binned, which makes more sense and keeps things simple.
This is just a small part of the big picture. HBM technology is what they need for their APUs, desperately. HBM is also the next step for more integration, built-in RAM.
Well sadly, even if the 4GB limitation of HBM1 is somehow enough for simple VRAM use, it will probably be way too tight as a shared main memory for CPU + GPU.
True, but their APUs don't compete in the high-end desktop/laptop space in terms of CPU, so they fall into the entry level and mid-range. With a 4GB APU, a design can leave out the DIMM slots for cost and size reduction. It could also be used as a high-end tablet CPU (probably on the next process node). At the mid-range, they might have 2GB APUs complemented with DIMM slots.
Right. I just hope it doesn't approach the Fury cards in terms of price. I have a gut feeling that it will be significantly cheaper, taking a cue from the exclusion of the Fury branding.
Ryan - what do you think are some of the issues in AMD getting good drivers out? It seems to be making a significant dent in their GPUs' overall performance.
Wow I can't believe I read the entire article - whew, but that was great. I only understood about half of it, but I'm now that much more intelligent (I think). :)
OMG, it's a monster :D I will have one in the near future ;-) Now I can see that the drivers need tweaking - we need a new Omega driver for the Fury X. Imagine +30-40% in every DX11.1 game, and DX12 in Windows 10... that's all. Thanks for the review!
So basically HBM is the 40- or 80-pin ribbon cable of today, right Ryan? Do you see serial buses being abandoned for parallel again someday, in your opinion?
Horses for courses. HBM is perfectly suited to graphics memory - wide is really the only thing you need, and even the "slow" speed of HBM is beyond what's needed. Making the memory faster makes virtually no performance difference. The only reason it even operates as fast as it does is to get the bandwidth up (bandwidth is the product of speed and width).
Think of a hose - with the same total pressure, a thinner hose will have a higher "speed" (as the water coming out will have a faster velocity) than a wider hose, but when all you need to do is fill the bucket, a wider hose is obviously better - which is the case here.
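To put rough numbers on that width-versus-speed point, here is a minimal back-of-the-envelope sketch in Python. The figures (a 4096-bit HBM1 stack at roughly 1 Gbps per pin versus a 384-bit GDDR5 bus at 7 Gbps) are assumptions taken from public spec sheets, not from this discussion:

# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# Assumed figures: Fiji-style HBM1 (4096-bit, ~1 Gbps/pin) vs. a 384-bit GDDR5 card at 7 Gbps/pin.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

hbm1 = bandwidth_gb_s(4096, 1.0)    # wide and "slow"
gddr5 = bandwidth_gb_s(384, 7.0)    # narrow and fast
print(f"HBM1:  {hbm1:.0f} GB/s")    # ~512 GB/s
print(f"GDDR5: {gddr5:.0f} GB/s")   # ~336 GB/s

The wide-and-slow bus ends up filling the bucket faster, which is exactly the hose analogy above.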
Your attempt at a nozzle-pressure fluid dynamics analogy is solid, but if he cannot fathom that a wider bus allows for fewer round trips, then he's an idiot.
I had to stop reading after the author used the term "reticle limit" to describe a 28nm chip. I really wish the "enthusiast press" wouldn't use real terms they don't understand.
Great review here! It was a good read going through all the technical details of the card, I must say. The Fury X is an awesome card for sure. I am trying to wait for next gen to buy a new card as my 280X is holding its own for now, but this thing makes it tempting not to wait. As for the performance, I expect it will perform better with the next driver release. The performance is more than fine even now despite the few losses it had in the benches. I suspect that AMD kind of rushed the driver out for this thing and didn't get enough time to polish it fully. The scaling down to lower resolutions kind of points that way for me anyways.
AMD/ATI, what a fail. Over the past 15 years I have only gone Nvidia twice, for the 6600GT and 9800GT, but now I am using a GTX 980. Not a single mid-range/high-end card in AMD/ATI's lineup is correctly priced. Lowering prices by 15-20% to account for the power usage, poorer drivers and fewer features would make them more competitive.
At the high end you "may" have a point.. but what is the 960 bringing to the table against the 380? Not much.. not much at all. How about the 970 vs the 390? Again.. not much.. and in crossfire/sli situations the 390 (in theory..) should be one helluva bang for the buck 4k setup.
There will be a market for the Fury X.. and considering the effort they put into it, I don't believe it's going to get the 15-20% price drop you're hoping for.
Slightly better performance while pulling less power and putting out less heat, and in the 970's case, is currently about $10 cheaper. Given that crossfire is less reliable than SLI, why WOULD you buy an AMD card?
Maybe because people want decent performance above 3.5 GB of VRAM? Or they don't appreciate bait and switch, being lied to (ROP count, VRAM speed, nothing about the partitioning in the specs, cache size).
How do you feel about the business practice of sending out a card with faulty, cheating drivers that lower IQ regardless of what you set in game, so you can win those benchmarks? It's supposed to be apples to apples, not apples to mandarins.
How about we wait until unwinder writes the software for voltage unlocks before we test overclocking, those darn fruits again huh?
Nvidia will cheat their way through anything it seems.
It's pretty damning when you look at screens side by side, no AF Nvidia.
FreeSync? Not as good as G-Sync, and it's still not free. It takes similar hardware added to the monitor, just like G-Sync.
Built-in water cooling? Just something else to go wrong and be more expensive to repair, with the possibility of it ruining other computer components.
Disgust for NVidia's shitty business practices? What are those? Do you mean like not giving review samples of your cards to honest review sites because they told the truth about their cards, so now you are afraid that they will tell the truth about your newest POS? Sounds like you should really hate AMD's shitty business practices.
This card is not the disappointment people make it out to be. One month ago this card would have been a MASSIVE success. What is strange to me is that they didn't reduce price, even slightly to compete with the new 980 ti. I suspect it was to avoid a price war, but I would say at $600 this card is attractive, but at $650 you only really want it for water cooling. I suspect the price will drop more quickly than the 980 ti.
So it is as expensive as the 980 Ti while delivering less performance, and it requires watercooling. Once Nvidia follows up with a TITAN Y including HBM, it's all over for the red guys.
No, they couldn't have. Fury X is already a 275W card, and that's with the benefit of low-temperature leakage from the water cooler *AND* the benefit of a self-professed 15-20W TDP saving from HBM. That means in order for Fury X to still fall 10% short of the 980 Ti, it is already using 25+20W, so 45W more power.
Their CUSTOM cooled 7/8th cut Fury is going to be 275W typical board power as well, and it's cut down, so yeah, the difference in functional unit power is most likely going to be about the same as the difference in thermal leakage due to operating temperatures between water and custom air cooling. A hot leaf blower, especially one as poor as AMD's reference cooler, would only be able to cool a 6/8 cut Fiji or lower, but at that point you might as well get a Hawaii-based card.
Your posts don't even try to sound sane. I wrote about the GTX 480, which was designed to run hot and loud. Nvidia also couldn't release a fully-enabled chip.
Ignore the point about the low-grade cooler on the 480 which ran hot and was very loud.
Ignore the point about the card being set to run hot, which hurt performance per watt (see this article if you don't get it).
How much is Nvidia paying you to astroturf? Whatever it is, it's too much.
this AMD card pumps out more heat than any NVidia card. Just because it runs a tad cooler with water cooling doesn't mean the heat is not there. It's just removed faster with water cooling, but the heat is still generated and the card will blow out a lot more hot air into the room than any NVidia card.
My excitement with HBM has subsided as I realized that this is too costly to be implemented in AMD's APUs even next year. Yet, I hope they do as soon as possible even if it would mean HBM on a narrower bus.
I thought it was great as well.. It had a lot more meat to it than I was expecting. Ryan might have been late to the party, but he's getting more feedback than most other sites on his review, so that shows it was highly anticipated.
I don't understand why the Fury X doesn't perform better... Its specs are considerably better than a 290X/390X and its memory bandwidth is far higher than any other card out there... yet it still can't beat the 980 Ti, and it should also be faster than it already is relative to the 290X. It just doesn't make sense.
Perhaps DX11 is holding it back. As far as I understand it, Maxwell is more optimized for DX11 than AMD's cards are. AMD really should have sponsored a game engine or something so that there would have been a DX12 title available for benchmarkers with this card's launch.
Great stuff. Can we get benchmarks with these cards overclocked? I'm thinking the 980 Ti and the Titan X will scale much better with overclocking compared to the Fury X.
Once again, the 400 amp number is tossed around as how much power the Fury X can handle. But think about that for one second. Even an EVGA SuperNOVA 1600 G2 power supply is extreme overkill for a system with a single Fury X in it, and its +12V rail only provides 133.3 amps.
That 400 AMP number is wrong. Very wrong. It should be 400 watts. Push 400 Amps into a Fury X and it most likely would literally explode. I would not want to be anywhere near that event.
Okay, see, it's not 12V * 400A = 4800W. It's 1V (or around 1V) * 400A = 400W. 4800W would trip most 115VAC circuit breakers, as that would be over 41A on 115VAC, before you even start accounting for conversion losses.
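For anyone who wants to check that correction, here is a minimal sketch of the arithmetic; the ~1 V core voltage and the 115 VAC household circuit are assumptions used for illustration:

# P = V * I: the quoted "400 A" only makes sense at GPU core voltage (~1 V), not at 12 V.
core_voltage_v = 1.0
quoted_current_a = 400.0
print(f"At ~1 V core: {core_voltage_v * quoted_current_a:.0f} W")                  # ~400 W

# If 400 A really flowed at 12 V, the wall draw would be absurd:
watts_at_12v = 12.0 * quoted_current_a                                             # 4800 W
print(f"At 12 V: {watts_at_12v:.0f} W -> {watts_at_12v / 115:.1f} A at 115 VAC")   # ~41.7 A, breaker trips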
Anyone hear about Nvidia lowering their graphics quality to get a higher frame rate in reviews vs Fury? Reference is the SemiAccurate forum, 7/3 ("Nvidia reduces IQ to boost performance on 980 Ti?").
Wow very interesting, thanks bugsy. I hope those guys at the various forums can work out the details and maybe a reputable tech reviewer will take a look.
I'm still a bit perplexed about how AMD gets an absolute roasting for CrossFire frame-pacing - which only impacted a tiny amount of users - while the sub-optimal DirectX 11 driver (which will affect everyone to varying extents in CPU-bound scenarios) doesn't get anything like the same level of attention.
I mean, AMD commands a niche when it comes to the value end of the market, but if you're combining a budget CPU with one of their value GPUs, chances are that in many games you're not going to see the same kind of performance you see from benchmarks carried out on mammoth i7 systems.
And here, we've reached a situation where not even the i7 benchmarking scenario can hide the impact of the driver on a $650 part, hence the poor 1440p performance (which is even worse at 1080p). Why invest all that R&D, time, effort and money into this mammoth piece of hardware and not improve the driver so we can actually see what it's capable of? Is AMD just sitting it out until DX12?
Because lying to customers about VRAM performance, ROP count, and cache size is a far better way to conduct business.
Oh, and the 970's specs are still false on Nvidia's website (it claims 224 GB/s, but that is impossible because of the 28 GB/s partition and the XOR contention; the more the slow partition is used, the closer the combined figure can get to the theoretical 224, but the more it's used, the more the fast partition is slowed by the 28 GB/s sloth, so it's a catch-22).
It's pretty amazing that Anandtech came out with a "Correcting the Specs" article but Nvidia is still claiming false numbers on their website.
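For reference, the partition math behind that complaint can be sketched as follows, using the corrected GTX 970 figures (7 Gbps GDDR5 on a 224-bit fast segment plus a 32-bit slow segment) as reported in the spec-correction coverage:

# Peak bandwidth per memory segment = bus width (bits) * 7 Gbps / 8.
DATA_RATE_GBPS = 7.0
advertised = 256 * DATA_RATE_GBPS / 8   # 224 GB/s -- the number on the spec sheet
fast_3_5gb = 224 * DATA_RATE_GBPS / 8   # 196 GB/s -- the 3.5 GB segment
slow_0_5gb = 32 * DATA_RATE_GBPS / 8    #  28 GB/s -- the 0.5 GB segment
print(advertised, fast_3_5gb, slow_0_5gb)
# The slow segment shares crossbar resources with the rest of the memory subsystem,
# so the two peak figures cannot simply be added in practice.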
This card and the 980 Ti meet two interesting milestones in my mind. First, this is the first time 1080p isn't even considered. Pretty cool to be at the point where 1080p is considered a bit of a low resolution for high-end cards.
Second, it's the point where we have single cards that can play games at 4K, with higher graphical settings, and get better performance than a PS4. So at this point, if a PS4 is playable, then 4K gaming is playable.
Idle power does not start things off especially well for the R9 Fury X, though it’s not too poor either. The 82W at the wall is a distinct increase over NVIDIA’s latest cards, and even the R9 290X. On the other hand the R9 Fury X has to run a CLLC rather than simple fans. Further complicating factors is the fact that the card idles at 300MHz for the core, but the memory doesn’t idle at all. HBM is meant to have rather low power consumption under load versus GDDR5, but one wonders just how that compares at idle.
I'd like to see you guys post power consumption numbers with power to the pump cut at idle, to answer the questions you pose. I'm pretty sure the card is competitive without the pump running (but still with the fan, to keep the comparison equal). If not, it will give us more insight into what improvements AMD can make to HBM in the future with regard to power consumption. But I'd be very surprised if they haven't dealt with that during the design phase. After all, power consumption is THE defining limit for graphics performance.
Idle power consumption isn't the defining limit. The article already said that the cooler keeps the temperature low while also keeping noise levels in check. The result of keeping the temperature low is that AMD can more aggressively tune for performance per watt.
The other point which wasn't really made in the article is that the idle noise is higher but consider how many GPUs exhaust their heat into the case. That means higher case fan noise which could cancel out the idle noise difference. This card's radiator can be set to exhaust directly out of the case.
It's an engineering card as much as it is for gaming. It's a great solid modeling card with OpenCL. The way AMD is building its driver foundation will pay off big in the next quarter.
I don't know that I agree about that. Even people who game a lot probably use their computer for other things and it sucks to be using more watts while idle. That being said, the increase is not a whole lot.
Gaming is a luxury activity. People who are really concerned about power usage would, at the very least, stick with a low-wattage GPU like a 750 Ti or something and turn down the quality settings. Or, if you really want to be green, don't do 3D gaming at all.
That's not really true. I don't mind my gfx card pulling a lot of power while I'm gaming. But I want it to sip power when it's doing nothing. And since any card spends most of its time idling, idle draw is actually very important (if not most important) in overall (yearly) power consumption.
Btw, I never said that idle power consumption is the defining limit, I said power consumption is the defining limit. It's a given that any watt you save while idling is generally a watt of extra headroom when running at full power. The lower the baseline load, the more room for actual, functional (graphics) power consumption. And as it turns out, I was right in my assumption that the actual graphics card's idle power consumption, minus the cooler pump, is competitive with nVidia's.
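A rough yearly-energy sketch makes the idle-versus-load trade-off concrete; the hours, wattages and electricity price below are illustrative assumptions, not measurements from the review, and you can plug in your own usage pattern to see which term dominates:

# Annual GPU energy = idle draw * idle hours + load draw * gaming hours.
IDLE_HOURS_PER_DAY = 10     # assumed: PC on but not gaming
GAME_HOURS_PER_DAY = 2      # assumed gaming time
PRICE_PER_KWH = 0.15        # assumed electricity price, $/kWh

def yearly_kwh(idle_w, load_w):
    return (idle_w * IDLE_HOURS_PER_DAY + load_w * GAME_HOURS_PER_DAY) * 365 / 1000

higher_idle = yearly_kwh(idle_w=15, load_w=275)   # hypothetical card with higher idle draw
lower_idle = yearly_kwh(idle_w=5, load_w=275)     # hypothetical card with lower idle draw
print(f"{higher_idle:.0f} vs {lower_idle:.0f} kWh/year, "
      f"~${(higher_idle - lower_idle) * PRICE_PER_KWH:.2f}/year difference")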
As an analyst, I Guarantee AMD's Success by taking the following simple steps:
1. To Stop wasting money on R&D investments altogether, at once.
2. To employ a bunch of marketers like Chizow, N7, AMDesperate, . . . to Spread Rumors and bash the best products of the competition, constantly.
3. To Invest the saved money (R&D money wasted on new techs like HBM, the low-level API Mantle, the premium water cooler, etc, etc) in Hardware Review sites to Magnify your products' Strengths and the competition's Weaknesses. (Note: Consumers won't judge your product against the competition in practice, They just accept what they see in Hardware Review sites & Forums)
I gave this advice to some companies in the past, and believe me, one has the best CPU out there, and the other makes the best GPU. Innovation is not a fruit of R&D, it's a Marketing FRUIT.
Astroturfing got Samsung smacked with a penalty, but a smart company would hire astroturfers who are good at disguising their bias, not obvious trolls.
AMD's only hope left is that a company with better lithography, like Samsung for example, buys it entirely. You're welcome, Samsung. Hope you will not forget my, as always, brilliant advice.
Perhaps because the 970 should have been withdrawn from the market for fraud? It should have been relabeled the 965 and consumers who bought one should have been offered more than just a refund.
Of course this is nonsense, if the 970 launched at its corrected specs, would you have a problem with its product placement? Of course not. But let's all act as if this is the first and last time a cut down ASIC is sold at a lower price:performance segment nonetheless!
Right, because that 0.5GB partition really hindered its performance, lol. Let's face it, the 970 is an excellent performer with more VRAM than Nvidia's last-gen top dog (the 780 Ti), performing within 15% of Nvidia's top-tier GTX 980 for $200 less... what more is there to say?
Even better, there are various vendors that sell a short version of the GTX970 (including Asus and Gigabyte for example), so it can take on the Nano card directly, as a good choice for a mini-ITX based HTPC. And unlike the Nano, the 970 DOES have HDMI 2.0, so you can get 4k 60 Hz on your TV.
Also, I understand it's a little early, but I thought this card was supposed to blow the GTX 980 Ti out of the water with its new memory. The performance-to-price ratio is decent, but I was expecting a bit larger jump in performance. Perhaps with driver updates things will change.
Hum, unless I missed it, I didn't see any mention of the fact that this card only supports DX12 level 12_0, where nVidia's 9xx-series support 12_1. That, combined with the lack of HDMI 2.0 and the 4 GB limit, makes the Fury X into a poor choice for the longer term. It is a dated architecture, pumped up to higher performance levels.
Whilst it's beyond me why they skimped on HDMI 2.0 - there are adapters if you really want to run this card on a TV. It's not such a huge drama though; these cards will be driving DP monitors in the vast majority of cases, so I'm much more sad about the missing DVI out.
I think the reason why there's no HDMI 2.0 is simple: they re-used their dated architecture, and did not spend time on developing new features, such as HDMI 2.0 or 12_1 support.
With nVidia already having this technology on the market for more than half a year, AMD is starting to drop behind. They were losing sales to nVidia, and their new offerings don't seem compelling enough to regain their lost marketshare, hence their profits will be limited, hence their investment in R&D for the next generation will be limited. Which is a problem, since they need to invest more just to get where nVidia already is. It looks like they may be going down the same downward spiral as their CPU division.
Well at least AMD aren't cheating by allowing the driver to remove AF despite what settings are selected in game. Just so they can win benchmarks. How about some fair, like for like benchmarking and see where these cards really stand.
As for the consoles having 8 GB of RAM, not only is that shared, but the OS uses 3 GB to 3.5 GB, meaning there is only a max of 5 GB for the games on those consoles. A typical PC being used with this card will have 8 to 16 GB plus the 4 GB on the card. Giving a total of 12 GB to 20 GB.
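A trivial sketch of the memory-budget comparison being made there (the OS reservation and the PC configuration are the commenter's own figures, treated here as assumptions):

# Usable game memory: shared console pool vs. a PC pairing this card with typical system RAM.
console_usable_gb = 8.0 - 3.5          # assumed ~3-3.5 GB OS reservation -> roughly 4.5-5 GB left
pc_total_gb = 16.0 + 4.0               # assumed 16 GB system RAM plus the card's 4 GB of HBM
print(f"Console: ~{console_usable_gb:.1f} GB shared; PC: {pc_total_gb:.0f} GB total (4 GB dedicated VRAM)")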
In all honesty at 4K resolutions, how important is Anti-Aliasing on the eye? I can't imagine it being necessary at all, let alone 4xMSAA.
Anti-aliasing is required for the same reason that no AA still sticks out on 3D titles on an iPad, but in my experience with a 32-inch 4K Asus, post-process AA (SMAA, FXAA) does the job just fine.
I sincerely hope the Overclocking limitations are related to software, a $1000 card with liquid cooling ought to be able to pull higher clocks than that...
Out of curiosity, is it really possible for an Xbox One/PlayStation 4 game to take up over 4GB of memory just for graphics, since just 5GB total are usable for games?
When ported to PC, yes. That is because we usually get enhanced graphics settings that they do not.
PC ports are also less efficient because of low-budget porting efforts, which just compounds the issue more.
Computers have to be more powerful than their console counterparts in order to play equivalent games, due to sloppy coding and enhanced visual options.
I recently got an AMD Fury X, but I'm running into an issue with my games. I've tried Battlefield 4, Crysis 3, Quantum Break, ReCore, The Division, and they all have the same distortion. Any ideas, or can anyone make suggestions? I don't know how to troubleshoot this. Here is a screenshot of Crysis 3: https://vjkc5g-ch3301.files.1drv.com/y3m_mcTTTddOj...
458 Comments
Stuka87 - Thursday, July 2, 2015 - link
Thanks for all your efforts in getting this up Ryan!
nathanddrews - Thursday, July 2, 2015 - link
Worth the wait, as usual.
Refuge - Thursday, July 2, 2015 - link
Thanks for the review Ryan, I hope you are feeling better.
jay401 - Friday, July 3, 2015 - link
Hear hear!
akamateau - Tuesday, July 14, 2015 - link
Fury X CRUSHES ALL nVidia SILICON with DX12 and Mantle. Ryan knows this but he doesn't want you to know.
In fact the Radeon 290X is 33% faster than the GTX 980 Ti with BOTH DX12 and Mantle. It is equal to the Titan X.
nVidia silicon is rubbish with DX12!!!
http://wccftech.com/amd-r9-290x-fast-titan-dx12-en...
http://www.eteknix.com/amd-r9-290x-goes-head-to-he...
Refuge - Thursday, July 23, 2015 - link
Those are draw calls, that isn't how you compare GPUs. lol.
Thatguy97 - Thursday, July 2, 2015 - link
Finally
krumme - Friday, July 3, 2015 - link
A good, thoughtful, balanced review. From a person that clearly cares for gfx development and us as consumers. And that's what matters.
Thatguy97 - Friday, July 3, 2015 - link
Indeed
LiviuTM - Saturday, July 4, 2015 - link
You can say that again.
Samus - Saturday, July 4, 2015 - link
Being an NVidia use for 3 generations, I'm finding it hard to ignore this cards value, especially since I've invested $100 each on my last two NVidia cards (including my SLI setup) adding liquid cooling. The brackets alone are $30.Even if this card is less efficient per watt than NVidia's, the difference is negligible when considering kw/$. It's like comparing different brand of LED bulbs, some use 10-20% less energy but the overall value isn't as good because the more efficient ones cost more, don't dim, have a light buzz noise, etc.
After reading this review I find the Fury X more impressive than I otherwise would have.
Alexvrb - Sunday, July 5, 2015 - link
Yeah a lot of reviews painted doom and gloom but the watercooler has to be factored into that price. Noise and system heat removal of the closed loop cooler are really nice. I still think they should launch the vanilla Fury at $499 - if it gets close to the performance of the Fury X they'll have a decent card on their hands. To me though the one I'll be keeping an eye out for is Nano. If they can get something like 80% of the performance at roughly half the power, that would make a lot of sense for more moderately spec'd systems. Regardless of what flavor, I'll be interested to see if third parties will soon launch tools to bump the voltage up and tinker with HBM clocks.chizow - Monday, July 6, 2015 - link
Water cooling if anything has proven to be a negative so far for Fury X with all the concerns of pump whine and in the end where is the actual benefit of water cooling when it still ends up slower than 980Ti with virtually no overclocking headroom?Based on Ryan's review Fury Air we'll most likely see the downsides of leakage on TDP and its also expected to be 7/8th SP/TMU. Fury Nano also appears to be poised as a niche part that will cost as much if not more than Fury X, which is amazing because at 80-85% of Fury X it won't be any faster than the GTX 980 at 1440p and below and right in that same TDP range too. It will have the benefit of form factor but will that be enough to justify a massive premium?
Alexvrb - Monday, July 6, 2015 - link
You can get a bad batch of pumps in any CLC. Cooler Master screwed up (and not for the first time!) but the fixed units seem to be fine and for the units out there with a whine just RMA them. I'm certainly not going to buy one, but I know people that love water cooled components and like the simplicity and warranty of a CL system.Nobody knows the price of the Nano, nor final performance. I think they'd be crazy to price it over $550 even factoring in the form factor - unless someone releases a low-profile model, then they can charge whatever they want for it. We also don't know final performance of Fury compared to Fury X, though I already said they should price it more aggressively. I don't think leakage will be that big of an issue as they'll probably cap thermals. Clocks will vary depending on load but they do on Maxwell too - it's the new norm for stock aircooled graphics cards.
As for overclocking, yeah that was really terrible. Until people are able to tinker with voltage controls and the memory, there's little point. Even then, set some good fan profiles.
Refuge - Thursday, July 23, 2015 - link
To be honest, the whine I've seen on these isn't anything more than any other CLC I've ever seen in the wild. I feel like this was blown a bit out of proportion. Maybe I'm going deaf, maybe I didn't see a real example. I'm not sure.
tritiumosu3 - Thursday, July 2, 2015 - link
"AMD Is nothing if not the perineal underdog"...
perineal =/= perennial! You should probably fix that...
Ryan Smith - Thursday, July 2, 2015 - link
Thanks. Fixed. It was right, and then the spell-checker undid things on me...
ddriver - Thursday, July 2, 2015 - link
I'd say after the Hecktor RuiNz fiasco, "perpetual underdog" might be more appropriate.
testbug00 - Sunday, July 5, 2015 - link
Er, what fiasco did Hector Ruiz create for AMD?
Samus - Monday, July 6, 2015 - link
I'm wondering the same thing. When Hector Ruiz left Motorola, they fell apart, and when he joined AMD, they out-engineered and out-manufactured Intel with quality control parity. I guess the fiasco would be when Hector Ruiz left AMD, because then they fell apart.chizow - Monday, July 6, 2015 - link
Oh, and also forgot his biggest mistake was vastly overpaying for ATI, leading both companies on this downward spiral of crippling debt and unrealized potential.chizow - Monday, July 6, 2015 - link
Uh...Bulldozer happened on Ruiz's watch, and he also wasn't able to capitalize on K8's early performance leadership. Beyond that he orchestrated the sale of their fabs to ATIC culminating in the usurious take or pay WSA with GloFo that still cripples them to this day. But of course, it was no surprise why he did this, he traded AMD's fabs for a position as GloFo's CEO which he was forced to resign from in shame due to insider trading allegations. Yep, Ruiz was truly a crook but AMD fanboys love to throw stones at Huang. :Dtipoo - Thursday, July 2, 2015 - link
Nooo please put it back, it was so much better with Anandtech referring to AMD as the taint :P
HOOfan 1 - Thursday, July 2, 2015 - link
At least he didn't spell it "perianal"
Wreckage - Thursday, July 2, 2015 - link
It's silly to paint AMD as the underdog. It was not that long ago that they were able to buy ATI (a company that was bigger than NVIDIA). I remember at the time a lot of people were saying that NVIDIA was doomed and could never stand up to the might of a combined AMD + ATI. AMD is not the underdog, AMD got beat by the underdog.
Drumsticks - Thursday, July 2, 2015 - link
I mean, AMD has a market cap of ~2B, compared to 11B of Nvidia and ~140B of Intel. They also have only ~25% of the dGPU market I believe. While I don't know a lot about stocks and I'm sure this doesn't tell the whole story, I'm not sure you could ever sell Nvidia as the underdog here.
Kjella - Thursday, July 2, 2015 - link
Sorry but that is plain wrong as nVidia wasn't just bigger than ATI, they were bigger than AMD. Their market cap in Q2 2006 was $9.06 billion, on the purchase date AMD was worth $8.84 billion and ATI $4.2 billion. It took a massive cash/stock deal worth $5.6 billion to buy ATI, including over $2 billion in loans. AMD stretched to the limit to make this happen, three days later Intel introduced the Core 2 processor and it all went downhill from there as AMD couldn't invest more and struggled to pay interest on falling sales. And AMD made an enemy of nVidia, which Intel could use to boot nVidia out of the chipset/integrated graphics market by not licensing QPI/DMI with nVidia having nowhere to go. It cost them $1.5 billion, but Intel has made back that several times over since.
kspirit - Thursday, July 2, 2015 - link
That was pretty savage of Intel, TBH. I'm impressed.
Iketh - Monday, July 6, 2015 - link
or you could say AMD purposely finalized the purchase just before Core2 was introduced... after Core2, the purchase would have been impossible
Wreckage - Thursday, July 2, 2015 - link
http://money.cnn.com/2006/07/24/technology/nvidia_...
AMD was worth $8.5B and ATI was worth $5B at the time of the merger making them worth about twice what NVIDIA was worth at the time ($7B)
In 2004 NVIDIA had a market cap of $2.4B and ATI was at $4.3B nearly twice.
http://www.tomshardware.com/news/nvidias-market-sh...
NVIDIA was the underdog until the combined AMD+ATI collapsed and lost most of their value. They are Goliath brought down by David.
anandreader106 - Thursday, July 2, 2015 - link
@Wreckage Not quite. Cash reserves play a role in evaluating a company's net worth. When AMD acquired ATI, they spent considerable money to do so and plunged themselves into debt. The resulting valuation of AMD was not simply the combined valuations of AMD and ATI pre-acquisition. Far from it.
AMD is the undisputed underdog in 2015, and has been for many years before that. That is why Ryan gave so much praise to AMD in the article. For them to even be competitive at the high end, given their resources and competition, is nothing short of impressive.
If you cannot at least acknowledge that, then your view on this product and the GPU market is completely warped. As consumers we are all better off with a Fury X in the market.
Yojimbo - Thursday, July 2, 2015 - link
Yes, NVIDIA was definitely the underdog at the time of the AMD purchase of ATI. Many people were leaving NVIDIA for dead. NVIDIA had recently lost its ability to make chipsets for Intel processors, and after AMD bought ATI it was presumed (rightly so) that NVIDIA would no longer be able to make chipsets for AMD processors. It was thought that the discrete GPU market might dry up with fusion CPU/GPU chips taking over the market.chizow - Thursday, July 2, 2015 - link
Yep, I remember after the merger happened most AMD/ATI fans were rejoicing as they felt it would spell the end of both Nvidia and Intel, Future is Fusion and all that promise lol. Many like myself were pointing out the fact AMD overpayed for ATI and that they would collapse under the weight of all that debt given ATI's revenue and profits didn't come close to justifying the purchase price.My how things have played out completely differently! It's like the incredible shrinking company. At this point it really is in AMD and their fan's best interest if they are just bought out and broken up for scraps, at least someone with deep pockets might be able to revive some of their core products and turn things around.
Ranger101 - Friday, July 3, 2015 - link
Well done Mr Smith. I would go so far as to say THE best Fury X review on the internet, bar none. The most important ingredient is BALANCE. Something that other reviews sorely lack.
In particular the PCPer and HardOCP articles read like they were written by the green goblin himself and consequently suffer a MASSIVE credibility failure.
Yes, Nvidia has a better performing card in the 980 Ti, but it was refreshing to see credit given to AMD where it was due. Only dolts and fanatical AMD haters (I'm not quite sure what category chizow falls into, probably both and a third, "Nvidia shill") would deny that we need AMD AND Nvidia for the consumer to win.
Thanks Anandtech.
Michael Bay - Friday, July 3, 2015 - link
Except chizow never stated he wishes to see AMD dead. I guess it's your butthurt talking.
chizow - Friday, July 3, 2015 - link
Yep, just AMD fanboys ;)
"What's Left of AMD" can keep making SoCs and console APUs or whatever other widgets under the umbrella of some monster conglomerate like Samsung, Qualcomm or Microsoft and I'm perfectly OK with that. Maybe I'll even buy an AMD product again.
medi03 - Sunday, July 5, 2015 - link
"AMD going away won't matter to anyone but their few remaining devout fanboys'So kind (paid?) nVidia troll chizow is.
chizow - Monday, July 6, 2015 - link
@medi03 no worries, I look forward to the day (unpaid?) AMD fantrolls like you can free yourselves from the mediocrity that is AMD.
chizow - Friday, July 3, 2015 - link
Yet, still 3rd rate. The overwhelming majority of the market has gone on just fine without AMD being relevant in the CPU market, and recently, the same has happened in the GPU market. AMD going away won't matter to anyone but their few remaining devout fanboys like Ranger101.
piiman - Friday, July 3, 2015 - link
"AMD going away won't matter to anyone but their few remaining devout fanboys"
Hmmm, you'll think differently when GPU prices go up up up. Competition is good for consumers and without it you will pay more, literally.
chizow - Sunday, July 5, 2015 - link
@piiman - I guess we'll see soon enough, I'm confident it won't make any difference given GPU prices have gone up and up anyways. If anything we may see price stabilization as we've seen in the CPU industry.
medi03 - Sunday, July 5, 2015 - link
Another portion of bullshit from the nVidia troll. AMD never ever had more than 25% of CPU share. Doom to Intel, my ass.
Even in Prescott times Intel was selling more CPUs, and at higher prices.
chizow - Monday, July 6, 2015 - link
@medi03 AMD was up to 30% a few times and they certainly did have performance leadership at the time of K8, but of course they wanted to charge for the privilege. Higher price? No, $450 for an entry level Athlon 64, much more than what they charged in the past and certainly much more than Intel was charging at the time, going up to $1500 on the high end with their FX chips.
Samus - Monday, July 6, 2015 - link
Best interest? Broken up for scraps? You do realize how important AMD is to people who are Intel/NVidia fans, right?
Without AMD, Intel and NVidia are unchallenged, and we'll be back to paying $250 for a low-end video card and $300 for a mid-range CPU. There would be no GTX 750s or Pentium G3258s in the <$100 tier.
chizow - Monday, July 6, 2015 - link
@Samus, they're irrelevant in the CPU market and have been for years, and yet amazingly, prices are as low as ever since Intel began dominating AMD in performance when they launched Core 2. Since then I've upgraded 5x and have not paid more than $300 for a high-end Intel CPU. How does this happen without competition from AMD as you claim? Oh right, because Intel is still competing with itself and needs to provide enough improvement to entice me to buy another one of their products and "upgrade".
The exact same thing will happen in the GPU sector, with or without AMD. Not worried at all, in fact I'm looking forward to the day a company with deep pockets buys out AMD and reinvigorates their products, I may actually have a reason to buy AMD (or whatever it is called after being bought out) again!
Iketh - Monday, July 6, 2015 - link
You overestimate the human drive... if another isn't pushing us, we will get lazy, and that's not an argument... what we'll do instead to make people upgrade is release products in steps planned out much further into the future, in even smaller steps than how Intel is releasing now.
silverblue - Friday, July 3, 2015 - link
I think this chart shows a better view of who was the underdog and when:
http://i59.tinypic.com/5uk3e9.jpg
ATi were ahead for the 9xxx series, and that's it. Moreover, NVIDIA's chipset struggles with Intel were in 2009 and settled in early 2011, something that would've benefitted NVIDIA far more than Intel's settlement with AMD as it would've done far less damage to NVIDIA's financials over a much shorter period of time.
The lack of higher end APUs hasn't helped, nor has the issue with actually trying to get a GPU onto a CPU die in the first place. Remember that when Intel tried it with Clarkdale/Arrandale, the graphics and IMC were 45nm, sitting alongside everything else which was 32nm.
chizow - Friday, July 3, 2015 - link
I think you have to look at a bigger sample than that, riding on the 9000 series momentum, AMD was competitive for years with a near 50/50 share through the X800/X1900 series. And then G80/R600 happened and they never really recovered. There was a minor blip with Cypress vs. Fermi where AMD got close again but Nvidia quickly righted things with GF106 and GF110 (GTX 570/580).Scali - Tuesday, July 7, 2015 - link
nVidia wasn't the underdog in terms of technology. nVidia was the choice of gamers. ATi was big because they had been around since the early days of CGA and Hercules, and had lots of OEM contracts.In terms of technology and performance, ATi was always struggling to keep up with nVidia, and they didn't reach parity until the Radeon 8500/9700-era, even though nVidia was the newcomer and ATi had been active in the PC market since the mid-80s.
Frenetic Pony - Thursday, July 2, 2015 - link
Well done analysis, though the kick in the head was Bulldozer and its utter failure. Core 2 wasn't really AMD's downfall so much as Core/Sandy Bridge, which came at the exact wrong time for the utter failure of Bulldozer. This combined with AMD's dismal failure to market its graphics cards has cost them billions. Even this article calls the 290X problematic, a card that offered the same performance as the original Titan at a fraction of the price. Based on empirical data the 290/X should have sold almost continuously until the introduction of Nvidia's Maxwell architecture.
Instead people continued to buy the much less performant-per-dollar Nvidia cards and/or waited for "the good GPU company" to put out their new architecture. AMD's performance in marketing has been utterly appalling at the same time Nvidia's has been extremely tight. Whether that will, or even can, change next year remains to be seen.
bennyg - Saturday, July 4, 2015 - link
Marketing performance. Exactly.
Except efficiency was not good enough across the generations of 28nm GCN, in an era where efficiency plus thermal/power limits constrain performance; look what Nvidia did over a similar era from Fermi (which was on the market when GCN 1.0 was released) to Kepler to Maxwell. Plus efficiency is kind of the ultimate marketing buzzword in all areas of tech, and not having any ability to mention it (plus having generally inferior products) hamstrung their marketing all along.
xenol - Monday, July 6, 2015 - link
Efficiency is important because of three things:
1. If your TDP is through the roof, you'll have issues with your cooling setup. Any time you introduce a bigger cooling setup because your cards run that hot, you're going to be mocked for it and people are going to be wary of it. With 22nm or 20nm nowhere in sight for GPUs, efficiency had to be a priority, otherwise you're going to ship cards that take up three slots or ship with water coolers.
2. You also can't just play to the desktop market. Laptops are still the preferred computing platform and even if people are going for a desktop, AIOs are looking much more appealing than a monitor/tower combo. So you want to have any shot in either market, you have to build an efficient chip. And you have to convince people they "need" this chip, because Intel's iGPUs do what most people want just fine anyway.
3. Businesses and such with "always on" computers would like it if their computers ate less power. Even if you can save a handful of watts, multiplying that by thousands and they add up to an appreciable amount of savings.
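To put rough numbers on that third point, here is a minimal sketch; the fleet size, per-machine savings, duty cycle and electricity rate are all illustrative assumptions:

# Fleet-wide savings from shaving a few watts off always-on machines.
WATTS_SAVED = 10            # assumed savings per machine
MACHINES = 5_000            # assumed fleet size
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12        # assumed commercial rate, $/kWh

kwh_saved = WATTS_SAVED * MACHINES * HOURS_PER_YEAR / 1000      # 438,000 kWh/year
print(f"{kwh_saved:,.0f} kWh/year -> ${kwh_saved * PRICE_PER_KWH:,.0f}/year saved")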
xenol - Monday, July 6, 2015 - link
(Also, by "computing platform" I mean the platform people choose when they want a computer.)
medi03 - Sunday, July 5, 2015 - link
ATI is the reason both Microsoft and Sony use AMD's APUs to power their consoles.
It might be the reason why APUs even exist.
tipoo - Thursday, July 2, 2015 - link
That was then, this is now. Now AMD, together with the acquisition, has a lower market cap than Nvidia.
Murloc - Thursday, July 2, 2015 - link
yeah, no.
ddriver - Thursday, July 2, 2015 - link
ATI wasn't bigger, AMD just paid a preposterous and entirely unrealistic amount of money for it. Soon after the merger, AMD + ATI was worth less than what they paid for the latter, ultimately leading to the loss of its foundries, putting it in an even worse position. Let's face it, AMD was, and historically has always been, betrayed; its sole purpose is to create the illusion of competition so that the big boys don't look bad for running unopposed, even if this is what happens in practice.
Just when AMD got lucky with Athlon, a mole was sent to make sure AMD stays down.
testbug00 - Sunday, July 5, 2015 - link
The foundries didn't go because AMD bought ATI. That might have accelerated it by a few years, however. AMD's foundry issues and costs date back to the 1990s and 2000-2001.
5150Joker - Thursday, July 2, 2015 - link
True, AMD was in a much better position in 2006 vs NVIDIA, they just got owned.
3DVagabond - Friday, July 3, 2015 - link
When was Intel the underdog? Because that's who's knocked them down (they aren't out yet).
chizow - Friday, July 3, 2015 - link
While Intel wasn't the underdog in terms of marketshare, they were in terms of technology and performance with the Athlon 64 vs. Pentium 4. Intel had a dog on their hands that they managed to weather the storm with, until AMD got Conroe'd in 2006. Now, they are down and most likely out, as Zen, even if it delivers as promised (a 40% IPC increase just isn't enough), will take years to gain traction in the CPU market. Time AMD simply does not have, especially given how far behind they've fallen in the dGPU market.
nikaldro - Friday, July 3, 2015 - link
That's 40% over Excavator, so about 60% (?) over Vishera.
If they manage to get good enough IPC on 8 cores, at a good price, they may really make a comeback.
chizow - Monday, July 6, 2015 - link
Well, best of luck with this. :)
bgo - Friday, July 3, 2015 - link
Well, during the P4 era Intel bribed OEMs not to use Athlon chips, which they later had to pay AMD $1.25bn for. While one could argue the monetary losses may have been partially made up for, the settlement came at the end of 2009, so too little too late. Intel bought themselves time with their bribes, and that's what really enabled them to weather the storms.
chizow - Monday, July 6, 2015 - link
No, if you read the actual results and AMD's own testimony, they couldn't produce enough chips and offer them at a low enough price to OEMs compared to what Intel was just giving away as subsidies.
piiman - Friday, July 3, 2015 - link
"AMD got beat by the underdog."
And that makes them the underdog now.
boozed - Thursday, July 2, 2015 - link
"Mantle is essentially depreciated at this point"Deprecated, surely?
chrisgon - Thursday, July 2, 2015 - link
"Which is not say I’m looking to paint a poor picture of the company – AMD Is nothing if not the perineal underdog who constantly manages to surprise us with what they can do with less"I think the word you were looking for is perennial. Unless you truly meant to refer to AMD as the taint.
ArKritz - Thursday, July 2, 2015 - link
I think there's about a 50-50 chance of that...
ingwe - Thursday, July 2, 2015 - link
Thanks for the effort that went into this article. I hope you are feeling better.
In general this article makes me continue to feel sad for AMD.
UltraWide - Thursday, July 2, 2015 - link
What took so long? :)
Ryan Smith - Thursday, July 2, 2015 - link
I was out of commission for a week with a nasty cold (which I likely picked up from the AMD trip).
CajunArson - Thursday, July 2, 2015 - link
"(which I likely picked up from the AMD trip)."
It seems AMD has stooped to using biological warfare!
chizow - Thursday, July 2, 2015 - link
Hope you are feeling better
Ranger101 - Friday, July 3, 2015 - link
Here we go, Chizoo once again attempting to align himself with the reviewer, as he does at PCPer, no doubt attempting to gain much needed credibility. I'm afraid it's too late bro, as you are universally recognised as a psychotic AMD hater. But feigning concern for the reviewer's health is a laughable cheap shot, even by your bottom feeding standards.
Michael Bay - Friday, July 3, 2015 - link
Boy, is your behind in flames!
chizow - Friday, July 3, 2015 - link
lmfao, yeah, wishing someone who was sick well is now some covert attempt to woo favors towards my sekret Nvidia agenda.
You're an idiot, delete your account from the internet and quietly wait in the corner with your tinfoil dunce cap until AMD implodes, thanks.
Ranger101 - Friday, July 10, 2015 - link
Good to see you charging in to defend Chizoo there Michael, he does need help. The point is, dearest Chizoo, you can't on the one hand claim @ PCPer that Ryan Smith conveniently got sick to avoid raining on the Fury like other websites, and then pretend that you are concerned about his health @ Anandtech. It's too late for damage control bro, people are now well aware of your mentality, or lack thereof.
Wreckage - Thursday, July 2, 2015 - link
The irony of getting a cold from AMD
saru44 - Friday, July 3, 2015 - link
lol :D
Navvie - Thursday, July 2, 2015 - link
"Which is not say I’m looking" (paragraph 5, first line).
Missing a "to" I think.
watzupken - Thursday, July 2, 2015 - link
Brilliant review. Well worth the wait. Thanks Ryan.
Taracta - Thursday, July 2, 2015 - link
ROPs, ROPs, ROPs! How can they ~double everything else, keep the same number of ROPs, and expect to win?
Thatguy97 - Thursday, July 2, 2015 - link
maybe something to do with cost or yield
tipoo - Thursday, July 2, 2015 - link
They literally hit the size limits interposers can scale up to with this chip - so they can't make it any bigger to pack more transistors for more ROPs, until a die shrink. So they decided on a tradeoff, favouring other things over ROPs.
Kevin G - Thursday, July 2, 2015 - link
They had a monster shader count and likely would be fine if they went to 3840 max to make room for more ROPs. 96 or 128 ROPs would have been impressive and really made this chip push lots of pixels. With HBM and the new delta color compression algorithm, there should be enough bandwidth to support these additional ROPs without bottlenecking them.
AMD also scaled the number of TMUs with the shaders, but it likely wouldn't have hurt to have increased them by 50% too. Alternatively AMD could have redesigned the TMUs to have better 16-bit-per-channel texture support. Either of these changes would have put the texel throughput well beyond the GM200's theoretical throughput. I have a feeling that this is one of the bottlenecks that helps the GM200 pull ahead of Fiji.
tipoo - Friday, July 3, 2015 - link
Not saying it was the best tradeoff - just explaining. They quite literally could not go bigger in this case.
testbug00 - Sunday, July 5, 2015 - link
The performance scaling as resolution increases is better than Nvidia's, implying the ROPs aren't the bottleneck...
chizow - Sunday, July 5, 2015 - link
No, that implies the shaders are the bottleneck at higher resolutions while ROP/fillrate/geometry remained constant. Nvidia's bottleneck at lower resolutions isn't the shaders, and their higher ROP/fillrate allows them to realize this benefit in actual FPS, while AMD's ROPs are saturated and simply can't produce more frames.
Ryan Smith - Thursday, July 2, 2015 - link
Right now there's not a lot of evidence for R9 Fury X being ROP limited. The performance we're seeing does not have any tell-tale signs of being ROP-bound, only hints here and there that may be the ROPs, or could just as well be the front-end.
While Hawaii was due for the update, I'm not so sure we need to jump up in ROPs again so soon.
chizow - Thursday, July 2, 2015 - link
What about geometry Ryan? ROPs are often used interchangeably with the Geometry/Set-up engine, and there is definitely something going on with Fury X at lower resolutions: in instances where SP performance is no problem, it just can't draw/fill pixels fast enough and performs VERY similarly to previous gen or weaker cards (290X/390X and 980). TechReport actually has quite a few theoreticals that show this, where their pixel fill is way behind GM200 and much closer to Hawaii/GM204.
extide - Thursday, July 2, 2015 - link
Yeah my bet is on Geometry. Check out the Synthetics page. It owns the Pixel and Texel fillrate tests, but loses on the Tessellation test which has a large dependency on geometry. nVidia has also been historically very strong with geometry.
CajunArson - Thursday, July 2, 2015 - link
Thanks for the review! While the conclusions aren't really any different than all the other reputable review sites on the Interwebs, you were very thorough and brought an interesting perspective to the table too. Better late than never!
NikosD - Thursday, July 2, 2015 - link
You must use the latest nightly build of LAV filters, in order to be able to use the 4K H.264 DXVA decoder of AMD cards.
All previous builds fall back to SW mode.
tynopik - Thursday, July 2, 2015 - link
"today’s launch of the Fiji GPU"andychow - Thursday, July 2, 2015 - link
Best review ever. Worth the wait. Get sick more often!tynopik - Thursday, July 2, 2015 - link
pg 2 - compression taking palcelimitedaccess - Thursday, July 2, 2015 - link
Ryan, regarding Mantle performance back in the R9 285 review (http://www.anandtech.com/show/8460/amd-radeon-r9-2... you wrote that AMD stated the issue with performance regression was that developers had not yet optimized for Tonga's newer architecture. While here you state that the performance regression is due to AMD having not optimized on the driver side. What is the actual case? What is the actual weighting given these three categories? -
Hardware Driver
API
Software/Game
What I'm wondering is if we make an assumption that upcoming low level APIs will have similar behavior as Mantle what will happen going forward as more GPU architectures are introduced and newer games are introduced? If the onus shifts especially heavily towards the software side it it seems more realistic in practice that developers will have much more narrower scope in which optimize for.
I'm wondering if Anandtech could possibly look more indept into this issue as to how it pertains to the move towards low level APIs used in the future as it could have large implications in terms of the software/hardware support relationship going forward.
Ryan Smith - Thursday, July 2, 2015 - link
"What is the actual case? What is the actual weighting given these three categories? -"Right now the ball appears to be solidly in AMD's court. They are taking responsibility for the poor performance of certain Mantle titles on R9 Fury X.
As it stands I hesitate to read into this too much for DX12/Vulkan. Those are going to be finalized, widely supported APIs, unlike Mantle which has gone from production to retirement in the span of just over a year.
limitedaccess - Thursday, July 2, 2015 - link
Thanks for the response. I guess we will see more for certain as time moves on.
My concern is that if lower level APIs require more architecture-specific optimizations and the burden is shifted to developers, in practice that will cause some rather "interesting" implications.
Also what would be of interest is how much of reviewers test suites will still look at DX11 performance as a possible fallback should this become a possible issue.
testbug00 - Sunday, July 5, 2015 - link
You don't need architecture-specific optimizations to use DX12/Vulkan/etc. The APIs merely allow you to implement them if you choose to. You can write a DX12 game without optimizing for any GPUs (although not doing so for GCN, given the consoles are GCN, would be a tad silly).
If developers are aiming to put low level stuff in whenever they can, then the issue becomes that, due to AMD's "GCN everywhere" approach, developers may just start coding for PS4, porting that code to Xbox DX12 and then porting that to PC with higher textures/better shadows/effects. In which case Nvidia could take massive performance deficits relative to AMD due to not getting the same amount of extra performance from DX12.
Don't see that happening in the next 5 years. At least, not with most games that are console+PC and need huge performance. You may see it in a lot of Indie/small studio cross platform games however.
RG1975 - Thursday, July 2, 2015 - link
AMD is getting there, but they still have a little way to go to bring us a new "9700 Pro". That card devastated all Nvidia cards back then. That's what I'm waiting for from AMD before I switch back.
Thatguy97 - Thursday, July 2, 2015 - link
would you say amd is now the "geforce fx 5800"
piroroadkill - Thursday, July 2, 2015 - link
Everyone who bought a Geforce FX card should feel bad, because the AMD offerings were massively better. But now AMD is close to NVIDIA, it's still time to rag on AMD, huh?
That said, of course if I had $650 to spend, you bet your ass I'd buy a 980 Ti.
Thatguy97 - Thursday, July 2, 2015 - link
Oh believe me, I remember they felt bad lol, but I'm not ragging on AMD; Nvidia stole their thunder with the 980 Ti.
KateH - Thursday, July 2, 2015 - link
C'mon, Fury isn't even close to the Geforce FX level of fail. It's really hard to overstate how bad the FX5800 was, compared to the Radeon 9700 and even the Geforce 4600Ti.
The Fury X wins some 4K benchmarks, the 980Ti wins some. The 980Ti uses a bit less power but the Fury X is cooler and quieter.
Geforce FX level of fail would be if the Fury X was released 3 months from now to go up against the 980Ti with 390X levels of performance and an air cooler.
Thatguy97 - Thursday, July 2, 2015 - link
To be fair the 5950 Ultra was actually decent
Morawka - Thursday, July 2, 2015 - link
You're understating Nvidia's scores.. they won 90% of all benchmarks, not just "some". A full 120W more power under FurMark load, and they are using HBM!!
looncraz - Thursday, July 2, 2015 - link
FurMark power load means nothing, it is just a good way to stress test and see how much power the GPU is capable of pulling in a worst-case scenario and how it behaves in that scenario.
While gaming, the difference is miniscule and no one will care one bit.
Also, they didn't win 90% of the benchmarks at 4K, though they certainly did at 1440. However, the real world isn't that simple. A 10% performance difference in GPUs may as well be zero difference, there are pretty much no game features which only require a 10% higher performance GPU to use... or even 15%.
As for the value argument, I'd say they are about even. The Fury X will run cooler and quieter, take up less space, and will undoubtedly improve to parity or beyond the 980Ti in performance with driver updates. For a number of reasons, the Fury X should actually age better, as well. But that really only matters for people who keep their cards for three years or more (which most people usually do). The 980Ti has a RAM capacity advantage and an excellent - and known - overclocking capacity and currently performs unnoticeably better.
I'd also expect two Fury X cards to outperform two 980Ti cards with XFire currently having better scaling than SLI.
chizow - Thursday, July 2, 2015 - link
The differences in minimums aren't miniscule at all, and you also seem to be discounting the fact 980Ti overclocks much better than Fury X. Sure XDMA CF scales better when it works, but AMD has shown time and again, they're completely unreliable for timely CF fixes for popular games to the point CF is clearly a negative for them right now.
looncraz - Friday, July 3, 2015 - link
We don't yet know how the Fury X will overclock with unlocked voltages.
SLI is almost just as unreliable as CF, ever peruse the forums? That, and quite often you can get profiles from the wild wired web well before the companies release their support - especially on AMD's side.
chizow - Friday, July 3, 2015 - link
@looncraz
We do know Fury X is an exceptionally poor overclocker at stock and already uses more power than the competition. Whose fault is it that we don't have proper overclocking capabilities when AMD was the one who publicly claimed this card was an "Overclocker's Dream"? Maybe they meant you could overclock it, in your dreams?
SLI is not as unreliable as CF, Nvidia actually offers timely updates on Day 1 and works with the developers to implement SLI support. In cases where there isn't a Day 1 profile, SLI has always provided more granular control over SLI profile bits vs. AMD's black box approach of a loadable binary, or wholesale game profile copies (which can break other things, like AA compatibility bits).
silverblue - Friday, July 3, 2015 - link
No, he did actually mention the 980Ti's excellent overclocking ability. Conversely, at no point did he mention Fury X's overclocking ability, presumably because there isn't any.
Refuge - Friday, July 3, 2015 - link
He does mention it, and does say that it isn't really possible until they get a modified BIOS with unlocked voltages.
e36Jeff - Thursday, July 2, 2015 - link
First off, it's 81W, not 120W (467-386). Second, unless you are running FurMark as your screen saver, it's pretty irrelevant. It merely serves to demonstrate the maximum amount of power the GPU is allowed to use (and given that the 980 Ti's is 1W less than in gaming, it indicates it is being artificially limited because it knows it's running FurMark).
The important power number is the in-game power usage, where the gap is 20W.
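To picture the "maximum amount of power the GPU is allowed to use" point: a board power limit is essentially a small control loop that trims the boost clock whenever measured power exceeds the cap, regardless of which application is running. The sketch below is hypothetical Python, not AMD's or NVIDIA's actual firmware logic; the 250W cap, clock range, and step size are illustrative assumptions only.

```python
# Hypothetical sketch of a board power limit: the governor nudges the boost
# clock down whenever measured board power exceeds the limit, and nudges it
# back up when there is headroom. Real GPUs do this in firmware with far more
# inputs (temperature, voltage, per-rail sensors); all numbers here are made up.

POWER_LIMIT_W = 250.0   # assumed board power limit
CLOCK_MIN_MHZ = 1000.0
CLOCK_MAX_MHZ = 1200.0
STEP_MHZ = 13.0         # one clock "bin" per control tick (illustrative)

def next_clock(current_clock_mhz: float, measured_power_w: float) -> float:
    """Return the boost clock for the next control interval."""
    if measured_power_w > POWER_LIMIT_W:
        # Over the limit: drop one bin, no matter which app caused it.
        return max(CLOCK_MIN_MHZ, current_clock_mhz - STEP_MHZ)
    # Under the limit: climb back toward the maximum boost clock.
    return min(CLOCK_MAX_MHZ, current_clock_mhz + STEP_MHZ)

# A power virus like FurMark simply pushes measured power to the cap sooner,
# so the clock settles lower -- no application detection is required.
clock = CLOCK_MAX_MHZ
for measured in (230, 248, 262, 259, 251, 247):
    clock = next_clock(clock, measured)
    print(f"power {measured:3d} W -> next clock {clock:.0f} MHz")
```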
Ryan Smith - Thursday, July 2, 2015 - link
There is no "artificial" limiting on the GTX 980 Ti in FurMark. The card has a 250W limit, and it tends to hit it in both games and FurMark. Unlike the R9 Fury X, NVIDIA did not build a bunch of thermal/electrical headroom into the reference design.
kn00tcn - Thursday, July 2, 2015 - link
because furmark is normal usage right!? hbm magically lowers the gpu core's power right!? wtf is wrong with you
nandnandnand - Thursday, July 2, 2015 - link
AMD's Fury X has failed. 980 Ti is simply better.
In 2016 NVIDIA will ship GPUs with HBM version 2.0, which will have greater bandwidth and capacity than these HBM cards. AMD will be truly dead.
looncraz - Friday, July 3, 2015 - link
You do realize HBM was designed by AMD with Hynix, right? That is why AMD got first dibs.
Want to see that kind of innovation again in the future? You best hope AMD sticks around, because they're the only ones innovating at all.
nVidia is like Apple, they're good at making pretty looking products and throwing the best of what others created into making it work well, then they throw their software into the mix and call it a premium product.
Intel hasn't innovated on the CPU front since the advent of the Pentium 4. Core-branded CPUs are derived from the Pentium M, which was derived from the Pentium Pro.
Kutark - Friday, July 3, 2015 - link
Man you are pegging the hipster meter BIG TIME. Get serious. "Intel hasn't innovated on the CPU front since the advent of the Pentium 4..." That has to be THE dumbest shit I've read in a long time.
Say what you will about Nvidia, but Maxwell is a pristinely engineered chip.
While i agree with you that AMD sticking around is good, you can't be pissed at nvidia if they become a monopoly because AMD just can't resist buying tickets on the fail train...
chizow - Friday, July 3, 2015 - link
Pretty much. AMD supporters/fans/apologists love to parrot the meme that Intel hasn't innovated since the original i7 or whatever, and while development there has certainly slowed, we have a number of 18-core E5-2699 v3 servers in my data center at work, Broadwell Iris Pro iGPUs that handily beat AMD APUs and approach low-end dGPU perf, and ultrabooks and tablets that run on fanless 5W Core M CPUs. Oh, and I've also managed to find meaningful desktop upgrades every few years for no more than $300 since Core 2 put me back in Intel's camp for the first time in nearly a decade.
looncraz - Friday, July 3, 2015 - link
None of what you stated is innovation, merely minor evolution. The core design is the same, gaining only ~5% or so IPC per generation, same basic layouts, same basic tech. Are you sure you know what "innovation" means?
Bulldozer modules were an innovative design. A failure, but still very innovative. Pentium Pro and Pentium 4 were both innovative designs, both seeking performance in very different ways.
Multi-core CPUs were innovative (AMD), HBM is innovative (AMD+Hynix), multi-GPU was innovative (3dfx), SMT was innovative (IBM, Alpha), CPU+GPU was innovative (Cyrix, IIRC)... you get the idea.
Doing the exact same thing, more or less the exact same way, but slightly better, is not innovation.
chizow - Sunday, July 5, 2015 - link
Huh? So putting Core-level performance in a passive design that is as thin as a legal pad and has 10 hours of battery life isn't innovation?
Increasing iGPU performance to the point that it provides not only top-end CPU performance but close to dGPU performance, while convincingly beating AMD's entire reason for buying ATI - their Fusion APUs - isn't innovation?
And how about the data center where Intel's *18* core CPUs are using the same TDP and sockets, in the same U rack units as their 4 and 6 core equivalents of just a few years ago?
Intel is still innovating in different ways, that may not directly impact the desktop CPU market but it would be extremely ignorant to claim they aren't addressing their core growth and risk areas with new and innovative products.
I've bought more Intel products in recent years vs. prior strictly because of these new innovations that are allowing me to have high performance computing in different form factors and use cases, beyond being tethered to my desktop PC.
looncraz - Friday, July 3, 2015 - link
Show me Intel CPU innovations since the Pentium 4.
Mind you, innovations can be failures, they can be great successes, or they can be ho-hum.
P6->Core->Nehalem->Sandy Bridge->Haswell->Skylake
The only changes are evolutionary or as a result of process changes (which I don't consider CPU innovations).
This is not to say that they aren't fantastic products - I'm rocking an i7-2600k for a reason - they just aren't innovative products. Indeed, nVidia's Maxwell is a wonderfully designed and engineered GPU, and products based on it are of the highest quality and performance. That doesn't make them innovative in any way. Nothing technically wrong with that, but I wonder how long before someone else came up with a suitable RAM just for GPUs if AMD hadn't done it?
chizow - Sunday, July 5, 2015 - link
I've listed them above and despite slowing the pace of improvements on the desktop CPU side you are still looking at 30-45% improvement clock for clock between Nehalem and Haswell, along with pretty massive improvements in stock clock speed. Not bad given they've had literally zero pressure from AMD. If anything, Intel dominating in a virtual monopoly has afforded me much cheaper and consistent CPU upgrades, all of which provided significant improvements over the previous platform:
E6600 $284
Q6600 $299
i7 920 $199!
i7 4770K $229
i7 5820K $299
All cheaper than the $450 AMD wanted for their ENTRY level Athlon 64 when they finally got the lead over Intel, which made it an easy choice to go to Intel for the first time in nearly a decade after AMD got Conroe'd in 2006.
silverblue - Monday, July 6, 2015 - link
I could swear that you've posted this before.
I think the drop in prices was more of an attempt to strangle AMD than anything else. Intel can afford it, after all.
chizow - Monday, July 6, 2015 - link
Of course I've posted it elsewhere, because it bears repeating: the nonsensical meme AMD fanboys love to parrot about AMD being necessary for low prices and strong competition is a farce. I've enjoyed unparalleled stability at a similar or higher level of relative performance in the years that AMD has become UNCOMPETITIVE in the CPU market. There is no reason to expect otherwise in the dGPU market.
zoglike@yahoo.com - Monday, July 6, 2015 - link
Really? Intel hasn't innovated? I really hope you are trolling because if you believe that I fear for you.
chizow - Thursday, July 2, 2015 - link
Let's not also discount the fact that's just stock comparisons; once you overclock the cards, as many are interested in doing in this $650 bracket, especially with AMD's claims that Fury X is an "Overclocker's Dream", we quickly see the 980Ti cannot be touched by Fury X, water cooler or not.
Fury X wouldn't have been the failure it is today if not for AMD setting unrealistic and, ultimately, unattained expectations. A 390X WCE at $550-$600 and it's a solid alternative. A $650 new "Premium" brand that doesn't OC at all, has only 4GB, has pump whine issues and is slower than Nvidia's same-priced $650 980Ti that launched 3 weeks before it just doesn't get the job done after AMD hyped it from the top brass down.
andychow - Thursday, July 2, 2015 - link
Yeah, "Overclocker's dream", only overclocks by 75 MHz. Just by that statement, AMD has totally lost me.
looncraz - Friday, July 3, 2015 - link
75MHz on a factory low-volted GPU is actually to be expected. If the voltage scaled automatically, like nVidia's, there is no telling where it would go. Hopefully someone cracks the voltage lock and gets to cranking up the hertz.
chizow - Friday, July 3, 2015 - link
North of 400W is probably where we'll go, but I look forward to AMD exposing these voltage controls. It makes you wonder why they didn't release them from the outset, given they claimed the card was an "Overclocker's Dream" despite the fact it is anything but.
Refuge - Friday, July 3, 2015 - link
It isn't unlocked yet, so nobody has overclocked it yet.
chizow - Monday, July 6, 2015 - link
But but...AMD claimed it was an Overclocker's Dream??? Just another good example of what AMD says and reality being incongruent.
Thatguy97 - Thursday, July 2, 2015 - link
would you say amd is now the "geforce fx 5800"?
sabrewings - Thursday, July 2, 2015 - link
That wasn't so much due to ATI's excellence. It had a lot to do with NVIDIA dropping the ball horribly, off a cliff, into a black hole.
They learned their lessons and turned it around. I don't think either company "lost" necessarily, but I will say NVIDIA won. They do more with less: more performance with less power, fewer transistors, fewer SPs, and less bandwidth. Both cards perform admirably, but we all know the Fury X would've been more expensive had the 980 Ti not launched where it did. So, to perform arguably on par, AMD is living with smaller margins on probably smaller volume while Nvidia has plenty of volume with the 980 Ti, and their base cost is less as they're essentially using Titan X throwaway chips.
looncraz - Thursday, July 2, 2015 - link
They still had to pay for those "Titan X throwaway chips", and they cost more per chip to produce than AMD's Fiji GPU. Also, nVidia apparently couldn't cut down the GPU as much as they were planning as a response to AMD's suspected performance. Consumers win, of course, but it isn't like nVidia did something magical; they simply bit the bullet and undercut their own offerings by barely cutting down the Titan X to make the 980Ti.
That said, it is very telling that the AMD GCN architecture is less balanced in relation to modern games than the nVidia architecture; however, the GCN architecture has far more features that are going unused. That is one long-standing habit ATi and, now, AMD engineers have had: plan for the future in their current chips. It's actually a bad habit, as it uses silicon and transistors just sitting around sucking up power and wasting space for, usually, years before the features finally become useful... and then, by that time, the performance level delivered by those dormant bits is intentionally outdone by the competition to make AMD look inferior.
AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them.
AMD's only possible upcoming saving grace could be that they might be on Samsung's 14nm LPP FinFet tech at GloFo and nVidia will be on TSMC's 16nm FinFet tech. If AMD plays it right they can keep this advantage for a couple generations and maximize the benefits that could bring.
vladx - Thursday, July 2, 2015 - link
Afaik, even though TSMC's FinFET will be 16nm it's a superior process overall to GloFo's 14nm FF, so I doubt AMD will gain any advantage.
testbug00 - Sunday, July 5, 2015 - link
TSMC's FinFET 16nm process might be better than GloFo's own canceled 14XM or whatever they called it.
Better than Samsung's 14nm? Dubious. Very unlikely.
chizow - Sunday, July 5, 2015 - link
Why is it dubious? What's the biggest chip Samsung has fabbed? If they start producing chips bigger than the 100mm^2 chips for Apple, then we can talk, but as much flak as TSMC gets over delays/problems, they still produce what are arguably the world's most advanced semiconductors, right there next to Intel's biggest chips in size and complexity.
D. Lister - Thursday, July 2, 2015 - link
"AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them."
AMD is often first at announcing features. Nvidia is often first at implementing them properly. It is clever marketing vs clever engineering. At the end of the day, one gets more customers than the other.
sabrewings - Thursday, July 2, 2015 - link
While you're right that Nvidia paid for the chips used in 980 Tis, they're still most likely not fit for Titan X use and are cut to remove the underperforming sections. Without really knowing what their GM200 yields are like, I'd be willing to bet the $1000 price of the Titan X was already paying for the 980 Ti chips. So, Nvidia gets to play with binned chips to sell at $650 while AMD has to rely on fully enabled chips added to an expensive interposer, with more expensive memory and a more expensive cooling solution, to meet the same price point for performance. Nvidia definitely forced AMD into a corner here, so as I said, I would say they won.
Though I don't necessarily say that AMD lost, they just make it look much harder to do what Nvidia was already doing and making beaucoup cash at that. This only makes AMD's problems worse, as they won't get the volume to gain marketshare and they're not hitting the margins needed to heavily reinvest in R&D for the next round.
Kutark - Friday, July 3, 2015 - link
So basically what you're saying is Nvidia is a better run company with smarter people working there.
squngy - Friday, July 3, 2015 - link
"and they cost more per chip to produce than AMD's Fiji GPU."
Unless AMD has a genie making it for them that's impossible.
Not only is fiji larger, it also uses a totally new technology (HBM).
JumpingJack - Saturday, July 4, 2015 - link
"AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology being used against it only because it released it so early. HBM, I fear, will be another example of this. AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them."
AMD fanboys make it sound like AMD can actually walk on water. AMD did work with Hynix, but the magic of HBM comes in the density from die stacking, which AMD had no hand in (they are no longer the actual chipmaker, as you probably know). As for interposers, this is not new technology; interposers are well-established techniques for condensing an array of devices into one package.
AMD deserves credit for bringing the technology to market, no doubt, but their actual IP contribution is quite small.
ianmills - Thursday, July 2, 2015 - link
Good that you are feeling better Ryan, and thanks for the review :)
That being said, Anandtech needs to keep us better informed when things come up... The way this site handled it, though, is gonna lose this site readers...
Kristian Vättö - Thursday, July 2, 2015 - link
Ryan tweeted about the Fiji schedule several times and we were also open about it in the comments whenever someone asked, even though it wasn't relevant to the article in question. It's not like we were secretive about it, and I think a full article about an article delay would be a little overkill.
sabrewings - Thursday, July 2, 2015 - link
Those tweets are even featured on the site in the side bar. Not sure how much clearer it could get without an article about a delayed article.
testbug00 - Sunday, July 5, 2015 - link
A pipeline story... dunno what title, but explain it there in the text. Have a link to THG (owned by the same company now) if readers want to read a review immediately.
Twitter is non-ideal.
funkforce - Monday, July 6, 2015 - link
The problem isn't only the delays, it's that since Ryan took over as Editor in Chief I suspect his workload is too large.
This also happened with the Nvidia GTX 960 review. He told 5-6 people (including me) for 5 weeks that it would come, and then it didn't and he stopped responding to inquiries about it.
Now in what way is that a good way to build a good relationship and trust between you and your readers?
I love Ryan's writing; this article was one of the best I've read in a long time. But not everyone is good at everything - maybe Ryan needs to focus on only GPU reviews and not running the site or whatever his other responsibilities are as Editor in Chief.
Because the reviews are what most ppl. come here for and what built this site. You guys are amazing, but as far back as I can remember AT never used to miss releasing articles the same day the NDA was lifted. And promising things and then not delivering, sticking your head in the sand and not even apologizing, isn't a way to build up trust and uphold and strengthen the large following this site has.
I love this site, been reading it since the 1st year it came out, and that's why I care and I want you to continue and prosper.
Since a lot of ppl. can't read the twitter feed, then what you did here: http://www.anandtech.com/show/8923/nvidia-launches...
Is the way to go if something comes up, but then you have to deliver on your promises.
Kristian Vättö - Thursday, July 2, 2015 - link
Just to add, if there are any ideas of keeping you guys better informed, please fire away. In the meantime, Twitter is probably the best way to stay updated on whatever each one of us is doing :)
chizow - Thursday, July 2, 2015 - link
I think a pipeline story would've been good there; using social media to convey the message to a readership that may not even pay attention to it (I don't even think the twitter sidebar shows on my iPhone 6 Plus) is not a great way to do things.
A few words saying the review would be late for XYZ reasons, with a teaser to the effect of "but you can see the results here at 2015 AT Bench", would've sufficed for most and also given assurance that the bulk of testing and work was done, and that AT wasn't just late for XYZ reasons.
Refuge - Friday, July 3, 2015 - link
I've never tweeted in my life.
Yet I saw 4 separate times last week where it was mentioned that Ryan was sick, and that the Fury X review was coming as soon as he felt better.
chizow - Friday, July 3, 2015 - link
Where did you read this news though? Some forum thread? Twitter sidebar? I mean, I guess when everyone is looking for a front page story for the review, something in the actual front page content might have been the best way to get the message across. Even something as simple as a link to the bench results would've gone a long way to help educate/inform prospective buyers in a timely manner, don't you think? Because at the end of the day, that's what these reviews are meant for, right?
Ian Cutress - Friday, July 3, 2015 - link
Twitter sidebar, the first line in the review that ended on the front page on the day of launch and in the comments for every review since launch day.testbug00 - Sunday, July 5, 2015 - link
Not everyone reads comments, and the twitter feed is not viewable on my Six Plus. And likely not for anyone viewing on a phone and some tablets.
Yeah, all non-ideal for anyone actually looking for the review; those just make it seem more like AT was trying to hide the fact their Fury X review wasn't ready, despite there being no reason to hide it and a legitimate reason for the delay.
Again, even a pipeline story with Ryan being sick and a link to the bench results would've been tons better at informing AT's readership, but I guess we'll be sure to comb through half conversations and completely unrelated comments in the future to stay informed.
Digidi - Thursday, July 2, 2015 - link
Thank you for the good review. The only thing I'm missing is a synthetic benchmark of the polygon output rate, because this seems to be the bottleneck of Fury X.
Ryan Smith - Thursday, July 2, 2015 - link
We do have TessMark results in the synthetics section. You would be hard pressed to hit the polygon limit without using tessellation to generate those polygons.
Digidi - Friday, July 3, 2015 - link
Thank you Ryan for the reply.
I don't understand the difference between tessellation and polygon output. I thought there are two ways of polygon output:
1. Tessellation, where the GPU creates smaller triangles inside a big triangle.
2. Polygon output, which I thought was the rate of triangles the GPU can handle when it gets them from the CPU.
Ryan Smith - Friday, July 3, 2015 - link
Tessellation is where the GPU creates smaller triangles from a larger triangle.
Digidi - Friday, July 3, 2015 - link
And where do the larger triangles come from?
Digidi - Friday, July 3, 2015 - link
And the larger triangles come from the CPU? So what I miss is a benchmark with small triangles from the CPU.
Digidi - Friday, July 3, 2015 - link
Thank you. I have one more question. Where do the large triangles come from?
I think they come from the CPU. What happens when small triangles come from the CPU and not from the GPU? That's the benchmark I'm missing.
silverblue - Friday, July 3, 2015 - link
NVIDIA have a page on the subject (it's quite old, but the principle is the same):
http://http.developer.nvidia.com/GPUGems2/gpugems2...
Ryan Smith - Friday, July 3, 2015 - link
One wouldn't send a bunch of small triangles from the CPU. It's inefficient.
Digidi - Saturday, July 4, 2015 - link
Thank you for the link silverblue, and thank you for the answer Ryan. But why is it inefficient to get as many polygons as possible from the CPU?
Is tessellation on the GPU so much better than the transfer of the polygons from CPU to GPU?
silverblue - Sunday, July 5, 2015 - link
Most modern GPUs are far better at generating geometry than even the best CPUs.
Scali - Tuesday, July 7, 2015 - link
The CPU being a bottleneck with processing triangles is why nVidia introduced the first 'GPU' with the GeForce256: the geometry is uploaded only once to VRAM, and the GPU will perform T&L for each frame, to avoid the CPU bottleneck.Tessellation is the next step here: Very detailed geometry takes a lot of VRAM and a lot of bandwidth, causing a bottleneck trying to feed the rasterizers. So instead of storing all geometry in VRAM, you store a low-res mesh together with extra information (eg displacement maps) so the GPU can generate the geometry on-the-fly, to feed the backend more efficiently.
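To put rough numbers on the storage argument above, here is a hypothetical back-of-the-envelope comparison of keeping a fully tessellated mesh in VRAM versus keeping a coarse mesh plus a displacement map and letting the GPU expand it on the fly. The mesh size, tessellation factor, and vertex format below are assumptions chosen only to show the orders of magnitude involved.

```python
# Back-of-the-envelope: VRAM cost of pre-tessellated geometry vs. a coarse
# mesh + displacement map expanded on the GPU. All sizes are illustrative.

VERTEX_BYTES = 32          # e.g. position + normal + UV, packed
INDEX_BYTES = 4

def mesh_bytes(triangles: int) -> int:
    # Crude estimate: ~0.5 unique vertices per triangle in a closed mesh,
    # plus 3 indices per triangle.
    vertices = triangles // 2
    return vertices * VERTEX_BYTES + triangles * 3 * INDEX_BYTES

coarse_triangles = 10_000          # low-res control mesh stored in VRAM
tess_factor = 16                   # each patch edge split 16 ways
amplification = tess_factor ** 2   # triangles generated per input triangle (roughly)

dense_triangles = coarse_triangles * amplification

pre_tessellated = mesh_bytes(dense_triangles)                  # dense mesh stored in VRAM
on_the_fly = mesh_bytes(coarse_triangles) + 2048 * 2048 * 2    # coarse mesh + 16-bit displacement map

print(f"dense mesh in VRAM : {pre_tessellated / 2**20:8.1f} MiB")
print(f"coarse mesh + map  : {on_the_fly / 2**20:8.1f} MiB")
```

Even with generous assumptions for the coarse path, the pre-tessellated copy comes out roughly an order of magnitude larger, which is the VRAM/bandwidth problem on-the-fly tessellation sidesteps.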
CrazyElf - Thursday, July 2, 2015 - link
Thanks for the review - that must have taken quite a bit to write.
I hope you are feeling better now and back to normal.
As far as the GPU, yeah, it's a disappointment that they were not able to beat the 980Ti. They needed 96 ROPs (I'd trade down to 3840 SPs for that, even accounting for die area), 8GB of VRAM (you might run out of VRAM before you run out of core), and probably 12 ACEs as well. Maybe 1/32 FP64 like on GM200 would have helped too.
This thing needs to go down to $550 USD now. That and custom PCBs.
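As a rough sanity check on the ROP suggestion above: peak pixel fill rate is simply ROP count times core clock. The sketch below uses the commonly quoted 64 ROPs at 1050 MHz for Fury X and 96 ROPs at roughly 1000 MHz for the 980 Ti; the figures are approximate and ignore boost behaviour and memory bandwidth limits, so treat this as illustration only.

```python
# Rough peak pixel fill rate = ROPs * core clock. Clocks are approximate
# reference figures; real throughput depends on boost, blending, and memory.

cards = {
    "R9 Fury X (64 ROPs @ 1050 MHz)":   64 * 1050e6,
    "GTX 980 Ti (96 ROPs @ ~1000 MHz)": 96 * 1000e6,
    "Hypothetical Fiji w/ 96 ROPs":     96 * 1050e6,
}

for name, pixels_per_s in cards.items():
    print(f"{name:36s} {pixels_per_s / 1e9:6.1f} Gpixels/s")
```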
D. Lister - Thursday, July 2, 2015 - link
"This thing needs to go down to $550 USD now."
AMD knew where their product stood compared to the competition.
I believe Fury X would have made a much bigger splash if it launched at $600. As things stand, price is sadly the ONLY area where the X equals the Ti.
I mean, AMD has always remained highly competitive primarily through performance/$. If they don't even have that now, then what do they have left?
extide - Monday, July 6, 2015 - link
No, 96 ROPs would not have saved it. The card has PLENTY of pixel power; it's GEOMETRY that is the issue, which is something that AMD/ATI has always lagged nVidia on. Kind of unfortunate here... :(
logas - Thursday, July 2, 2015 - link
As a long time AMD fan I'm disappointed: Project Quantum uses an Intel CPU? Instead of sad, it would have been funny if they had used a VIA CPU. And yes, there are those of us who use their PC with a TV from the couch and want HDMI 2.0 for 4K; still waiting.
BTW, great review - just AMD still disappointing.
piroroadkill - Thursday, July 2, 2015 - link
Why would you be disappointed that Quantum uses a 4790K? It's the best chip for the job; AMD doesn't make any decent high end CPUs these days.
K_Space - Thursday, July 2, 2015 - link
+1
AMD was quite clear: they listened to the community who wanted flexibility; the cream of the crop for Z97 is without a doubt the 4790K; however, that won't be the only CPU choice.
Kutark - Friday, July 3, 2015 - link
Kutark - Friday, July 3, 2015 - link
You realize you answered your own question right?
He is disappointed because AMD simply doesn't have a CPU that can properly back 2x Fury X's.
That's just sad, it really is.
silverblue - Friday, July 3, 2015 - link
And in GRID Autosport, not even Intel has a CPU that can properly back a SINGLE Fury X... :)
tipoo - Thursday, July 2, 2015 - link
Would you rather have a high end PC with an AMD CPU, with much lower per-core performance? I can see the irony, sure, but it's still the better choice right now.
CrazyElf - Thursday, July 2, 2015 - link
Ryan, one other question: the VRMs - how hot do they get? Reportedly, some reviews are seeing pretty high VRM temperatures. The 6-phase DirectFET design does allow for a lot of margin for error, but it's still best to have cool running VRMs.
Stuka87 - Thursday, July 2, 2015 - link
Some reviews are showing temps of 100C and are making a huge deal about it. But these same reviewers have shown a past nVidia card that had them up close to 120C, and thought nothing of it. The fact is the VRMs are rated for 150C, and do not lose any efficiency until they hit ~130C.
There is NOTHING wrong with the Fury having them at 100C.
Ryan Smith - Thursday, July 2, 2015 - link
The current tools do not report VRM temperatures for the card (AFAIK). I've taken an IR thermometer to the card as well, though there's nothing terribly interesting to report there.
guld82 - Thursday, July 2, 2015 - link
Civilization: Beyond Earth
"The bigger advantage of Mantle is really the minimum framerates, and here the R9 Fury X soars. At 1440p the R9 Fury X delivers a minimum framerate of 50.5fps"
1440p should be changed to 4k
jeffrey - Thursday, July 2, 2015 - link
Ryan Smith, any update on GTX 960?
Ryan Smith - Thursday, July 2, 2015 - link
As soon as Fury is out of the way.
chizow - Thursday, July 2, 2015 - link
Fair review Ryan. Unfortunately for AMD, Fury X will go down as an underwhelming product that failed to meet the overhyped build-up from AMD and their fans. It's not a terrible product by itself, as it does perform quite well, but it simply didn't live up to its billing, much of which came directly from AMD themselves when they made very public claims like:
1) HBM enables them to make the World's Fastest GPU. This didn't happen.
2) Easily beats the 980Ti, based on their internal benchmarks. This didn't happen either.
3) Fury X is an Overclocker's Dream. We've seen anything but this being the case.
4) Water Cooling allows this to be a cool and quiet part. Except that pump whine, that AMD said was fixed in shipping samples, but wasn't.
5) 4GB is enough. Doesn't look like it, especially at the resolutions and settings a card like this is expected to run.
Add to that the very limited supply at launch, and the Fury X launch will ultimately be viewed as a flop. I just don't know where AMD is going to go from here. R9 300 Rebrandeon happened (told you AMD fanboys this months ago) and those parts still aren't selling. R9 Fury X, while still AMD's best performing part, is still 3rd fastest overall at the same price point as the faster 980Ti, and in extremely limited supply. Will this be enough to sustain AMD into 2016, where the hopes of Zen and Arctic Islands turning around their fortunes loom on the horizon? We'll see, but until then it will be a bumpy road for AMD with some cloudy skies ahead!
Thatguy97 - Thursday, July 2, 2015 - link
i fear this
Stuka87 - Thursday, July 2, 2015 - link
The pump whine was fixed. Only very early cards have the old pump; later cards do not. And even with the louder pump, it's STILL quieter than a reference 980Ti.
chizow - Thursday, July 2, 2015 - link
Even if that is the case, that's not what AMD was telling the press when it was brought to their attention during the review phase. Obviously it would be difficult, if not impossible, for AMD to correct the problem in shipping samples given how rushed they were just getting review samples out to the press.
AMD was dishonest about the pump issue, plain and simple, and is just hoping the pump whine falls below any individual's noise tolerance threshold.
As for comparisons to 980Ti, the Fury X will certainly be quieter in terms of pure dB under load, but the noise profile of that pump whine is going to be far more disturbing at any other point in time.
mapesdhs - Friday, July 3, 2015 - link
Beats me why nobody makes more of the practicality issues of trying to fit such a card in a case which in all likelihood (for this class of GPU) _already has_ a water cooler for the CPU, and don't get me started on how one is supposed to cram in two of these things for CF (not that I'd bother given the state of the drivers; any DX9 fix yet? It's been over a year).
Without a clear performance advantage, which it doesn't have, it needed to be usefully cheaper, which it's not. Add in the lesser VRAM and no HDMI 2.0, and IMO AMD has blown this one. It wouldn't be so bad except it was AMD that chucked out much of the prelaunch hype. Other sites show differences to the 980 Ti of a lot more than 10% at 1440p (less so at 4K of course, though with only 4GB and CF build issues I don't see 4K as being a fit for this card anyway). Factory OC'd 980 Tis are only a little more, but significantly quicker even at 4K.
chizow - Sunday, July 5, 2015 - link
Yeah, Fury X is not really a smaller form factor, it's just different. Fitting that double-thick rad is going to pose a much bigger problem for most cases vs. a full sized 9.5" blower, given Nvidia's NVTTM reference fits most any mini-ITX case that can take 2 slots.
As for Fury X price and perf, I think the 980Ti preemptively cut into AMD's plans and they just didn't want to take another cut in price when they had their sights set on that $800+ range. But yeah, Fury X and, by proxy, Fury Air and Fury Nano will be extremely vulnerable at 1080p and 1440p given they will be slower than Fury X, which already has slower and last-gen cards like the 290X/390X/780Ti and GTX 980 on its heels.
I don't think AMD could've afforded more price compression, as there are simply no spots that make any sense for Fury Air and Fury Nano, which again goes to my point that they should've just launched these parts as the top end of their new R9 300 series stack instead of the Rebrandeon + Fury strategy.
K_Space - Thursday, July 2, 2015 - link
Between now and 2016 (preferably before the holiday season) I see AMD dropping the Fury X price and churning out better drivers; so it's not all too bleak. But it's still annoying that both of these could have been fixed before launch.
Yep, the Fury X is essentially vaporware at this point. It basically doesn't exist. Some tech journalists with inside information have estimated that fewer than 1000 were available for NA at launch. Definitely some supply issues to say the least, which I suspect are mostly due to the HBM.
I have no idea why AMD hyped up Fiji so much prior to launch. In a sense they just made it that much more difficult for themselves. What kind of reaction were they expecting with rhetoric like "HBM has allowed us to create the fastest GPU in the world", along with some of the most cherry-picked pre-launch internal benchmarks ever conceived? It just seems like they've given up and are only trying to engage their most zealous fanboys at this point.
All that being said, I don't think Fury X is a terrible card. In fact I think it's the only card in AMD's current lineup even worth considering. But unfortunately for AMD, the 980Ti is the superior card right now in practically every way.
chizow - Thursday, July 2, 2015 - link
Yep, it is almost as if they set themselves up to fail, but now it makes more sense in terms of their timing and delivery. They basically used Fury X to prop up their Rebrandeon stack of 300 series, as they needed a flagship launch with Fury X in the hopes it would lift all sails in the eyes of the public. We know the Rebrandeon 300 series was set in stone and ready to go as far back as Financial Analysts Day (hi again, all AMD fanboys who said I was wrong), with early image leaks and drivers confirming this as well.
But Fury X wasn't ready. Not enough chips and cards ready, cooler still showing problems, limited worldwide launch (I think 10K max globally). I think AMD wanted to say and show something at Computex but quickly changed course once it was known Nvidia would be HARD launching the 980Ti at Computex.
980Ti launch changed the narrative completely, and while AMD couldn't change course on what they planned to do with the R9 Rebrandeon 300 series and a new "Ultra premium" label Fury X using Fiji, they were forced to cut prices significantly.
In reality, at these price points and with Fury X's relative performance, they really should've just named it R9 390X WCE and called it a day, but I think they were just blindsided by the 980Ti not just in performance being so close to Titan X, but also in price. No way they would've thought Nvidia would ask just $650 for 97% of Titan X's performance.
So yeah, brilliant moves by Nvidia, they've done just about everything right and executed flawlessly with Maxwell Mk2 since they surprised everyone with the 970/980 launch last year. All the song and dance by AMD leading up to Fury X was just that, an attempt to impress investors, tech press, loyal fans, but wow that must have been hard for them to get up on stage and say and do the things they did knowing they didn't have the card in hand to back up those claims.
kn00tcn - Thursday, July 2, 2015 - link
do you want a nobel prize after all that multiple post gloating? you're not the one leaking, we already knew fiji was the only new gpu, i never saw any 'fanboys' as you call them saying the 3 series will be new & awesome... like you're talking to an empty room & patting yourself on the back
guess who is employed at amd? the guy that did marketing at nvidia for a few years, why do you think fury x is called fury x?
FLAWLESS maxwell hahahahaha.... 970 memory aside, how about all the TDR crashes in recent drivers, they even had to put out a hotfix after WHQL (are we also going to ignore kepler driver regression?)
yes amd has to impress everyone, that is the job of marketing & the reality of depending on TSMC with its cancelled 32nm & delayed/unusable 20nm... every company needs to hype so they dont implode, all these employees have families but you're probably not thinking of them
how the heck is near performance at cold & quiet operation a flop!? there are still 2 more air cooled fiji releases, including a 175watt one
'4gb isnt enough', did you even look at the review? this isnt geforce FX or 2900xt, talk about a reverse fanboy...
chizow - Thursday, July 2, 2015 - link
Wow, awesome, where were all these nuggets of wisdom and warnings of caution tempering the expectations of AMD fans in the last few months? Oh right, nowhere to be found! Yep, plenty with high conviction and authority insisting R9 300 wouldn't be a rebrand, that Fiji and HBM would lead AMD to the promised land and be faster than the overpriced Nvidia offerings of 980, Titan X etc etc.
http://www.anandtech.com/show/9239/amd-financial-a...
http://www.anandtech.com/show/9266/amd-hbm-deep-di...
http://www.anandtech.com/show/9383/amd-radeon-live...
http://www.anandtech.com/show/9241/amd-announces-o...
http://www.anandtech.com/show/9236/amd-announces-r...
No Nobel Prize needed, the ability to gloat and say I told you so to all the AMD fanboys/apologists/supporters is plenty! Funny how none of them bothered to show up and say they were wrong, today!
And yes the 970, they stumbled with the memory bandwidth mistake, but did it matter? No, not at all. Why? Because the error was insignificant and did not diminish its value, price or performance AT ALL. No one cared about the 3.5GB snafu outside of AMD fanboys, because 970 delivered where it mattered, in games!
Let's completely ignore the fact 970/980 have led Nvidia to 10 months of dominance at 77.5% market share, or the fact the 970 by itself has TRIPLED the sales of AMD's entire R9 200 series on Steam! So yes, Nvidia has executed flawlessly and as a result, they have pushed AMD to the brink in the dGPU market.
And no, 4GB isn't enough, did YOU read the review? Ryan voiced concern throughout the entire 4GB discussion, saying that while it took some effort, he was able to "break" the Fury X and force a 4GB limit. That's only going to be a BIGGER problem once you CF these cards and start cranking up settings. So yeah, if you are plunking down $650 on a flagship card today, why would you bother with that concern hanging over your head when for the same price, you can buy yourself 50% more headroom? Talk about reverse fanboyism: 3.5GB isn't enough on a perf midrange card, but it's jolly good A-OK for a flagship card "Optimized for 4K", huh?
And speaking of those employees and families: don't you think it's in their best interest, and that they are secretly hoping, that AMD folds and gets bought out or they get severance packages to find another job? LOL. It's a sinking ship; if they aren't getting laid off they're leaving for greener pastures. Everyone there is just trying to stay afloat, hoping the rumors are true and a company with deep pockets will come and save them from the sinking dead weight that has become of ATI/AMD.
D. Lister - Thursday, July 2, 2015 - link
My concern is, the longer AMD's current situation lingers, the higher the chance that the new buyers would simply cannibalize AMD's tech and IPs and permanently put down the brand "AMD", due to the amount of negative public opinion attached to it.
chizow - Monday, July 6, 2015 - link
@D. Lister, sorry I missed this. I think AMD as a brand/trademark will be dead regardless. It has carried a value brand connotation for some time, and there was even some concern about it when AMD chose to drop the ATI name from their graphics cards a few years back. Radeon, however, I think will live on with whoever buys them up, as it still carries good marketplace brand recognition.
Intel999 - Friday, July 3, 2015 - link
@Chizow
Dude, what's the deal? Did an AMD logoed truck run over your dog or something?
Seems like every article regarding AMD has you spewing out hate against them. I think we all realize Nvidia is in the lead. Why exert so much energy to put down a company that you have no intention of ever buying from?
AMD wasn't even competing in the high end prior to the Fury X release. So any sales they get are sales that would have gone to the 980 by default. So they have improved their position. A home run? No.
Take pleasure in knowing you are a member of the winning team. Take a chill pill and maybe the comments sections can be more informative for the rest of us.
I, for one, would prefer to not having to skip over three long winded tirades on each page that start with Chizow.
chizow - Friday, July 3, 2015 - link
@Intel999, if you want to digest your news in a vacuum, stick your head in the sand and ignore the comments section as you've already self-prescribed!
For others, a FORUM is a place to discuss ideas, exchange points of view, provide perspective and to keep both companies and fans/proponents ACCOUNTABLE and honest. If you have a problem, maybe the internet isn't the place for you!
Do you go around in every Nvidia or Intel thread or news article and ask yourself the same anytime AMD is mentioned or brought up? What does this tell you about your own posting tendencies???
Again, if you, for one, would prefer to skip over my posts, feel free to do so! lol.
silverblue - Friday, July 3, 2015 - link
I think you need to blame sites such as WCCFTech rather than fanboys/enthusiasts in general for the "Fury X will trounce 980 Ti/Titan X" rumours.
Also, if the 970 memory fiasco didn't matter, why was there a spate of returns? It's obvious that the users weren't big enough NVIDIA fanboys to work around the issue... going by your logic, that is.
The 970 isn't a mid-range card to anybody who isn't already rocking a 980 or above. 960, sure.
Fury X is an experiment, one that could've done with more memory of course, and I usually don't buy into the idea of experiments, but at least it wasn't a 5800/Parhelia/2900 - it's still the third best card out there with little breathing space between all three (depending on game, of course), not quite what AMD promised unless they plan to fix everything with a killer driver set (unlikely). The vanilla Fury with its GDDR5 may stand to outperform it, albeit at a slightly higher power level.
chizow - Friday, July 3, 2015 - link
No silverblue, you contributed just as much to the unrealistic expectations during the Rebrandeon run-up, along with unrealistic expectations for HBM and Fury X. But in the end it doesn't really matter; AMD failed to meet their goal even though Nvidia handed it to them on a silver platter by launching the 980Ti 3 weeks ahead of AMD.
And a spate of returns for the 970 memory fiasco? Have any proof of that? Because I have plenty of proof that shows Nvidia rode the strength of the 970 to record revenues, near-record market share, and a 3:1 ownership ratio on Steam compared to the entire R9 200 series.
If Fury X is an experiment as you claim, it was certainly a bigger failure than what was documented here at a time AMD could least afford it, being the only new GPU they will be launching in 2015 to combat Nvidia's onslaught of Maxwell chips.
mapesdhs - Friday, July 3, 2015 - link
A lot of the 970 hate reminded me of the way some people carried on dumping on OCZ long after any trace of their old issues was remotely relevant. Sites did say that the 970 RAM issue made no difference to how it behaved in games, but of course people choose to believe what suits them; I even read comments from some saying they wanted it to be all deliberate, as that would more closely match their existing biased opinions of NVIDIA.
I would have loved to have seen the Fury X be a proper rival to the 980 Ti - the market needs the competition - but AMD has goofed on this one. It's not as big a fiasco as BD, but it's bad enough given the end goal is to make money and further the tech.
Fan boys will buy the card of course, but they'll never post honestly about CF issues, build issues, VRAM limits, etc.
It's not as if AMD didn't know NV could chuck out a 6GB card; remember NV was originally going to do that with the 780 Ti but didn't bother in the end because they didn't have to. Releasing the 980 Ti before the Fury X was very clever, it completely took the wind out of AMD's sails. I was expecting it to be at least level with a 980 Ti if it didn't have a price advantage, but it loses on all counts (for all the 4K hype, 1440p is far more relevant atm).
silverblue - Friday, July 3, 2015 - link
How about you present proof of such indiscretions? I believe my words contained a heavy dose of IF and WAIT AND SEE - speculation instead of presenting facts when none existed at the time. Didn't you say Tahiti was going to be a part of the 300 series when in fact it never was? I also don't recall saying Fury X would do this or do that, so the burden of proof is indeed upon you.
Returns?
http://www.techpowerup.com/209409/perfectly-functi...
http://www.kitguru.net/components/graphic-cards/an...
http://www.guru3d.com/news-story/return-rates-less...
I can provide more if you like. The number of returns wasn't exactly a big issue for NVIDIA, but it still happened. A minor factor which may have resulted in a low number of returns was the readiness for firms such as Amazon and NewEgg to offer 20-30% rebates, though I imagine that wasn't a common occurrence.
Fury X isn't a failure as an experiment, the idea was to integrate a brand new memory architecture into a GPU and that worked, thus paving the way for more cards to incorporate it or something similar in the near future (and showing NVIDIA that they can go ahead with their plans to do the exact same thing). The only failure is marketing it as a 4K card when it clearly isn't. An 8GB card would've been ideal and I'd imagine that the next flagship will correct that, but once the cost drops, throwing 2GB HBM at a mid-range card or an APU could be feasible.
chizow - Sunday, July 5, 2015 - link
I've already posted the links, and you clearly stated you didn't think AMD would Rebrandeon their entire 300 desktop retail series when they clearly did. I'm sure I didn't say anything about Tahiti being rebranded either, since it was obvious Tonga was being rebranded and is basically the same thing as Tahiti, but you were clearly skeptical the x90 part would just be a Hawaii rebrand when indeed that became the case.
And lmao at your links, you do realize that just corroborates my point that the "spate of 970 returns" you claimed was a non-issue, right? 5% is within the range of typical RMA rates, so to claim Nvidia experienced higher than normal return rates due to the 3.5GB memory fiasco is nonsense, plain and simple.
And how isn't Fury X a failed experiment when AMD clearly had to make a number of concessions to accommodate HBM, which ultimately led to 4GB limitations on their flagship part that is meant to go up against 6GB and 12GB and even falls short of its own 8GB rebranded siblings?
silverblue - Monday, July 6, 2015 - link
No, this is what was said in the comments for http://www.anandtech.com/comments/9241/amd-announc...
You: "And what if the desktop line-up follows suit? We can ignore all of them too? No, not a fanboy at all, defend/deflect at all costs!"
Myself: "What if?
Nobody knows yet. Patience, grasshopper."
Dated 15th May. You'll note that this was a month prior to the launch date of the 300 series. Now, unless you had insider information, there wasn't actually any proof of what the 300 series was at that time. You'll also note the "Nobody knows yet." in my post in response to yours. That is an accurate reflection of the situation at that time. I think you're going to need to point out the exact statement that I made. I did say that I expected the 380 to be the 290, which was indeed incorrect, but again without inside information, and without me stating that these would indeed be the retail products, there was no instance of me stating my opinions as fact. I think that should be clear.
RMA return rates: https://www.pugetsystems.com/labs/articles/Video-C...
Fury X may or may not seem like a failed experiment to you - I'm unsure as to what classifies as such in your eyes - but even with the extra RAM on its competitors, the gap between them and Fury X at 4K isn't exactly large, so... does Titan X need 12GB? I doubt it very much, and in my opinion it wouldn't have the horsepower to drive playable performance at that level.
chizow - Monday, July 6, 2015 - link
There are plenty of other posts from you stating similar things, Silverblue, hinting at tweaks at the silicon and GCN level when none of that actually happened. And there was actually plenty of additional proof besides what AMD already provided with their OEM and mobile rebrand stacks. The driver INFs I mentioned have always been a solid indicator of upcoming GPUs and they clearly pointed to a full stack of R300 Rebrandeons.
As for RMA rates, lol, yep, 5% is well within expected RMA return rates, so "spate" is not only overstated, it's an inaccurate characterization when most 970 users would not notice or care to return a card that still functions without issue to this day.
And how do you know the gap between them isn't large? We've already seen numerous reports of lower min FPS, massive frame drops/stutters, and hitching on Fury X as it hits its VRAM limit. It's a gap that will only grow in newer games that use more VRAM, or in multi-GPU solutions that haven't been tested yet that allow the end-user to crank up settings even higher. How do you know 12GB is or isn't needed if you haven't tested the hardware yourself? While 1x Titan X isn't enough to drive the settings that will exceed 6GB, 2x in SLI certainly is, and already I've seen a number of games such as AC: Unity, GTA5, and SoM use more than 6GB at just 1440p. I fully expect "next-gen" games to pressure VRAM even further.
5150Joker - Thursday, July 2, 2015 - link
If you visit the Anandtech forums, there are still a few hardcore AMD fanboys like Silverforce and RussianSensation making up excuses for Fury X and AMD. Those guys live in a fantasy land and honestly, the impact of Fury X's failure wouldn't have been as significant if stupid fanboys like the two I mentioned hadn't hyped Fury X so much.
To AMD's credit, they did force NVIDIA to price the 980 Ti at $650 and release it earlier; I guess that means something to those that wanted Titan X performance for $350 less. Unfortunately for them, their fanboys are more of a cancer than a help.
chizow - Friday, July 3, 2015 - link
Hahah yeah, I don't visit the forums much anymore; mods tried getting all heavy-handed in moderation a few years back, with some of the mods being the biggest AMD fanboys/trolls around. They also allowed daily random new accounts to accuse people like myself of shilling, and when I retaliated they again threatened action, so yeah, simpler this way. :)
I've seen some of RS's postings in the article comments sections though. He used to be a lot more even keeled back then, but at some point his mindset turned into best bang for the buck (basically devolving into 2-gen-old FS/FT prices) trumping anything new without considering the reality: what he advocates just isn't fast enough for those looking for an UPGRADE. I also got a big chuckle out of his claims that the 7970 is some kind of god card when it was literally the worst price:performance increase in the history of GPUs, causing this entire 28nm price escalation to begin with.
But yeah, can't say I remember Silverforce, not surprising though they overhyped Fury X and the benefits of HBM to the moon, there's a handful of those out there and then they wonder why everyone is down on AMD after none of what they hoped/hyped for actually happens.
mapesdhs - Friday, July 3, 2015 - link
I eventually obtained a couple of 7970s to bench; sure it was quick, but I was shocked how loud the cards were (despite having big aftermarket coolers, really no better than the equivalent 3GB 580s), and the CF issues were a nightmare.
D. Lister - Thursday, July 2, 2015 - link
@chizow
Personally I think the reason behind the current FX shortages is that Fury X was originally meant to be air-cooled, trouncing the 980 by 5-10% and priced at $650 - but then NV rather sneakily launched the Ti, a much more potent GPU compared to an air-cooled FX, at the same price, screwing up AMD's plan to launch at Computex. So to reach some performance parity at the given price point, AMD had to hurriedly put CLCs on some of the FXs and then OC the heck out of them (that's why the original "overclockers' dream" is now an OC nightmare - no more headroom left) and push their launch to E3.
So I guess once AMD finish respecing their original air-cooled stock, supplies would gradually improve.
chizow - Friday, July 3, 2015 - link
It's possible; WCE Fury was rumored even last year I believe, so I don't think it was a last ditch effort by AMD. I do think, however, it was meant to challenge Titan, especially with the new premium branding, and it just fell way short of not only Titan X but also the 980Ti.
I do fully agree though that they've basically eaten up their entire OC headroom in a full-on attempt to beat the 980Ti, and they still didn't make it. Keep in mind we're dealing with a 275W rated part with the benefit of no additional leakage from temperature. The 275W rated air cooled version is no doubt going to be clocked slower and/or have functional units disabled, as it won't benefit from lower leakage from operating temps.
I think the delayed/staggered approach to launch is because they are still binning and finalizing the specs of every chip, Fury X is easy, full ASIC, but after that they're having a harder time filling or determining the specs on the Nano and Air Fury. Meanwhile, Nvidia has had months to stockpile not only Titan X chips, but anything that was cut down for the 980Ti and we've seen much better stock levels despite strong sell outs on the marketplace.
silverblue - Thursday, July 2, 2015 - link
It's very difficult to find, but I've managed to locate one here in the UK from CCL Online... a PowerColor model (AX R9 FURY X 4GBHBM-DH) for £550. Shame about the price, though it isn't any more expensive than the 980 Ti.
mapesdhs - Friday, July 3, 2015 - link
In which case I'd get the 980 Ti every time. Less hassle & issues all round. Think ahead to adding a 2nd card; the build issues with a 980 Ti are nil.
deathBOB - Thursday, July 2, 2015 - link
Despite losing to the 980Ti, I still think this card offers a good value proposition. I would gladly give up a small amount of performance for better acoustics and more flexible packaging with the CLC (assuming AMD fixes the noise issue other sites are reporting).
silverblue - Thursday, July 2, 2015 - link
I never thought I'd say this, but drivers have to be causing some of the issues.
Fury X isn't such a bad deal; the quality of the product along with noise and temperatures (or the lack of them) can land it in smaller cases than the 980Ti. I do have to wonder why it performs comparatively better at 4K than at 1440p (I've seen weird regressions at 720p in the past), and why delta compression isn't helping as much as I expected. Overclockers may even find Fury (air-cooled) to be faster.
tviceman - Thursday, July 2, 2015 - link
If you overclock at all, it's not a small amount of performance you'd be giving up. The 980 Ti regularly gets a 20% performance gain from OCing, while Fury X is currently only getting 5-7%. At 1440p, you're looking at a 25% performance difference.
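For a rough idea of how those figures compound: an overclocking gap stacks multiplicatively on top of whatever stock gap already exists. The 10% stock difference at 1440p used below is an assumption for illustration only, as are the headroom figures quoted above.

```python
# How a stock gap compounds with different overclocking headroom.
# The 10% stock gap at 1440p is an assumed, illustrative figure.

stock_gap = 1.10     # assumed: 980 Ti ~10% ahead at stock at 1440p
oc_980ti = 1.20      # ~20% gain from overclocking (figure quoted above)
oc_fury_x = 1.06     # ~5-7% gain from overclocking (figure quoted above)

overall = stock_gap * oc_980ti / oc_fury_x
print(f"Overclocked 980 Ti vs. overclocked Fury X: ~{(overall - 1) * 100:.0f}% faster")
# -> roughly 25% under these assumptions
```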
looncraz - Thursday, July 2, 2015 - link
We don't know Fury X's full overclocking abilities yet; no one has unlocked the voltage. It might take a BIOS mod (shame on you AMD for locking the voltage at all, though!), but it will happen, and then we'll get an idea of what can be done.
Or more likely, AMD already "overclocked" and squeezed every last drop but 5-10% out of Fury X in a full-on attempt to match the 980Ti, and they still didn't make it.
andychow - Friday, July 3, 2015 - link
This makes no sense at all. AMD clearly claimed you could overclock this, but they lock the voltage? The only explanation is that the chip is already clocked at the highest level. I never believed I would agree with that troll chizow, but here we are.
chizow - Friday, July 3, 2015 - link
Haha well, maybe you should go back and read some of those "troll" posts and see who was trolling whom. ;)
Margalus - Friday, July 3, 2015 - link
They also claimed in their own pre-release benchmarks that Fury X beat the 980Ti and Titan in every single benchmark. Now when independent testers get them, it turns out to be 180° different: the Fury X gets beaten in every single benchmark. So they are probably lying about the overclocking also...
silverblue - Friday, July 3, 2015 - link
Yeah, the burden of proof is on AMD to explain how they reached those results - specs, drivers, clock speeds and so on.
n0x1ous - Thursday, July 2, 2015 - link
Great review Ryan. It was worth the wait and full of the deep analysis that we have come to expect and which makes you the best in the business. Glad you are feeling better, but do try to keep us better updated on the status should something like this happen again.
Mr Perfect - Thursday, July 2, 2015 - link
Why does Frame Rate Target Control have a max cap of 90FPS when 120Hz and 144Hz displays are the new big thing? I would think that a 144FPS cap would be great for running older titles on a new 144Hz screen.
Ian Cutress - Friday, July 3, 2015 - link
FRTC is probably still young and needs to be vetted. Aside from the eSports extreme-frame-rate case, it is being touted more as an energy-saving technology for mobile devices. With any luck, the range will increase over time, but don't forget that AMD is putting investment into FreeSync, and FreeSync-over-HDMI (we reported on it a while back), hoping that it becomes the norm in the future.
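(For anyone curious how a frame rate target works mechanically, the rough idea is just to sleep off whatever is left of each frame's time budget so the GPU isn't run flat out. A minimal sketch follows - this is an illustration, not AMD's actual FRTC implementation, and the 90 FPS value is simply the cap being discussed:)

    import time

    TARGET_FPS = 90                     # current FRTC ceiling; a 144 Hz panel would want 144
    FRAME_BUDGET = 1.0 / TARGET_FPS     # seconds allotted per frame

    def run_capped(render_frame):
        """Render in a loop, sleeping off leftover time so the GPU idles between frames."""
        while True:
            start = time.perf_counter()
            render_frame()                          # real rendering work happens here
            elapsed = time.perf_counter() - start
            if elapsed < FRAME_BUDGET:
                time.sleep(FRAME_BUDGET - elapsed)  # idle time is where the power savings come from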
nnof - Thursday, July 2, 2015 - link
Nice benchmarks overall. Curious as to why you would not test Fury OCs against 980 Ti OCs? Including The Witcher 3 in the benchmark rotation would also be nice to see.
tviceman - Thursday, July 2, 2015 - link
All of this. It would be nice for AnandTech to do separate OCing articles pitting cards of similar prices against each other.
chizow - Thursday, July 2, 2015 - link
We already know the outcome but yes it would be nice to see nonetheless. Fury "OC'd" can't even beat a non-OC'd 980 Ti in most of those results, so add another 15-20% lead to the 980 Ti and call it a day.
Ryan Smith - Thursday, July 2, 2015 - link
"Curious as to why you would not test Fury OC's against the 980TI's OC?"As a matter of policy we never do that. While its one thing to draw conclusions about reference performance with a single card, drawing conclusions about overclocking performance with a single card is a far trickier proposition. Depending on how good/bad each card is, one could get wildly different outcomes.
If we had a few cards for each, it would be a start toward getting enough data points to cancel out variance. But 1 card isn't enough.
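(To illustrate the variance point with made-up numbers - these are not review results - two ordinary samples of the same SKU can land on noticeably different maximum overclocks, so a single-sample OC-vs-OC chart could swing either way:)

    from statistics import mean, stdev

    # Hypothetical maximum stable core clocks (MHz) for several samples of two SKUs.
    card_a_samples = [1430, 1465, 1400, 1480, 1445]
    card_b_samples = [1120, 1150, 1095, 1160, 1135]

    for name, clocks in (("Card A", card_a_samples), ("Card B", card_b_samples)):
        print(f"{name}: mean {mean(clocks):.0f} MHz, spread +/- {stdev(clocks):.0f} MHz")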
chizow - Thursday, July 2, 2015 - link
You could test the same cards at multiple frequencies, Ryan - that way you're not trying to give an impression of "max OC" performance, but rather a range of performance you might expect IF you were able to OC that much on either card.
D. Lister - Friday, July 3, 2015 - link
Ryan, to us, the readers, AT is just one of several sources of information, and to us, the result of your review sample is just one of the results of many other review samples. As a journalist, one would expect you to have done at least some investigation regarding the "overclockers' dream" claim, posted your numbers and left the conclusion-making to those whose own money is actually going to be spent on this product - us, the customers.
I totally understand if you couldn't because of ill health, but, with all due respect, saying that you couldn't review a review sample because there weren't enough review samples to find some scientifically accurate mean performance number, at least to me, appears as a reason with less than stellar validity.
silverblue - Friday, July 3, 2015 - link
I can understand some of the criticisms posted here, but let's remember that this is a free site. Additionally, I doubt there were many Fury X samples sent out. KitGuru certainly didn't get one (*titter*). Finally, we've already established that Fury X has practically sold out everywhere, so AT would have needed to purchase a Fury X AFTER release and BEFORE they went out of stock in order to satisfy the questions about sample quality and pump whine.
nagi603 - Thursday, July 2, 2015 - link
"if you absolutely must have the lowest load noise possible from a reference card, the R9 Fury X should easily impress you."Or, you know, mod the hell out of your card. I have a 290X in a very quiet room, and can't hear it, thanks to the Accelero Xtreme IV I bolted onto it. It does look monstrously big, but still, not even the Fury X can touch that lack of system noise.
looncraz - Thursday, July 2, 2015 - link
The 5870 was the fastest GPU when it was released, and the 290X was the fastest GPU when it was released. This article makes it sound like AMD has been unable to keep up at all, but they've been trading blows. nVidia simply has had the means to counter effectively.
The 290X beat nVidia's $1,000 Titan. nVidia had to quickly respond with a 780Ti which undercut their top dog. nVidia then had to release the 980Ti at a seriously low price in order to compete with the, then unreleased, Fury X, and had to give that GPU 95% of the performance of their $1,000 Titan X.
nVidia is barely keeping ahead of AMD in performance, but was well ahead in efficiency. AMD just about brought that to parity with THEIR HBM tech, which nVidia will also be using.
Oh, anyone know the last time nVidia actually innovated with their GPUs? GSync doesn't count, that is an ages-old idea they simply had enough clout to see implemented, and PhysX doesn't count, since they simply purchased the company who created it.
tviceman - Thursday, July 2, 2015 - link
The 5870 was the fastest for 7 months, but it wasn't because it beat Nvidia's competition against it. Nvidia's competitor was many months late, and when it finally came out it was clearly faster. The 7970 was the fastest for 10 weeks, then was either slower or traded blows with the GTX 680. The 290X traded blows with Titan but was not clearly faster, and was then eclipsed by the 780 Ti 5 days later.
All in all, since the GTX 480 came out in March of 2010, Nvidia has solidly held the single-GPU performance crown. Sometimes by a small margin (GTX 680 launch vs. HD 7970), sometimes by a massive margin (GTX Titan vs. 7970 GHz), but besides a 10-week stint, Nvidia has been in the lead for over 5 years.
kn00tcn - Thursday, July 2, 2015 - link
Check reviews with newer drivers - the 7970 has gained more than the 680, and it's sometimes a similar story with the 290X vs. the 780/780 Ti depending on the game (it's a mess to dig up the info; some of it comes from the Kepler complaints).
Speaking of drivers, the 390X is using a different driver set than the 290X in reviews - that sure makes launch reviews pointless...
chizow - Thursday, July 2, 2015 - link
I see AMD fanboys/proponents say this often, so I'll ask you.
Is performance at the time you purchase and in the near future more important to you? Or are you buying for unrealized potential that may only be unlocked when you are ready to upgrade those cards again?
But I guess that is a fundamental difference and one of the main reasons I prefer Nvidia. I'd much rather buy something knowing I'm going to get Day 1 drivers, timely updates, and feature support as advertised when I buy, than the constant promises, long delays between significant updates, and feature gaps.
silverblue - Friday, July 3, 2015 - link
Good point, however NVIDIA has made large gains in drivers in the past, so there is definitely performance left on the table for them as well. I think the issue here is that NVIDIA has seemed - to the casual observer - to be less interested in delivering performance improvements for anything prior to Maxwell, perhaps as a method of pushing people to buy their new products. Of course, this wouldn't cause you any issues considering you're already on Maxwell 2.0, but what about the guy who bought a 680 which hasn't aged so well? Not everybody can afford a new card every generation, let alone two top end cards.
chizow - Sunday, July 5, 2015 - link
Again, it fundamentally speaks to Nvidia designing hardware and using their transistor budget to meet the demands of games that will be relevant during the course of that card's useful life.
Meanwhile, AMD may focus on archs that provide greater longevity, but really, who cares if it was always running a deficit for most of its useful life just to catch up and take the lead when you're running settings in new games that are borderline unplayable to begin with?
Some examples for GCN vs. Kepler would be AMD's focus on compute, where they always had a lead over Nvidia in games like Dirt that started using Global Illumination, while Kepler focused on geometry and tessellation, which allowed it to beat AMD in most relevant games of the DX9 to DX11 transition era.
Now Nvidia presses its advantage, as Maxwell's compute has caught up with and exceeded GCN while maintaining the advantage in geometry and tessellation, so in these games GCN and Kepler both fall behind. That's just called progress. The guy who thinks his 680 should still keep pace with a new-gen architecture meant to take advantage of features in new-gen games probably just needs to look back at history to understand: new-gen archs are always going to run new-gen games better than older archs.
chizow - Thursday, July 2, 2015 - link
+1, exactly - except for a few momentary anomalies, Nvidia has held the single-GPU performance crown and won every generation since G80. AMD did their best with the small-die strategy for as long as they could, but they quickly learned they'd never get there against Nvidia's monster 500+mm^2 chips, so they went big die as well. Fiji was a good effort, but as we can see, it fell short and may be the last grand effort we see from AMD.
Subyman - Thursday, July 2, 2015 - link
I think it is also very important to note that the 980 Ti is an excellent overclocker. That was the main reason why I chose it over the Fury X. A 980 Ti is practically guaranteed to get a 20%+ overclock, while the Fury X barely puts out a 7% increase. That sealed the deal for me.
ex_User - Thursday, July 2, 2015 - link
Guys, have mercy on the planet and your parents who pay electricity bills. 400W to play a silly game? C'mon!
silverblue - Friday, July 3, 2015 - link
I've said it before and I'll say it again: cheaper, more efficient cards are the way forward. The GTX 750 and 750 Ti were important in hammering this point home.
I just wish developers would try to get titles working at 60fps at high details on these sorts of cards instead of expecting us to pay for their poor coding (I'm looking at you, Ubisoft Montréal/Rocksteady).
mapesdhs - Friday, July 3, 2015 - link
Some care about power, others don't, most are in between. I was glad to switch from two 580s to a single 980; the former did use a lot of juice.
gozulin - Thursday, July 2, 2015 - link
Great work, Ryan. One typo in the conclusion though (thanks a lot, OSX!):
"would like to see AMD succeed and proposer".
It should be prosper instead of proposer.
Ryan Smith - Thursday, July 2, 2015 - link
Thanks. Fixed.
darkfalz - Thursday, July 2, 2015 - link
I really doubt the overall gaming experience of 4K at 30-50 FPS is better than 1440p at 50-100 FPS. Targeting 4K is silly. 4K is still SLI territory even for the 980 Ti.
kn00tcn - Thursday, July 2, 2015 - link
It's still playable, it beats consoles, and it beats past generations of cards; it would be silly if a 7970 did it (actually I found it silly in the 5870 days when Eyefinity was pushed - most everything had to have some reduced settings).
But consider this... push 4K, get people on 4K, get game devs to think about image/texture quality, get monitor prices to fall, get DisplayPort everywhere, be ready early so that it's the standard.
Plus, Fury gets relatively worse at lower resolutions, so what can they do other than optimize the driver's CPU load?
mr_tawan - Thursday, July 2, 2015 - link
Has anyone noticed that the 'Radeon' logo on the side of the card in the pictures on page 11 is in the opposite orientation? Is it 'flippable' or something?
n0x1ous - Thursday, July 2, 2015 - link
They do this for press shots so you can clearly read the label. It is NOT flippable and appears the right way up when mounted in a typical computer case.
Agent Smith - Thursday, July 2, 2015 - link
Cheers Ryan!
der - Thursday, July 2, 2015 - link
ANANDTECH KILLED THIS REVIEW!
RealBeast - Thursday, July 2, 2015 - link
@der, go take your medicine and get back into your padded cell.
Great review Ryan, a bit sad that AMD cannot get ahead on anything these days.
They really need to pull out a rabbit to give NVIDIA and/or Intel something to chase for a change.
They must realize that their survival is at stake.
versesuvius - Thursday, July 2, 2015 - link
Nano is the card to wait for. It will sell millions and millions and millions. And AMD is a fool to offer it for anything over $300. Despite the 4 GB RAM limitation, it will run every game currently on the market, and in the next 4 years, fine on the average to high systems which are the absolute, dictatorial majority of systems kids all over the world play on. The Enthuuuuusiasts can philosophize all they can, but it does not change anything. The size and power requirements of the Nano make it the card of choice for the 1 to (Enthusiast - 1) range of computer users, from home to academia to industry. Well done AMD.
IlllI - Thursday, July 2, 2015 - link
Great article, I've got a few questions. What is the difference between eDRAM and HBM?
Do you think we'll ever see HBM on a CPU?
Do you think better AMD drivers will close the performance gap at lower resolutions?
Do you think NVIDIA pays companies to optimize for their GPUs and put less focus on AMD GPUs, especially in 'The Way It's Meant to Be Played' sponsored games?
Kevin G - Thursday, July 2, 2015 - link
eDRAM uses a single die, or is integrated on-die with the logic. HBM is composed of several DRAM dies stacked on top of each other.
eDRAM tends to be connected via a proprietary link, making each implementation unique, whereas HBM has a JEDEC-standard interface.
HBM on a CPU is only a matter of time. Next year HBM2 arrives and will bring capacities that a complete consumer system can utilize.
Fiji seemingly does have some driver issues, judging by some weird frame time spikes. Fixing these will result in a smoother experience, but likely won't increase the raw FPS count by much.
With DX12 and Vulkan coming, I'd expect titles just going into development to focus on those new APIs rather than any vendor-specific technology. This does mean that the importance of drivers will only increase.
ajlueke - Thursday, July 2, 2015 - link
"HBM on a CPU is only a matter of time." That is actually one of the more interesting and exciting things coming out of the Fiji launch. The effect of slower system memory on AMD APUs has been pretty well documented. It will be interesting to see if we get socket AM4 motherboards with built in HBM2 memory for APUs to use as opposed to using system ram at all. It's also exciting to see that Nvidia is adopting this memory the next go around and who knows how small and powerful they can get their own GPUs to be. Definitely a great time for the industry!Since the Fury X is reasonably close to the 980 Ti, I would love to pick one up. AMD put a lot of the legwork in developing HBM, and without the Fury X, Nvidia likely wouldn't have even created the $649 variant that essentially obsoleted the Titan X. For those reasons feel like they deserve my money. And also I do want to play around with custom BIOS on this card a bit.
Now...if only there were any available. Newegg? Tiger? Amazon? Anyone? If they can't keep the supply chains full, impatience might drive me to team green after all.
silverblue - Friday, July 3, 2015 - link
Nah, just HBM for graphics memory. As HSA APUs shouldn't require the memory to be in two places at the same time, this will alleviate the latency of copying data from system memory to graphics memory. What's more, they don't really need more than 2GB for an APU.
I'm not sure, however, that such bandwidth will make a massive difference. The difference in performance between 2133MHz and 2400MHz DDR3 is far smaller than that between 1866 and 2133 in general. You'd need to beef up the APU to take advantage of the bandwidth, which in turn makes for a larger chip. 2GB would have 250GB/s+ bandwidth with HBM1 at 500MHz, nearly ten times what is currently available, and it would seem a huge waste without more ROPs at the very least. At 14nm, sure, but not until then.
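(That 250GB/s+ figure checks out on a back-of-the-envelope basis if you assume two 1GB HBM1 stacks, each with a 1024-bit interface at a 1Gbps effective data rate - the APU configuration itself is hypothetical:)

    # Bandwidth sketch for a hypothetical 2GB HBM1 APU configuration.
    stacks = 2                # 2 x 1GB HBM1 stacks (assumption)
    bus_width_bits = 1024     # per-stack interface width
    data_rate_gbps = 1.0      # 500MHz DDR -> 1 Gbps per pin

    bandwidth_gbs = stacks * bus_width_bits * data_rate_gbps / 8
    print(f"{bandwidth_gbs:.0f} GB/s")  # 256 GB/s vs. ~34 GB/s for dual-channel DDR3-2133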
silverblue - Friday, July 3, 2015 - link
Fixing the peaks and troughs would improve the average frame rates a little, I imagine, but not by a large amount.
Drivers are a sore point, especially considering the CPU load in GRID Autosport for the Fury X. Could they not just contract some of this out? VCE was unsupported for a while, TrueAudio looks to be going the same way, and if NVIDIA's drivers are less demanding than AMD's, surely there must be something that can be done to improve the situation?
ajlueke - Thursday, July 2, 2015 - link
"do you think nvidia pays companies to optimize for their gpus and put less focus on amd gpus? especially in 'the way it is meant to be played' sponsored games?"I have noticed quite a few people spitting fire about this all over the interwebs these days. The state of PC ports in general is likely more to blame than anything NVidia is doing to sabotage AMD.
To differentiate the PC versions from their console counterparts and get people like us to buy $600 video cards, the PC versions need some visual upgrades. That can include anything from high-res textures to physics, particle and hair effects. That is what NVidia's GameWorks is all about. Most of the rumors surrounding NVidia deliberately de-optimizing a game at the expense of AMD revolve around HairWorks and The Witcher 3. HairWorks is based on tessellation, which NVidia GPUs excel at compared to their AMD counterparts. Now why didn't NVidia just employ TressFX, a similar hair rendering technology used in Tomb Raider that performed well on both vendors' cards?
TressFX is actually a DirectCompute-based technology co-developed by AMD. NVidia scaled back much of the DirectCompute functionality in their Maxwell 2 GPUs to avoid cannibalizing their own workstation GPU business. Workstation GPU margins tend to be extremely high, as businesses can afford to shell out more dough for hardware. The Titan Black was such a DirectCompute beast that many workstation users purchased it over much higher priced workstation cards. The Titan X and GTX 980 are now far less attractive options for workstations, but also less able to perform well using TressFX. The solution is to develop a technology using what your GPU does do well - tessellation - and get developers to use it. The decision was likely made purely for business reasons, and it only hurt AMD because tessellation was a weak point for their cards, although less so for the R9 Fury X.
The real problem here is likely shoddy PC ports in general. Big studio titles generally are developed for console first, and ported to PC later. In the previous console generation that meant having a group develop a title for the PowerPC based Xbox 360, the Cell based PS3, and then finally porting to x86 based PC systems often after the console titles had already launched.
With the shift to the new generation of consoles, both the Xbox One and Sony PS4 are AMD x86 based, meaning it should be extremely easy to port these games to similarly x86-based PC systems. However, Mortal Kombat X and Batman: Arkham Knight are two titles that recently had horrendous PC launches. In both cases the port was farmed out to a studio other than the primary studio working on the console version. The interesting part is that MKX was not a GameWorks title, while Arkham Knight was offered for free with 980 Ti video cards. I highly doubt NVidia would add GameWorks purely to screw over AMD when the result was that a major title they promoted with their flagship card doesn't even work. It is actually a huge embarrassment - both for NVidia, but more so for the studio handling the PC port. The new console era was supposed to be a golden age for PC ports, but instead it seems like an excuse for studios to farm the work out and devote even less time to the PC platform. A trend I hope doesn't continue.
looncraz - Thursday, July 2, 2015 - link
1. eDRAM takes up more space, uses more energy, and is slower than HBM.
2. HBM will make sense for GPUs/APUs, but not for use as system RAM.
3. Yes, almost a guarantee. But how long that will take is anybody's guess.
4. They don't "pay them" so to speak; they just have contractual restrictions during the game development phase that prevent AMD from getting an early enough and frequent enough snapshot of games to optimize their drivers in anticipation of the game's release. This makes nVidia look better in those games. The next hurdle is that GameWorks is intentionally designed to abuse nVidia's strengths over AMD and even their own older generation cards. Crysis 2's tessellation is the most blatant example.
Dribble - Monday, July 6, 2015 - link
"Crysis 2's tessellation is the most blatant example"No it wasn't. What you see in the 2D wireframe mode is completely different to what the 3D mode has to draw as it doesn't do the same culling. The whole thing was just another meaningless conspiracy theory.
mr_tawan - Friday, July 3, 2015 - link
> do you think nvidia pays companies to optimize for their gpus and put less focus on amd gpus? especially in 'the way it is meant to be played' sponsored games
I don't think many game developers check the device ID and lower the game's performance when it's not the sponsor's card. However, I think that through the developer relations program (or something like that), games with those logos tend to perform better with the respective GPU vendor, as the game was developed with that vendor in mind and with support from the vendor.
The game would be tested against the other vendors as well, but perhaps not as much as with the sponsor.
mindbomb - Thursday, July 2, 2015 - link
Hi, I'd like to point out why the Mantle performance was low. It was due to memory overcommitment lowering performance because of the 4GB of VRAM on the Fury X, not due to a driver bug (it's a low-level API anyway; there is not much for the driver to do). BF4's Mantle renderer needs a surprisingly high amount of VRAM for optimal performance. This is also why the 2GB Tonga struggles at 1080p.
YukaKun - Thursday, July 2, 2015 - link
Good to know you're better now, Ryan.
I really enjoyed the great length and depth of your review.
Cheers!
Socius - Thursday, July 2, 2015 - link
Curious as to why both Tom's Hardware and AnandTech were kind to the Fury X. Is AnandTech changing review style based on editorial requests from Tom's now? Because an extremely important point is stock overclock performance. And while the article mentions the tiny 4% performance boost the Fury X gets from overclocking, it doesn't show what the 980 Ti can do. Or even more importantly... that the standard GTX 980 can overclock 30-40% and come out ahead of the Fury X, leaving the Fury X a rather expensive piece of limited hardware concept at best. Also important to mention is that the video encoding test was pointless, as Nvidia has moved away from CUDA-accelerated video encoding in favour of NVENC hardware encoding. In fact a few drivers ago they fully disabled CUDA-accelerated encoding to promote the switchover.
YukaKun - Thursday, July 2, 2015 - link
Then you must have selective reading, because they do mention it. In particular, they say if they just got a 7% OC, then the card will perform basically the same, and they it did.
No need to do an OC to the 980 Ti in that scenario.
Plus, they also mention the Fury X is still locked for OC. Give MSI and Sapphire (and maybe AMD as well) time until they deliver on their promise of the Fury having better controls.
Cheers!
YukaKun - Thursday, July 2, 2015 - link
* and then it did *
Edit function when? :(
Cheers!
Socius - Thursday, July 2, 2015 - link
Again going back to the problem of "missing test data" in this review. Under an 11% GPU clock OC (the highest possible), which resulted in a net 5% FPS gain, the card hit nearly 400W (just the card, not total system) and 65C after a long session. Which means any more heat than this, and throttling comes into play even harder. This package is thermally restricted. That's why AMD went with an all-in-one cooler in the first place... because it wanted to clock it up as high as possible for consumers, but knew the design was a heat monster.
Outside of full custom loops, you won't be able to get much more out of this design even with fully unlocked voltage; your issue is still heat first. This is why it's important to show the standard GTX 980 OC'd compared to the Fury X OC'd, because that completely changes the value proposition for the Fury X. But both Tom's and AnandTech have been afraid to be harsh on AMD in their reviews.
chizow - Thursday, July 2, 2015 - link
Great point, it further backs the point myself and others have made that AMD already "overclocked" Fury X in an all-out attempt to beat the 980 Ti and came close to hitting the chip's thermal limits, necessitating the water cooler. We've seen in the past, especially with Hawaii, that lower operating temps = lower leakage = lower power draw under load, so it's a very real possibility they could not have hit these frequencies stably with the full chip without WC.
When you look at what we know about the air-cooled Fury, this is even more likely, as AMD's slides listed it as a 275W part, but it is cut down and/or clocked lower.
Socius - Thursday, July 2, 2015 - link
What people also overlook is the fact that the Fury X is using more power than the 980 Ti, for example, while benefiting from a reduction of 20W-30W from using HBM. So the actual power efficiency of the GPU is even lower than it appears.
chizow - Thursday, July 2, 2015 - link
Yep, another great point. What overhead they got from HBM was quickly gobbled up in Fiji's "overclock" to hit 980 Ti perf.
Ryan Smith - Thursday, July 2, 2015 - link
The Fury X does not throttle at 65C. 65C is when the card begins ramping up the fan speed. It would need to hit 75C to start throttling, which given the enormous capacity of the cooler would be very hard to achieve.
Zinabas - Friday, July 3, 2015 - link
Well that's just wrong; the chart clearly shows "Total System Power", which has a 415W - 388W = 27W power difference between the 980 Ti and Fury X. Reading is not your strong point.
A 275W card vs. a 250W card... a 25W difference - who knew math was so easy.
kn00tcn - Thursday, July 2, 2015 - link
Are you supposed to beat it up? Last time I checked, water cooling is rather expensive and limited. Are you forgetting there are 2 more air-cooled, lower-priced launches coming?
Socius - Thursday, July 2, 2015 - link
I wasn't saying to put it up against a water-cooled GTX 980. I was saying a stock reference-design GTX 980, when overclocked to the max, will be competitive with and possibly even beat the Fury X when it is overclocked to the max.
Also, the 2 air-cooled versions coming out will be using a cut-down die.
Ryan Smith - Thursday, July 2, 2015 - link
To be clear here, we do not coordinate with Tom's in any way. I have not talked to anyone at Tom's about their review, and honestly I haven't even had a chance to read it yet. The opinions you see here reflect the opinions of AnandTech and only AnandTech.
As for overclocking, I already answered this elsewhere, but we never do direct cross-vendor comparisons of OC'd cards. It's not scientifically sound, due to variance.
Finally, for video encoding, that test is meant to be representative of a high-end NLE, which will use GPU abilities to accelerate compositing and other effects, but will not use a fixed-function encoder like NVENC. When you're putting together broadcast quality video, you do not want to force encoding to take place in real time, as it may produce results worse than offline encoding.
mapesdhs - Friday, July 3, 2015 - link
The one plus side of doing roundup reviews of OC'd cards is that very quickly just about all available models *are* OC'd versions, and reference cards cost more, so such roundups are very useful for judging the performance, etc. of real shipping products as opposed to reference cards which nobody in their right mind would ever buy.
ruthan - Thursday, July 2, 2015 - link
From this review I get a feeling of something like: they are trying, they are poor, we have to be kind to them. Ryan wrote that the cooling is great, but I read a few other reviews where the cooling was criticized as noisy and inconsistent - I don't know if there's a bit of good heart here instead of facts, or if it differs card by card, but review samples are usually more polished - unless they come from some local IT shop.
AMD is a very big company with lots of people on huge salaries; for the sake of our society we should be cruel about such big companies' failures, and if they die, they will free up the space and other companies could emerge.
Murloc - Thursday, July 2, 2015 - link
No, because the barriers to entry in this market are just too huge, and handing a monopoly to nvidia would make such an entry even more difficult.
e36Jeff - Thursday, July 2, 2015 - link
Do you think that there are some more optimizations in the drivers that would increase the performance of the Fury X vs. the 980 Ti? I understand that it's a bit like peering into a crystal ball to know what kind of performance driver updates will bring, but I'm thinking your guesses would be more educated than most people's.
Ryan Smith - Thursday, July 2, 2015 - link
I am going to decline to peer into the crystal ball at this time.
cruzinbill - Thursday, July 2, 2015 - link
I still don't understand how these numbers are correct. The 290X performance is completely wrong on the Beyond Earth test. Where you are pulling only 86.3 avg fps on a standard 290X, I am pulling 110.44. I noticed the same when you showed the Star Swarm test before. I can supply proof if need be.
Ryan Smith - Thursday, July 2, 2015 - link
Are you using MSAA? We're not doing anything special here on Civ, and these numbers are consistent and repeatable.
cruzinbill - Thursday, July 2, 2015 - link
Yes, same MSAA level. If I disable MSAA I get 118.74 avg fps.
Oxford Guy - Friday, July 3, 2015 - link
Theirs is probably throttling.
Ryan Smith - Friday, July 3, 2015 - link
The 290X by its very nature throttles, which is why we also have the "uber" results in there. Those aren't throttling, and are consistent from run to run.
TheinsanegamerN - Saturday, July 4, 2015 - link
So you guys are using a stock 290X? That would make sense then. Third-party cooled models work much better and more consistently, and many have overclocks.
chizow - Sunday, July 5, 2015 - link
Yes, this goes back to AT's long-standing policy of using the reference cooler and stock clocks provided by the IHVs, going back to a dust-up over AnandTech using non-reference-cooled and overclocked EVGA cards. AMD fanboys got super butthurt over it, and now they reap the policy that they sowed.
The solution for AMD is to design a better cooler, or a chip that is adequately cooled by their reference coolers; it looks like they got the memo. Expect the air-cooled Fury and all the 300 series Rebrandeons to not run into these problems, as all will be custom/air-cooled from the outset.
FMinus - Tuesday, July 7, 2015 - link
It's not really realistic though. I can't get a reference AMD 280, 280X, 290, or 290X anywhere in Europe except maybe used, and I couldn't get those reference cards a month after their launch either. The same pretty much goes for nvidia, but a lot of high-end nvidia cards still use the blower design for some reason, so I can at least get a reference look-alike there - not so with AMD cards.
Which again makes the results a bit skewed if the performance of said cards is really that dependent on better cooling solutions, since better cooling solutions are all people can buy.
nandnandnand - Thursday, July 2, 2015 - link
"Depending on what you want to believe the name is either a throwback to AMD’s pre-Radeon (late 1990s) video card lineup, the ATI Rage Fury family. Alternatively, in Greek mythology the Furies were deities of vengeance who had an interesting relationship with the Greek Titans that is completely unsuitable for publication in an all-ages technical magazine, and as such the Fury name may be a dig at NVIDIA’s Titan branding."Alright, that's pretty funny.
Moonshot - Thursday, July 2, 2015 - link
Took you guys a while to get this review out, but now I can see why. It's very detailed. Though I am not impressed with the card itself; it is a decent effort by AMD, but the product feels unfinished, almost beta quality. No HDMI 2.0 is also a big downer, along with "only" 4GB of RAM.
Glenwing - Thursday, July 2, 2015 - link
Great review :) Been waiting eagerly for it; AnandTech as always is so much more detailed than other reviews.
In case you're interested though, I did pick out a few minor typos ;) Page numbers going by the URLs...
Page 11: "while they do not have a high-profile supercomputer to draw a name form"
Page 14: "and still holds “most punishing shooter” title in our benchmark suite" probably should be "holds the"
Page 19: The middle chart is labeled "Medum Quality"
Page 25: "AMD is either not exposing voltages in their drivers or our existing tools [...] does not know how to read the data"
Page 26: "True overclocking is going to have to involve BIOS modding, a risker and warranty-voiding strategy"
Page 27: "the R9 Fury X does have some advantages, that at least in comparing reference cards to reference cards, NVIDIA cannot touch" very minor, but I believe the comma should be after "that" rather than "advantages"
Page 27: "it is also $100 cheaper, and a more traditional air-cooled card design as well" I believe should be either "and has a [...] card design" or "and is a [...] card"
Cheers :) Sorry if I seem nitpicky. If anything, it means I read every word :P
Ryan Smith - Thursday, July 2, 2015 - link
"Sorry if I seem nitpicky."There is no need to apologize. Thank you for pointing those out. They have been fixed.
chizow - Thursday, July 2, 2015 - link
Ryan, on the Mantle discussion page:
"Mantle is essentially depreciated at this point, and while AMD isn’t going out of their way to break backwards compatibility they aren’t going to put resources into helping it either. The experiment that is Mantle has come to an end."
"Depreciated" should probably be deprecated when referring to software that still technically works but is no longer supported or recommended. Good discussion nonetheless, that paints a clearer picture of the current status of Mantle as well as some of the fine tuning that needed to be done per GPU/arch.
Also, just wondering if you still feel AMD launched and performed the run-up to Fury X exactly as they planned? It does sound like you're more in agreement with me that the 980 Ti really caught them off guard and changed their plans for the worse.
5150Joker - Thursday, July 2, 2015 - link
Good review, and for those that called [H]'s review somehow biased, well, here's AT, where the Fury X again gets its ass handed to it by the 980 Ti and Titan X.
meacupla - Thursday, July 2, 2015 - link
Yeah, this seems to be just an architectural problem. Fury X is properly faster than the 290X, but I guess you can only polish a turd so much.
8.9 billion transistors on Fury X vs. 8 billion on the 980 Ti. The 980 Ti is not even using all of those and it's still faster.
mchart - Thursday, July 2, 2015 - link
The problem is that a ton of those transistors are used for the memory interface with Fury. With the same number of ROPs, the Fury X is just really held back.
I think once they have access to a die shrink and can increase the ROPs and tweak some things, what they have with Fiji will be a monster. Because of their lack of resources compared to NVidia, they couldn't sit around and do what NVidia is doing, and instead had to spend their money designing something that will work better once they have the smaller process available.
Innokentij - Friday, July 3, 2015 - link
You need to be mad to think HardOCP is not a legit review site; they show you all the numbers you need to know, and over what time frame. I must admit I've added a lot of people on the forums to my 'biased and not worth paying attention to' list because of their bashing of that site.
Michael Bay - Saturday, July 4, 2015 - link
Nobody cares about what you have to admit, lunatic.
Innokentij - Sunday, July 5, 2015 - link
Don't post anything more on the internet until you've learned to be a grown-up, child.
crashtech - Thursday, July 2, 2015 - link
Thanks, Ryan. I wonder if there is any chance of doing a multi-GPU shootout between the Fury X and 980 Ti, since as you alluded to in your closing remarks, "4K is arguably still the domain of multi-GPU setups." There would be two countervailing effects in such a test: first, close proximity of cards will not adversely affect Fury X, but CPU bottlenecking could be exacerbated by having two GPUs to feed.
mchart - Thursday, July 2, 2015 - link
With the proper case setup, the close proximity of air-cooled cards like the 980 Ti isn't as big of a deal as it may seem, due to their cooler design drawing in air from the end of the card. If you have a fan blowing fresh air straight into the GPU's intake on the end of the card, you alleviate a lot of the proximity issues. Slanting the cards rather than keeping them at a 90-degree angle helps as well (see cases like the Alienware Area 51).
I'm not sure how relevant a CrossFire test of the Fury X is to most people, as many will not be able to use more than one Fury X due to the 120mm fan requirement.
I feel like a lot of people don't account for this issue as much as they should. Those running multi-GPU builds will now naturally gravitate more towards NVidia simply because they do not have room for more than one Fury X in their case.
mapesdhs - Friday, July 3, 2015 - link
This is one of the biggest problems with Fury X, yet a lot of reviews gloss over it way too much. Just fitting it into a normal case, which will likely already have a water cooler for the CPU, will be a problem for many.
Phartindust - Saturday, July 4, 2015 - link
I just don't understand the issue of finding a place for the rad. I'm using a Phantom 410 - a case that has been around for years. It has no fewer than 5 mounting locations for a 120mm fan/rad. I find it hard to believe that current cases are no longer able to accommodate two 120mm rads. Even a lot of mini-ITX cases can do this, like the BitFenix Phenom.
Ryan Smith - Friday, July 3, 2015 - link
At the moment the answer is no, simply due to a lack of time. There's too much other stuff we need to look at first, and I don't have a second Fury X card right now anyhow.
5150Joker - Thursday, July 2, 2015 - link
Ryan, any reason you didn't include OC results for the 980 Ti? I think that's a pretty big oversight because it would have shown just how far the 980 Ti gets ahead of Fury X with them both OC'd.
5150Joker - Thursday, July 2, 2015 - link
Never mind, saw your reply to chizow. I still think you guys should include OC vs. OC with a disclaimer that performance may vary a bit due to sample quality.
Death666Angel - Thursday, July 2, 2015 - link
Launch dates for the R9 290s in the first-page table are: "06/18/15"
Don't think so. ;-P
Frihed - Thursday, July 2, 2015 - link
While 4GB is "enough" for a single Fury X, as it lacks the raw performance to really go beyond that, do you think it will be the same for a 2x4GB CrossFire build? It seems we would need a little more to fully enjoy the performance gain provided by CrossFire.
Ryan Smith - Friday, July 3, 2015 - link
CrossFire will likely exacerbate the problem a bit. You need a few extra buffers for AFR, so that further eats into the memory pool.
chizow - Sunday, July 5, 2015 - link
Yep, and you're also more likely to use the extra GPU headroom to enable any sliders that were disabled to run adequately with just one GPU, crank up AA modes, etc. It's quite easy to surpass 4GB and even 6GB under these conditions, even at 1440p.
Just to throw this out here: I believe the current cooler might be used on the Fury X2 (MAXX?).
Why? Both the power delivery and cooling are over-engineered, so that might point to it being used on another card in the future. Why? Economy of scale - if AMD gets a lot of 500W coolers from Cooler Master, then they get them cheaper.
meacupla - Friday, July 3, 2015 - link
I think the cooler is over-engineered because AMD didn't want a repeat of the hot and loud 290/290X stock coolers, because they imagined someone with a crap case would buy one, and because even the weakest CLLC Cooler Master had was capable of handling a Fury X.
These 120mm CLLCs are far superior to heatpipe heatsinks on GPUs. It's the equivalent of slapping a 120mm CPU tower cooler onto your GPU. Standard GPU heatsinks, on the other hand, are on a similar performance level to low-profile CPU coolers.
You can probably imagine the issue with putting a 120mm tower cooler onto a GPU.
rs2 - Thursday, July 2, 2015 - link
Even at 1440p with all of the settings maxed (8xMSAA, post-processing on, Map set 4, x64 tessellation) I can get the 980 Ti to score below 300fps in TessMark. The Fury X numbers are underwhelming by comparison.
rs2 - Thursday, July 2, 2015 - link
can't*
Nagorak - Sunday, July 5, 2015 - link
TessMark... sounds like a really exciting game.
mac2j - Thursday, July 2, 2015 - link
I've always rooted for AMD in the CPU and GPU arenas, so I'm sad to see the writing so clearly etched on the wall now. They've all but surrendered in the CPU arena, and now a sub-par flagship GPU marketed at 4K gaming BUT without HDMI 2.0 support - really? I just hope whoever buys them can keep them intact, and we don't end up with a sell-off resulting in an Intel CPU monopoly and an Nvidia GPU monopoly.
althaz - Thursday, July 2, 2015 - link
Seems like a decent card - but not one that (as somebody who is brand-agnostic) I could justifiably buy. Hopefully it comes in with a price drop, or we see a factory OC version which bumps performance up another 10%.
I would rather buy an AMD card right now, purely because FreeSync monitors are totally happening and they aren't overly expensive compared to regular monitors.
On the other hand, even if this matched the GTX 980 Ti precisely in performance, the fact that it has only 4GB of VRAM is certainly an issue.
So whilst this product is impressive compared to its predecessors, overall, it's (IMO) mostly a disappointment.
Hopefully next year (I'm gunna need a new GFX card pretty badly by then) brings some more impressive cards.
mapesdhs - Friday, July 3, 2015 - link
People were saying 'hopefully' last year, and look where we are now.D. Lister - Friday, July 3, 2015 - link
@AMD
There, there. You did alright champ, you did alright. *pats on back* You went up against the reigning world champion, and still survived. Hopefully some Korean cellphone tycoons saw how hard you fought and may sponsor you for another shot at the title.
BillyHerrington - Friday, July 3, 2015 - link
So, AMD's terrible 1440p performance is because of a driver CPU bottleneck?
Do you think AMD will be able to fix this in a future driver release?
silverblue - Friday, July 3, 2015 - link
It's been a major issue for a while, and one that doesn't look to be solved anytime soon. Perhaps it's one of the reasons for developing Mantle in the first place...?
az060693 - Friday, July 3, 2015 - link
Kind of disappointed, in all honesty. No real OC capability despite the water cooler? So basically you have a card that costs the same as a 980 Ti but with worse performance, less VRAM, no HDMI 2.0, and a broken 4K decoder? Not really a very good prospect for AMD unless they're willing to cut prices again.
The R9 Nano will suffer the most from the lack of HDMI 2.0, as it's the card that could be used in a gaming HTPC. It's going to be competing directly with the mini GTX 970, and I really don't know if it can win that battle.
Will Robinson - Friday, July 3, 2015 - link
Whew! Now that's a video card and architecture review.
Mad props for all the work that involved, Ryan.
The current Fury X is a fine card and highlights a lot of the technology we are going to see on the next-gen 14/16nm cards. The CLLC and HBM are just great, and it will be very interesting to see how slick Nvidia can make their upcoming Pascal cards.
+1 to AT for this solid review.
chizow - Sunday, July 5, 2015 - link
Still waiting for that day of reckoning that you've promised for years, Will. Did it happen? :D
yannigr2 - Friday, July 3, 2015 - link
You haven't read a GPU review until you've read AnandTech's.
Thank you Ryan :)
HollyDOL - Friday, July 3, 2015 - link
Kudos to Ryan for the review...
Have to say though, Fury X disappointed me. I am not an AMD/ATI fan (ever since I spent a dreadful 2 months forcing my DirectX 9 semester project to run on an ATI card), but I had hoped for quite a bit more, if only for the guts to be an early adopter of HBM and to heat up the GPU arena a bit so nVidia tries harder.
Bateluer - Friday, July 3, 2015 - link
Did you experience the whine with the pump that others have mentioned? I don't think I saw anything in this article about it.
Innokentij - Friday, July 3, 2015 - link
This was only a problem on the review samples and maybe some early cards that leaked into the market; they changed the pump for the retail versions.
http://www.guru3d.com/news-story/amd-fixes-r9-fury...
chizow - Monday, July 6, 2015 - link
Incorrect, it's a widespread problem that AMD is trying to sweep under the rug for this initial batch of cards.
http://www.pcper.com/reviews/Graphics-Cards/Retail...
http://www.tomshardware.com/reviews/amd-radeon-r9-...
Ryan Smith - Friday, July 3, 2015 - link
The noise from the pump is not something I'd classify as a "whine", or annoying for that matter. It is a pump that rocks under load, but it doesn't have much room to ramp down at idle, so it's still putting out a bit of noise at idle relative to air-cooled cards.
chizow - Monday, July 6, 2015 - link
Did your sample have the colored sticker or the debossed black/black SM logo, Ryan?
Ryan Smith - Monday, July 6, 2015 - link
http://images.anandtech.com/doci/9390/OpenLid.jpg
Innokentij - Friday, July 3, 2015 - link
Could you please add min, average and max FPS to all game benchmarks? I love AnandTech, and as you showed in Shadow of Mordor, the average FPS is a bad way to look at performance when the averages are the same but the min FPS is worlds apart. That is why I only care about HardOCP and HardwareCanucks reviews, since they post that information. This is just not good enough for me; I hope you can accommodate those of us who want the whole picture.
Ryan Smith - Friday, July 3, 2015 - link
We include minimums where they make sense for a game. The problem is that minimums are unreliable, which is why we cannot include them for every single game.
Innokentij - Sunday, July 5, 2015 - link
Can you explain what you perceive as unreliable? I can only think of when the benchmark loads and/or transitions between scenes.
Ryan Smith - Sunday, July 5, 2015 - link
One run may have a minimum of 40fps and the next run 50fps, for example. Some games are very sensitive to what's going on behind the scenes.
Innokentij - Friday, July 3, 2015 - link
Also, why didn't you include the 700 and 900 series in the tessellation benchmarks? At least the 900 series should have shamed the inefficient GCN series and shown that it is at best at Kepler's level. Don't get me wrong, the review was okay, but it was like an AMD card: just 90% there, at the same price point.
Ryan Smith - Friday, July 3, 2015 - link
I'm not sure I follow. Those cards are accounted for in our TessMark benchmarks: http://www.anandtech.com/show/9390/the-amd-radeon-...
Innokentij - Sunday, July 5, 2015 - link
Don't know how I missed this, thanks a lot!
toyotabedzrock - Friday, July 3, 2015 - link
You know, I think you could fit a few more ads - some content is still visible.
That said, the interposer could be made in two or four smaller dies, with just the PHY fabbed on the interposer.
Oxford Guy - Friday, July 3, 2015 - link
"There is no getting around the fact that NVIDIA’s Maxwell 2 GPUs are very well done, very performant, and very efficient"Not the 970. 28 GB/s VRAM bandwidth with XOR contention is the antithesis of efficiency.
samer1970 - Friday, July 3, 2015 - link
If AMD sells this card for $550 they will win this round, granted.
Besides, this is the only card with high performance that fits in a small case.
As for the power, we have 700-watt SFX units from Silverstone, and that is enough for 2 cards in a small mATX case.
The pricing just isn't right, that's all. $650 is too much for this card to compete.
samer1970 - Friday, July 3, 2015 - link
AMD can still win this round by making a single card with 3 GPUs, given the space advantage that allows them to put 3 or 4 GPUs on a single board.
The only problem would be the power for a 4-GPU single card... so I guess they will stop at 3 GPUs on a single card.
Nvidia can't do that.
Ian Cutress - Friday, July 3, 2015 - link
If you have multiple GPU dies on a card, you still have to have a PCIe switch in order to route the PCIe 3.0 x16 input into the card. You can see that the dual-GPU card has a PLX8747 on it, which splits the PCIe 3.0 x16 lanes from the CPU into two lots of x16, one for each GPU (PLX chips do multiplexing with a FIFO buffer). Having three or four GPUs means you have to look into more expensive PCIe switches, like the PLX8764 or the PLX8780, then provide sufficient routing, which most likely adds PCB layers.
Oxford Guy - Friday, July 3, 2015 - link
You really should have made the point that, in order to get a similar load noise level out of the 980 Ti, you're looking at greater expense because the cooler will need to be upgraded. That extra expense could be quite significant, especially if buyers opt for a closed-loop cooler.
Oxford Guy - Friday, July 3, 2015 - link
The other point is that AMD had better offer this card without the CLC, so people who run water loops won't have to buy a fancy CLC that they're not going to use.
meacupla - Friday, July 3, 2015 - link
But then you would just get an air cooler that you're not going to use.
And I doubt there would be any price difference, because they'd get beefy air coolers.
Oxford Guy - Friday, July 3, 2015 - link
Sell the card bare.
Nagorak - Sunday, July 5, 2015 - link
That sounds like a recipe for disaster.
Oxford Guy - Sunday, July 5, 2015 - link
No, it sounds efficient. There is no sound reason to manufacture and sell a complex part that people aren't going to use. There is no sound reason to buy said part if one plans not to use it.
Only landfill owners and misanthropes would cheer this sort of business practice. GPUs should be offered bare as an option for watercooling folk.
Nagorak - Monday, July 6, 2015 - link
I think more waste would be generated from cards bought by 'tards who try running them with no heatsink than would be saved by omitting the cooler. Don't underestimate the number of people who know absolutely nothing about computers and who will make absolutely idiotic mistakes like that.
Oxford Guy - Thursday, July 9, 2015 - link
False dilemma fallacy, really. People buy OEM CPUs and install them in their machines all the time. Do they not put a cooler on the CPU because they're stupid? You'll need to come up with a better reason why my idea isn't a good one.
Archetype - Friday, July 3, 2015 - link
I have to wonder... if a 980 Ti owner were to go for water cooling... what would the total power consumption and price be then?
HollyDOL - Friday, July 3, 2015 - link
Hard to tell... when I moved to water, overall power consumption went down (water-cooled CPU and GPU)... but I run with a big radiator and pump (passive cooling, no fan on the radiator). As for the entry price, water cooling is more expensive in general... I doubt a 980 Ti could meet the same price level with a water loop, even one as small as the Fury X has...
meacupla - Friday, July 3, 2015 - link
Power consumption would not change much, because you're adding a pump and swapping fans.
A pump alone can be 3W and higher, although most are around 18W at full speed. 18W is good enough for a CPU+GPU+2xRAD loop. (Someone might want to correct me on this.)
The pumps you find in these CLLCs are around 3W to 4W, with the radiator fan being 2W on average. But those numbers only apply if they are running at full speed, which usually isn't necessary.
zodiacfml - Friday, July 3, 2015 - link
Love the work here! Thanks!
The R9 Nano is what Fiji truly is, but due to the costs of HBM, AMD wants to quickly recover those costs by overclocking the Nano and fitting it with a liquid cooler so that it competes with the 980 Ti.
I believe the R9 Nano will be cheaper, as it will have noticeably lower performance (especially below 4K) and it doesn't have the liquid cooler. It is the Fury cards that are binned, which makes more sense and is simpler.
This is just a small part of the big picture. HBM technology is what they need for their APUs, desperately. HBM is also the next step for more integration, built-in RAM.
nightbringer57 - Friday, July 3, 2015 - link
Well sadly, even if the 4GB limitation of HBM1 is somewhat enough for simple VRAM use, it will probably be way too tight as a shared main memory for CPU + GPU.
zodiacfml - Friday, July 3, 2015 - link
True, but their APUs don't compete in high-end desktops/laptops in terms of CPU, so they fall in the entry level and mid-range. With a 4GB APU, a design can leave out the DIMM slots for cost and size reduction. It can also be used as a high-end tablet CPU (probably on the next process node). At the mid-range, they might have 2GB APUs complemented with DIMM slots.
xxx5x - Friday, July 3, 2015 - link
I have high hopes for the Nano:
- cheaper than Fury X
- WC block for it
- OC-able to Fury X
This would make it one very small single-slot powerhouse.
Refuge - Friday, July 3, 2015 - link
I'm excited too, I just hope it doesn't make sourcing an appropriate PSU a nightmare.
zodiacfml - Friday, July 3, 2015 - link
Right. I just hope it doesn't approach the Fury cards in terms of price. I have a gut feeling that it will be significantly cheaper, taking a cue from the exclusion of the Fury branding.
Notmyusualid - Friday, July 3, 2015 - link
Very nice, good effort.But I'll be waiting for the next TSMC node, 16nm?
Lots of nice small cases to put this in though...
Refuge - Friday, July 3, 2015 - link
So is this a driver issue mostly? If so, how do you feel about their ability or likelihood of improving 1440p performance?
Ryan Smith - Friday, July 3, 2015 - link
At this point it would be a welcome surprise, but a surprise nonetheless.
jagadiesel1 - Saturday, July 4, 2015 - link
Ryan - what do you think are some of the issues in AMD getting good drivers out? It seems to be making a significant dent in their GPUs' overall performance.
dalewb - Friday, July 3, 2015 - link
Wow, I can't believe I read the entire article - whew, but that was great. I only understood about half of it, but I'm now that much more intelligent (I think). :)
LoccOtHaN - Friday, July 3, 2015 - link
OMG, it's a Monster :D I will have it in the near future ;-)
Now I can see that the drivers need tweaking -> we need a new Omega for Fury X; imagine +30-40% in every DX11.1 game, and DX12 in Windows 10. OMG, that's all... THX for the review.
Refuge - Friday, July 3, 2015 - link
That is a little hopeful, isn't it? :P
silverblue - Friday, July 3, 2015 - link
To the point of lunacy. It'd need to be a title that currently plays like a dog's dinner; everything else would be a few percent here and there.
loguerto - Friday, July 3, 2015 - link
Thank you Ryan, thank you so much for at least mentioning Vulkan alongside DX12.
royalcrown - Friday, July 3, 2015 - link
"Almost 7 years ago to this day, ANANDTECH formally announced their “FURY X review stratagy.":D J/K Ryan, no big deal, people get sick !
royalcrown - Friday, July 3, 2015 - link
Wide and slow makes it fast?
So basically HBM is the 40- or 80-pin ribbon cable of today, right Ryan? Do you see serial buses being abandoned for parallel again someday, in your opinion?
Refuge - Friday, July 3, 2015 - link
It almost feels like we go in cycles with that... doesn't it?
Ryan Smith - Friday, July 3, 2015 - link
We're definitely swinging back towards highly parallel buses at the moment.
http://www.anandtech.com/show/9266/amd-hbm-deep-di...
althaz - Friday, July 3, 2015 - link
Horses for courses. HBM is perfectly suited to graphics memory - wide is really the only thing you need; even the "slow" speed of HBM is beyond what's needed. Making the memory faster makes virtually no performance difference. The only reason it even operates as fast as it does is to get the bandwidth up (bandwidth is the product of speed and width).
Think of a hose - with the same total pressure, a thinner hose will have a higher "speed" (as the water coming out will have a faster velocity) than a wider hose, but when all you need to do is fill the bucket, a wider hose is obviously better - which is the case here.
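(Putting launch-spec numbers on "bandwidth is the product of speed and width" makes the trade-off concrete; a quick sketch:)

    def bandwidth_gbs(bus_width_bits, data_rate_gbps):
        """Peak memory bandwidth = bus width x per-pin data rate, converted to GB/s."""
        return bus_width_bits * data_rate_gbps / 8

    print(bandwidth_gbs(4096, 1.0))  # R9 Fury X: 4096-bit HBM1 at 1 Gbps  -> 512.0 GB/s
    print(bandwidth_gbs(384, 7.0))   # GTX 980 Ti: 384-bit GDDR5 at 7 Gbps -> 336.0 GB/s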
mdriftmeyer - Sunday, July 5, 2015 - link
Your attempt at a nozzle-pressure fluid dynamics analogy is solid, but if he cannot fathom that a wider bus allows for fewer round trips, then he's an idiot.
They have gone highly parallel because HBM is primarily created for mobile devices in the future.Shagatron - Friday, July 3, 2015 - link
I had to stop reading after the author used the term "reticle limit" to describe a 28nm chip. I really wish the "enthusiast press" wouldn't use real terms they don't understand.Chaser - Friday, July 3, 2015 - link
Oh yeah that invalidated the entire review. /facepalmStrychn9ne - Saturday, July 4, 2015 - link
Great review here! It was a good read going through all the technical details of the card I must say. The Fury X is an awesome card for sure. I am trying to wait for next gen to buy a new card as my 280X is holding it's own for now, but this thing makes it tempting not to wait. As for the performance, I expect it will perform better with the next driver release. The performance is more than fine even now despite the few losses it had in the benches. I suspect that AMD kind of rushed the driver out for this thing and didn't get enough time to polish it fully. The scaling down to lower resolutions kind of points that way for me anyways.Peichen - Saturday, July 4, 2015 - link
AMD/ATI, what a fail. Over the past 15 years I have only gone Nvidia twice for 6600GT and 9800GT but now I am using a GTX 980. Not a single mid-range/high-end card in AMD/ATI's line up is correctly priced. Lower price by 15-20% to take into account the power usage, poor driver and less features will make them more competitivejust4U - Saturday, July 4, 2015 - link
At the high end you "may" have a point... but what is the 960 bringing to the table against the 380? Not much... not much at all. How about the 970 vs the 390? Again... not much... and in CrossFire/SLI situations the 390 (in theory) should be one helluva bang-for-the-buck 4K setup.
There will be a market for the Fury X... and considering the effort they put into it, I don't believe it's going to get the 15-20% price drop you're hoping for.
TheinsanegamerN - Saturday, July 4, 2015 - link
Slightly better performance while pulling less power and putting out less heat, and in the 970's case, currently about $10 cheaper. Given that CrossFire is less reliable than SLI, why WOULD you buy an AMD card?
Oxford Guy - Saturday, July 4, 2015 - link
Maybe because people want decent performance above 3.5 GB of VRAM? Or they don't appreciate bait and switch, being lied to (ROP count, VRAM speed, nothing about the partitioning in the specs, cache size).
medi03 - Sunday, July 5, 2015 - link
FreeSync?
Built-in water cooling?
Disgust for nVidia's shitty business practices?
A brain?
chizow - Monday, July 6, 2015 - link
How do you feel about the business practice of sending out a card with faults that you claimed were fixed?
Or claims that you had the world's fastest GPU enabled by HBM?
Or claims/benches that your card was faster than 980Ti?
Or claims that your card was an Overclocker's Dream when it is anything but that and OCs 10% max?
A brain right? :)
sa365 - Tuesday, July 7, 2015 - link
How do you feel about the business practice of shipping faulty, cheating drivers that lower IQ regardless of what you set in game, so you can win/cheat in those same benchmarks? It's supposed to be apples to apples, not apples to mandarins.
How about we wait until Unwinder writes the software for voltage unlocks before we test overclocking - those darn fruits again, huh?
Nvidia will cheat their way through anything it seems.
It's pretty damning when you look at the screens side by side - no AF from Nvidia.
Margalus - Monday, July 6, 2015 - link
FreeSync? Not as good as G-Sync, and still not free. It takes similar hardware added to the monitor, just like G-Sync.
Built-in water cooling? Just something else to go wrong and be more expensive to repair, with the possibility of it ruining other computer components.
Disgust for NVidia's shitty business practices? What are those? Do you mean like not giving review samples of your cards to honest review sites because they told the truth about your cards, so now you are afraid they will tell the truth about your newest POS? Sounds like you should really hate AMD's shitty business practices.
TallestJon96 - Saturday, July 4, 2015 - link
This card is not the disappointment people make it out to be. One month ago this card would have been a MASSIVE success. What is strange to me is that they didn't reduce the price, even slightly, to compete with the new 980 Ti. I suspect it was to avoid a price war, but I would say at $600 this card is attractive, while at $650 you only really want it for the water cooling. I suspect the price will drop more quickly than the 980 Ti's.
mccoy3 - Saturday, July 4, 2015 - link
So it is as expensive as the 980 Ti while delivering less performance, and it requires water cooling. Once Nvidia settles on a TITAN Y with HBM, it's all over for the red guys.
just4U - Saturday, July 4, 2015 - link
Well, that would be great news for AMD, wouldn't it, since Nvidia would have to pay for the use of HBM in some form or another...
Oxford Guy - Saturday, July 4, 2015 - link
AMD could have released a hot leaf blower like the GTX 480 and chose not to.
chizow - Monday, July 6, 2015 - link
No, they couldn't have. Fury X is already a 275W card, and that's with the benefit of low-temperature leakage from the water cooler *AND* the benefit of a self-professed 15-20W TDP savings from HBM. That means that in order for Fury X to still fall 10% short of the 980 Ti, it is already using 25+20W, so 45W more power.
Their CUSTOM-cooled 7/8th-cut Fury is going to be 275W typical board power as well, and it's cut down, so yeah, the difference in functional-unit power is most likely going to be about the same as the difference in thermal leakage between water and custom air cooling. A hot leaf blower, especially one as poor as AMD's reference cooler, would only be able to cool a 6/8-cut Fiji or lower, but at that point you might as well get a Hawaii-based card.
Oxford Guy - Thursday, July 9, 2015 - link
Your posts don't even try to sound sane. I wrote about the GTX 480, which was designed to run hot and loud. Nvidia also couldn't release a fully-enabled chip.
Ignore the point about the low-grade cooler on the 480, which ran hot and was very loud.
Ignore the point about the card being set to run hot, which hurt performance per watt (see this article if you don't get it).
How much is Nvidia paying you to astroturf? Whatever it is, it's too much.
Margalus - Monday, July 6, 2015 - link
This AMD card pumps out more heat than any NVidia card. Just because it runs a tad cooler with water cooling doesn't mean the heat is not there. It's just removed faster with water cooling, but the heat is still generated, and the card will blow more hot air into the room than any NVidia card.
Oxford Guy - Friday, July 10, 2015 - link
If you can't afford AC then stick with something like a 750 Ti. Otherwise the extra heat is hardly a big deal.
zodiacfml - Saturday, July 4, 2015 - link
My excitement over HBM has subsided as I realized that it is too costly to be implemented in AMD's APUs even next year. Still, I hope they do it as soon as possible, even if it would mean HBM on a narrower bus.
jburns - Saturday, July 4, 2015 - link
Probably the best graphics card review I've ever read! Detailed and balanced... Thanks Ryan for an excellent review.
just4U - Saturday, July 4, 2015 - link
I thought it was great as well. It had a lot more meat to it than I was expecting. Ryan might have been late to the party, but he's getting more feedback than most other sites on his review, so that shows it was highly anticipated.
B3an - Saturday, July 4, 2015 - link
I don't understand why the Fury X doesn't perform better... Its specs are considerably better than a 290X/390X and its memory bandwidth is far higher than any other card out there... yet it still can't beat the 980 Ti, and it should also be faster than it already is relative to the 290X. It just doesn't make sense.
just4U - Saturday, July 4, 2015 - link
Early drivers, and perhaps the changeover to a new form of memory tech has a bit of a learning curve that isn't fully realized yet.
Oxford Guy - Saturday, July 4, 2015 - link
Perhaps DX11 is holding it back. As far as I understand it, Maxwell is more optimized for DX11 than AMD's cards are. AMD really should have sponsored a game engine or something so that there would have been a DX12 title available for benchmarkers at this card's launch.
dominopourous - Saturday, July 4, 2015 - link
Great stuff. Can we get benchmarks with these cards overclocked? I'm thinking the 980 Ti and the Titan X will scale much better with overclocking compared to the Fury X.
Mark_gb - Saturday, July 4, 2015 - link
Great review. With one exception.
Once again, the 400 amp number is tossed around as how much power the Fury X can handle. But think about that for one second. Even an EVGA SuperNOVA 1600 G2 power supply is extreme overkill for a system with a single Fury X in it, and its +12V rail only provides 133.3 amps.
That 400 AMP number is wrong. Very wrong. It should be 400 watts. Push 400 Amps into a Fury X and it most likely would literally explode. I would not want to be anywhere near that event.
AngelOfTheAbyss - Saturday, July 4, 2015 - link
The operating voltage of the Fury chip is probably around 1V, so 400A sounds correct (1V * 400A = 400W).
meacupla - Saturday, July 4, 2015 - link
Okay, see, it's not 12V * 400A = 4800W. It's 1V (or around 1V) * 400A = 400W.
4800W would trip most 115VAC circuit breakers, as that would be over 41A at 115VAC, before you even start accounting for conversion losses.
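A quick sketch of the arithmetic, for anyone following along; the ~1V core voltage is an assumed, illustrative value, while 275W is AMD's stated typical board power.

```python
# Sanity check of the power figures discussed above.
# The ~1V core voltage is an assumed, illustrative value.

core_voltage_v = 1.0     # assumed GPU core voltage
vrm_current_a  = 400.0   # the "400 amp" figure under discussion
board_power_w  = 275.0   # AMD's stated typical board power for Fury X
rail_voltage_v = 12.0    # PSU +12V rail

print(core_voltage_v * vrm_current_a)   # 400.0 -> ~400 W deliverable at ~1 V
print(board_power_w / rail_voltage_v)   # ~22.9 -> roughly 23 A drawn from the 12 V rail
```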
bugsy1339 - Saturday, July 4, 2015 - link
Anyone hear about Nvidia lowering their graphics quality to get a higher frame rate in reviews vs. Fury? The reference is the SemiAccurate forum, 7/3 ("Nvidia reduces IQ to boost performance on 980 Ti?").
sa365 - Sunday, July 5, 2015 - link
I too would like to know more re: bugsy1339's comment.
Have Nvidia been caught out with lower IQ levels forced in the driver?
mikato - Tuesday, July 7, 2015 - link
Wow, very interesting, thanks bugsy. I hope those guys at the various forums can work out the details, and maybe a reputable tech reviewer will take a look.
OrphanageExplosion - Saturday, July 4, 2015 - link
I'm still a bit perplexed about how AMD gets an absolute roasting for CrossFire frame-pacing - which only impacted a tiny number of users - while the sub-optimal DirectX 11 driver (which will affect everyone to varying extents in CPU-bound scenarios) doesn't get anything like the same level of attention.
I mean, AMD commands a niche when it comes to the value end of the market, but if you're combining a budget CPU with one of their value GPUs, chances are that in many games you're not going to see the same kind of performance you see from benchmarks carried out on mammoth i7 systems.
And here, we've reached a situation where not even the i7 benchmarking scenario can hide the impact of the driver on a $650 part, hence the poor 1440p performance (which is even worse at 1080p). Why invest all that R&D, time, effort and money into this mammoth piece of hardware and not improve the driver so we can actually see what it's capable of? Is AMD just sitting it out until DX12?
harrydr - Saturday, July 4, 2015 - link
With the black-screen problem on R9 graphics cards, it's not easy to support AMD.
Oxford Guy - Saturday, July 4, 2015 - link
Because lying to customers about VRAM performance, ROP count, and cache size is a far better way to conduct business.
Oh, and the 970's specs are still false on Nvidia's website. It claims 224 GB/s, but that is impossible because of the 28 GB/s partition and the XOR contention: the aggregate figure is only approachable if both partitions are hit at once, yet the more the slow partition is used, the more the faster partition is dragged down by that 28 GB/s sloth - a catch-22.
It's pretty amazing that Anandtech came out with a "Correcting the Specs" article but Nvidia is still claiming false numbers on their website.
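For readers who haven't followed the 970 memory saga, here's a rough sketch of why the 224 GB/s figure is theoretical; the per-controller numbers follow the widely reported corrected specs (eight 32-bit controllers at 7 Gbps, seven of them behind the 3.5 GB segment) and should be treated as an approximation, not an official statement.

```python
# Rough sketch of the GTX 970's segmented memory bandwidth.
# Per-controller figures follow the widely reported corrected specs;
# treat them as an approximation, not an official statement.

per_controller_gbs = 32 * 7.0 / 8          # 32-bit controller @ 7 Gbps = 28 GB/s
fast_segment_gbs = 7 * per_controller_gbs  # 3.5 GB segment: ~196 GB/s peak
slow_segment_gbs = 1 * per_controller_gbs  # 0.5 GB segment: ~28 GB/s peak

# 224 GB/s assumes both segments stream flat-out at the same time, but the
# slow segment shares a crossbar port with part of the fast segment, so in
# practice hitting it drags the fast segment below its own peak.
print(fast_segment_gbs, slow_segment_gbs, fast_segment_gbs + slow_segment_gbs)
```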
Peichen - Monday, July 6, 2015 - link
And yet the 970 is still faster. Nvidia is more efficient with resources than they let on.
Oxford Guy - Thursday, July 9, 2015 - link
The XOR contention and 28 GB/s sure is efficiency. If only the 8800 GT could have had VRAM that slow back in 2007.
Gunbuster - Saturday, July 4, 2015 - link
Came for the chizow, was not disappointed.
chizow - Monday, July 6, 2015 - link
:)
madwolfa - Saturday, July 4, 2015 - link
"Throw a couple of these into a Micro-ATX SFF PC, and it will be the PSU, not the video cards, that become your biggest concern."
I think the biggest concern here would be fitting a couple of 120mm radiators.
TheinsanegamerN - Saturday, July 4, 2015 - link
My current Micro-ATX case has room for dual 120mm rads and a 240mm rad. Plenty of room there.
TallestJon96 - Sunday, July 5, 2015 - link
This card and the 980 Ti mark two interesting milestones in my mind. First, this is the first time 1080p isn't even considered. Pretty cool to be at the point where 1080p is considered a bit of a low resolution for high-end cards.
Second, it's the point where we have single cards that can play games at 4K, with higher graphical settings and better performance than a PS4. So at this point, if a game is playable on a PS4, then it's playable at 4K.
It's great to see higher and higher resolutions.
XtAzY - Sunday, July 5, 2015 - link
Geez, these benchies are making my 580 look ancient.
MacGyver85 - Sunday, July 5, 2015 - link
"Idle power does not start things off especially well for the R9 Fury X, though it's not too poor either. The 82W at the wall is a distinct increase over NVIDIA's latest cards, and even the R9 290X. On the other hand the R9 Fury X has to run a CLLC rather than simple fans. Further complicating factors is the fact that the card idles at 300MHz for the core, but the memory doesn't idle at all. HBM is meant to have rather low power consumption under load versus GDDR5, but one wonders just how that compares at idle."
I'd like to see you guys post power consumption numbers with power to the pump cut at idle, to answer the questions you pose. I'm pretty sure the card is competitive without the pump running (but still with the fan, to keep the comparison equal). If not, it will give us more insight into what improvements AMD can make to HBM's power consumption in the future. But I'd be very surprised if they haven't dealt with that during the design phase. After all, power consumption is THE defining limit for graphics performance.
Oxford Guy - Sunday, July 5, 2015 - link
Idle power consumption isn't the defining limit. The article already said that the cooler keeps the temperature low while also keeping noise levels in check. The result of keeping the temperature low is that AMD can tune more aggressively for performance per watt.
Oxford Guy - Sunday, July 5, 2015 - link
This is a gaming card, not a card for casuals who spend most of their time with the GPU idling.
Oxford Guy - Sunday, July 5, 2015 - link
The other point, which wasn't really made in the article, is that the idle noise is higher - but consider how many GPUs exhaust their heat into the case. That means higher case fan noise, which could cancel out the idle noise difference. This card's radiator can be set to exhaust directly out of the case.
mdriftmeyer - Sunday, July 5, 2015 - link
It's an engineering card as much as it is a gaming card. It's a great solid-modeling card with OpenCL. The way AMD is building its driver foundation will pay off big in the next quarter.
Nagorak - Monday, July 6, 2015 - link
I don't know that I agree with that. Even people who game a lot probably use their computer for other things, and it sucks to be using more watts while idle. That being said, the increase is not a whole lot.
Oxford Guy - Thursday, July 9, 2015 - link
Gaming is a luxury activity. People who are really concerned about power usage would, at the very least, stick with a low-wattage GPU like a 750 Ti and turn down the quality settings. Or, if you really want to be green, don't do 3D gaming at all.
MacGyver85 - Wednesday, July 15, 2015 - link
That's not really true. I don't mind my graphics card pulling a lot of power while I'm gaming, but I want it to sip power when it's doing nothing. And since any card spends most of its time idling, idle power is actually very important (if not the most important factor) in overall (yearly) power consumption.
By the way, I never said that idle power consumption is the defining limit; I said power consumption is the defining limit. It's a given that any watt you save while idling is generally a watt of extra headroom when running at full power. The lower the baseline load, the more room for actual, functional (graphics) power consumption. And as it turns out, I was right in my assumption that the idle power consumption of the actual graphics card, minus the cooler pump, is competitive with nVidia's.
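As a rough illustration of how much an idle-power gap actually adds up to over a year, here's a back-of-the-envelope sketch; the wattage gap, daily hours, and electricity price are all assumed values, not measurements.

```python
# Back-of-the-envelope sketch: yearly cost of an idle-power gap.
# All inputs are assumptions for illustration, not measured values.

idle_delta_w  = 20      # assumed idle-power gap at the wall, in watts
hours_per_day = 8       # assumed hours per day spent at or near idle
price_per_kwh = 0.15    # assumed electricity price in $/kWh

kwh_per_year = idle_delta_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year, roughly ${cost_per_year:.2f}")  # ~58 kWh, ~$8.76
```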
nader_21007 - Sunday, July 5, 2015 - link
As an analyst, I guarantee AMD's success if they take the following simple steps:
1. Stop wasting money on R&D investments altogether, at once.
2. Employ a bunch of marketers like Chizow, N7, AMDesperate, ... to spread rumors and constantly bash the competition's best products.
3. Invest the saved money (the R&D money wasted on new tech like HBM, the low-level API Mantle, the premium water cooler, etc.) in hardware review sites to magnify your products' strengths and the competition's weaknesses.
(Note: consumers won't judge your product against the competition in practice; they just accept what they see on hardware review sites and forums.)
I gave this advice to some companies in the past, and believe me, one has the best CPU out there and the other makes the best GPU. Innovation is not the fruit of R&D, it's the fruit of MARKETING.
Please contact me for more details. Regards.
Oxford Guy - Sunday, July 5, 2015 - link
Astroturfing got Samsung smacked with a penalty, but a smart company would hire astroturfers who are good at disguising their bias, not obvious trolls.
SanX - Sunday, July 5, 2015 - link
AMD's only hope left is that a company with better lithography, like Samsung for example, buys it entirely. You're welcome, Samsung. I hope you will not forget my, as always, brilliant advice.
amro12 - Sunday, July 5, 2015 - link
Why no 970? 290? At least a 970 - it's better than that 290X up there...
Oxford Guy - Sunday, July 5, 2015 - link
Perhaps because the 970 should have been withdrawn from the market for fraud? It should have been relabeled the 965, and consumers who bought one should have been offered more than just a refund.
Innokentij - Monday, July 6, 2015 - link
For someone from Oxford, you seem to lack logical thinking.
Oxford Guy - Thursday, July 9, 2015 - link
I'm logical enough to see a comment with no substance to it.
chizow - Monday, July 6, 2015 - link
Of course this is nonsense. If the 970 had launched at its corrected specs, would you have a problem with its product placement? Of course not. But let's all act as if this is the first and last time a cut-down ASIC has been sold at a lower price - and in a better price:performance segment, no less!
Oxford Guy - Thursday, July 9, 2015 - link
Your post in no way rebuts what I wrote.
Hxx - Monday, July 6, 2015 - link
Right, because that 0.5 GB partition really hindered its performance, lol. Let's face it, the 970 is an excellent performer, with more VRAM than last gen's Nvidia top dog (780 Ti) and performance within 15% of Nvidia's top-tier GTX 980 for $200 less... what more is there to say?
Scali - Tuesday, July 7, 2015 - link
Even better, various vendors sell a short version of the GTX 970 (including Asus and Gigabyte, for example), so it can take on the Nano card directly as a good choice for a mini-ITX based HTPC.
And unlike the Nano, the 970 DOES have HDMI 2.0, so you can get 4K 60 Hz on your TV.
Oxford Guy - Thursday, July 9, 2015 - link
28 GB/s + XOR contention is fast performance indeed, at half the speed of a midrange card from 2007.
Gothmoth - Monday, July 6, 2015 - link
So, in short, another BULLDOZER... :-( After all the hype, not enough and too late.
I agree the card is not bad.. but after all the HYPE it IS a disappointment.
OC results are terrible... and AMD said it would be an overclocker's dream.
Add to that the many complaints I've read about the noisy water cooler (yes, for retail versions, not early preview versions).
iamserious - Monday, July 6, 2015 - link
It looks ugly. Lol
iamserious - Monday, July 6, 2015 - link
Also, I understand it's a little early, but I thought this card was supposed to blow the GTX 980 Ti out of the water with its new memory. The performance-to-price ratio is decent, but I was expecting a bigger jump in performance. Perhaps with driver updates things will change.
Scali - Tuesday, July 7, 2015 - link
Hmm, unless I missed it, I didn't see any mention of the fact that this card only supports DX12 feature level 12_0, whereas nVidia's 9xx series supports 12_1.
That, combined with the lack of HDMI 2.0 and the 4 GB limit, makes the Fury X a poor choice for the longer term. It is a dated architecture, pumped up to higher performance levels.
FMinus - Tuesday, July 7, 2015 - link
While it's beyond me why they skimped on HDMI 2.0, there are adapters if you really want to run this card on a TV. It's not such a huge drama though; these cards will mostly be driving DisplayPort monitors, so I'm more saddened by the missing DVI out.
Scali - Wednesday, July 8, 2015 - link
I think the reason there's no HDMI 2.0 is simple: they re-used their dated architecture and did not spend time developing new features such as HDMI 2.0 or 12_1 support.
With nVidia already having had this technology on the market for more than half a year, AMD is starting to fall behind. They were losing sales to nVidia, and their new offerings don't seem compelling enough to regain their lost market share, hence their profits will be limited, hence their investment in R&D for the next generation will be limited. Which is a problem, since they need to invest more just to get where nVidia already is.
It looks like they may be going down the same downward spiral as their CPU division.
sa365 - Tuesday, July 7, 2015 - link
Well, at least AMD aren't cheating by allowing the driver to remove AF regardless of what settings are selected in game, just so they can win benchmarks.
How about some fair, like-for-like benchmarking to see where these cards really stand.
FourEyedGeek - Tuesday, July 7, 2015 - link
As for the consoles having 8 GB of RAM, not only is that shared, but the OS uses 3 GB to 3.5 GB, meaning there is only a maximum of 5 GB for games on those consoles. A typical PC being used with this card will have 8 to 16 GB plus the 4 GB on the card, giving a total of 12 to 20 GB.
In all honesty, at 4K resolutions, how important is anti-aliasing to the eye? I can't imagine it being necessary at all, let alone 4xMSAA.
OrphanageExplosion - Wednesday, July 8, 2015 - link
Anti-aliasing is required for the same reason that the lack of AA still sticks out in 3D titles on an iPad, but in my experience with a 32-inch 4K Asus, post-process AA (SMAA, FXAA) does the job just fine.
Oxford Guy - Thursday, July 9, 2015 - link
"the OS uses 3 GB to 3.5 GB"
That's insane bloat.
Nerdsinc - Monday, July 13, 2015 - link
I sincerely hope the overclocking limitations are related to software; a $1000 card with liquid cooling ought to be able to pull higher clocks than that...
yhselp - Saturday, July 18, 2015 - link
Out of curiosity, is it really possible for an Xbox One/PlayStation 4 game to take up over 4GB of memory just for graphics, since just 5GB total are usable for games?
Refuge - Thursday, July 23, 2015 - link
When ported to PC, yes. That is because we usually get enhanced graphics settings that they do not.
PC ports are also less efficient because of low-budget porting, which just compounds the issue.
Computers have to be more powerful than their console counterparts in order to play equivalent games, due to sloppy coding and enhanced visual options.
ludikraut - Thursday, July 23, 2015 - link
Lack of HDMI 2.0 support totally kills the card for me. Who the heck wants to look at 4K at 30Hz? Guess I'll be sticking with my GTX 970 for a while.
l8r)
eodeot - Friday, July 31, 2015 - link
You didn't even mention AMD's poor power consumption while idling with multiple monitors or while playing back video of any kind.
For some reason AMD thinks that playing back a 240p YouTube video requires 3D clocks, and thus 3D power consumption, even if the video is paused.
AMD has failed to address this for the past 5 years, and you failed to mention it yet again. Nvidia fixed this long ago...
JJofLegend - Friday, October 14, 2016 - link
I recently got an AMD Fury X, but I'm running into an issue with my games. I've tried Battlefield 4, Crysis 3, Quantum Break, ReCore, and The Division, and they all have the same distortion. Any ideas, or anyone who can make suggestions? I don't know how to troubleshoot this.
Here is a screenshot of Crysis 3:
https://vjkc5g-ch3301.files.1drv.com/y3m_mcTTTddOj...