It's not that bad, and people won't hold any long-term grudge. Intel has launched several CPUs and chipsets over the years that actually failed to perform a certain advertised feature, but that feature was usually secondary and the product was still very good. It's all about what you get in real-world performance for the money you spend. Who cares how the 970 uses its RAM if the performance is what you expected? Non nVidia user.
Non nVidia user either, and I have been a Radeon user for the last 14 years, until I got to a 7870 GHz Edition - dead; replaced with a 280X - unstable out of the box; replaced with a 2nd 280X - extremely unstable and currently in RMA.
The way I see it, sadly this industry has gone to s**t; they need to pause for as long as they need, fix whatever they need to fix, and resume delivering solid products.
In agreement with you. This is shocking on nvidia's part. Remember nvidia and 'bumpgate'? They were responsible for thousands and thousands of defective GPU parts in laptops. It took a lawsuit and years and years to get them to take responsibility. Finally, last year, they had to settle and pay out damages for their defective hardware.
Nice to hear anandtech will be doing research on this. Believing anything nvidia says about the matter is foolishness personified without 3rd party testing and verification. If the issue is as substantial for performance as we've been seeing in tests not from nvidia, then they have every reason to try to hide and cloud the issue to avoid the cost of fixing it for end-users.
It's early yet and more investigations are needed on the flawed hardware in all GTX 970 cards.
Hear hear! Lost a laptop to a dead nvidia gpu in that fiasco. All following the forum investigations on AT and OCN have seen the shocking performance loss when the VRAM flaw of the 970 manifests. We're sure to learn more as further tests occur and the full extent of nvidia's mistake in the 970 design is demonstrated. Hopefully GTX 970 users won't need to pursue a recall on these faulty cards.
What's sad is HP, the largest OEM to use NVidia components, took the brunt of the customer fallout in those bad laptops. Sure, the cooling systems weren't great but they met NVidia's TDP requirements, which were flawed, and it's still unclear if TSMC even made the chips to QA. Most customers don't even know or care who NVidia is, they just know their HP laptop broke, millions of them, and HP lost customers.
I used to repair them all the time (reflow). It was that the chipset, which also carried the GPU, couldn't report its own overheating, so the fan would not ramp up until the CPU became warm (the fan would turn off if the CPU was at idle, which caused it for the most part), and they used a rubber foam thermal pad to dissipate heat from the chipset because of the amount of flex that the system displayed.
The Intel boards didn't have this problem because the chipset had a much weaker GPU inside of it, making it possible to dissipate all the heat through the motherboard itself.
Nvidia and HP both messed up.
HP didn't really mess up; the Nvidia chipsets in question didn't experience conditions any more adverse than the competing ATI and Intel components in comparable models. They kept everything within Nvidia-specified conditions and the chips still failed left and right. If Nvidia had demanded they be kept cooler, or spec'd them for lower clocks (thus reducing both current draw and heat), then it wouldn't have been such an unmitigated disaster.
Furthermore it was ONLY a matter of heat and power draw. Heat accelerated the problem greatly, but even discrete GPUs with the same bump/pad and underfill materials were not immune to the problem. There's a reason it's called bumpgate and not fangate or thermalgate. Ramping up the fans sooner/faster/etc and otherwise boosting cooling is a band-aid and does not address the underlying problem. If you knew about the issue you could apply a BIOS update for many models that ramps up the fan more often (and sooner) but that mostly just delayed the inevitable. I lost a laptop to it, myself.
Just search "nvidia bumpgate" and you'll see what I'm talking about. High-lead bumps connected to eutectic pads? Wrong underfill materials? Uneven power draw causing some bumps to carry too much current? Yes, yes, and yes. They later started repackaging chips with different materials used for bumps and underfill, and manufacturers were more aggressive with cooling, and failures were vastly reduced.
Ack, meant to say "Furthermore it wasn't ONLY a matter of heat and power draw". That whole paragraph doesn't make sense if it was only heat and power. Sorry.
Hmmm, they are not faulty cards. They just have a worse memory controller than the 980 has, and when you compare the prices of the cards, you can take the cheaper 970 or the much more expensive 980 with the better memory controller. This is more like a "not so good" feature than a bug, because it was done on purpose.
That's worse. They're 4GB cards sold as 4GB cards, not "4GB, terms and conditions may apply" cards. Between Keplers having too little RAM and this, I have way less faith that an NV card will hold onto its launch performance relative to other cards than I did half a year ago.
A small realistic assessment: most of the laptops I have seen fail have failed on the largest chip cracking its solder balls due to flexing of the mainboard. And most of these haven't had an nVidia chip in them!
So it's really a very common issue, nothing special. Was it more pronounced on the particular nVidia chip in question? Yes, in the regard that the cracking also happened from pure thermal cycling within specified thermal parameters. But I doubt it seriously affected laptop life, unless your laptop had a rigid magnesium body.
You have no idea how bad it was. Sorry. Google Nvidia Bumpgate. It was a HUGE issue. The failure rate was staggering, MOST of the ones with Nvidia MCPs that relied on the integrated graphics died within a couple years. It wasn't due to mainboard flexing. It was due to mismatched pad-bump materials, incorrect underfill selection (not firm enough), and uneven power draw.
It wasn't quite as pronounced on dedicated graphics but there were certain models that were prone to early failure. Heat wasn't truly the cause (as it occurred within expected/rated operational temp ranges) but it did exacerbate the issue.
The worst thing about bumpgate was that, as someone who had multiple videocard failures (particularly 7950GX2s, because they ran so hot anyway), everybody else insisted *I* must be at fault for the failures. Everybody insisted there was no way I was seeing multiple failures within months of each other without something else being wrong, like my PSU (nevermind that it was happening in multiple different computers). Especially the manufacturer ("there's nothing wrong with our cards") before the problem was admitted by nVidia.
They acknowledge that there is a performance problem, albeit slight - in the examples they chose to use. It isn't unreasonable to assume there might be edge cases where a program assumes that all 4GB is created equal and is more restricted by memory performance.
Overall I'd bet this will end up a complete non-issue. That said, the fact that Nvidia "hid" this deficiency (by not clearly disclosing it from the beginning) just gives the internet drama machine way too much fuel. Stupid move on Nvidia's part.
Slower processors do not scale evenly under higher amounts of stress compared to faster processors. Having a 3-5% performance difference under identical heavier loads can be accounted for by lots of things, not just a difference in vRAM speeds. These performance numbers are normal and what you would expect when comparing cards with these specs.
You're calling a 1-3% performance difference (within the margin of error in measurements) when comparing two products that differ in more than just memory configuration a performance problem? No, they do not acknowledge there is a performance problem; they even use those figures to back up the claim that there isn't one.
(There might be a performance problem under specific workloads, but their examples don't show it)
"Slight" performance difference. And they clearly admit it: "there is very little change in the performance ... when it is using the 0.5GB segment." For Nvidia PR-speak, that's pretty clear. The single digit difference also makes it a non-issue, assuming that's as bad as it gets.
It would be naive to assume that Nvidia chose examples that showed a worst-case scenario for the issue. How much of the slower 0.5GB segment was even in use? >3.5GB could mean 3.51GB, which of course would have next to no impact on performance. I'm not implying this is some huge issue for owners, just that I'd like to know more.
worse, >3.5gb could mean >4gb which would actually fit well with the -40% performance drops. going over 4gb would inevitably impact the performance heavily and not really show the impact of that last 0.5gb.
Why should they disclose it to consumers? From a marketing standpoint it would just cause confusion and doubt. It's a technical detail of the memory system which is of concern to developers. As far as consumers are concerned, the performance is what matters. If it were to really perform worse in situations for which they are selling the card to use, then a heads-up from them would be in order. It still wouldn't be a bug or a deficiency, it's a different card from the 980 and doesn't have to just be a 980 with some SMs disabled. However, they are claiming that there is no performance difference under normal usage. If that's the case, it's just a technical detail. Of course, one might use the card for something other than gaming, and although one would expect a lower level of support for the card being used for that purpose, the information should still be available if it makes a difference to the programming and performance expectations of the card. Such users represent less than 1% of people screaming about the issue. It's probably 45% AMD fanboys, 45% 970 owners worried their shiny new toy has a scratch, and 10% people who would just as soon be complaining if it were revealed that Ivory soap is actually only 99.43% pure.
i have noticed for a while that my new shiny toy has a scratch. i just never cared enough to try and polish it out.
games never consume more than 3.5gb of vram in my system, no matter what i do. granted, i am not on 4k, don't really force extreme aa or supersampling/dsr. today, there is only a handful of games that would need over 3.5gb vram usage at 1440p - mordor, ac:unity, fc4 (at least the last 2 of which are far from stable).
290(x) and 980 users have reported 4gb vram usage with mordor starting at 1080p. yesterday i actually tested a whole bunch of resolutions and settings - the only settings i could get more than 3.5gb were all maxed, rendering at 5k and aa on. then it went to 4gb (and probably wanted to go over if the card had more to give). this is suspicious at best.
They didn't acknowledge it was a 'problem', don't infer selectively. I don't think it's an issue either, though, and so far there's no evidence it was deliberately hidden. However, now that it's known the 970 design does differ in some way, for PR's sake it makes sense for them to explain in more detail how the 970 works.
None of which detracts from the fact that the 970 is a very good card, well priced, with excellent power/noise characteristics, as so many reviews stated. People are back-judging here, which is kinda lame.
It's not that simple. Finding a bug after releasing a processor is one thing. Having 0.5GB of RAM on a card that could lead to problems (frame time spikes, stuttering, performance issues in compute) is something totally different. The first is something that the manufacturer didn't want to happen. The second is a marketing decision.
Nvidia came out with an explanation that shows average frames per second. Let's not forget that based on average fps scores, the old CF was excellent and with no problems. Then people started measuring frame times and AMD had to completely change its drivers and abandon CF bridges.
I believe Nvidia is doing damage control. First it tries to buy time with this announcement, then probably they will use GeForce Experience to try to keep games from maxing out memory usage on a GTX 970. By the time it becomes a real problem with newer and more demanding games, new cards with new GPUs will be out, and in some cases people will be advising others who are searching for a good gaming monitor to get a G-Sync one to help with the stuttering from their GTX 970.
It's pretty frustrating and is sure making me wish I'd just switched a lightbulb for an LED and gotten a 290 like a smart human instead of the 970 I got. I'm probably going to be switching to an AMD card sooner than I expected when I bought the 970 after a decade with NV cards. I bought that thing to last, and it chokes on one of the most important parts of what determines how long it lasts. Unlike with Intel, NV has competition. Oh well, at least I'll probably lose the 970's driver problems.
Hardware implementation details are generally not shared with consumers. Hardware is much more complex than most people understand, and vendors always gloss over the details. Nvidia's claim is that it doesn't matter, and the tests look like that might be true, but who knows how it performs in other cases.
I have 2x Radeon 280X, and I haven't had problems with them. I did have to RMA one of the cards in less than a week (VRM exploded), but the replacement has been fine for over a year since.
Alright, I am admittedly a bit of an nVidia fanboy, but I really think this is a non-issue. The card DOES have 4GB of ram, and games ARE able to take advantage of all 4GB of that ram. Benchmarking software (which is typically shortsighted crap to begin with) reports the size of the main larger/faster partition of ram. It does not mean it is not there, and it does not mean that it goes unused, it just means that the software needs to be re-written to take the new architecture into account. The 512MB partition is slower than the main partition... but as far as I understand things, ram speed is rarely the bottleneck except in the highest end of cards (ie not this one), so there are likely few cases where it is a major issue. Never mind the fact that you don't really need more than 2GB of VRAM (today at least) unless you are pushing well above 1080p; in which case you ought to opt for higher end, and multiple higher end cards to deal with such a load.
It is a non-issue of a non-problem. But don't you worry, the writers at Tom's will get to the bottom of it. This is the closest thing we have had to excitement in hardware in quite a while, so journalists are craving for some kind of news or scandal to report.
The technical explanation of this is pretty simple. NVidia doesn't need to say anything further, other than WHY this really had to be implemented this way. But the way it works is simple.
Imagine you have a 4GB SSD and 3500MB is C: and 500MB is D:
You can still use all 4GB of it. The only scenario where you'd have a problem is if you have a 3600MB file; you couldn't actually store the file anywhere. But in games this never happens. Images, vertex/shader caches and various computational data are all small chunks. It'll span across the two partitions without any real penalty. There might even be an optimization to put certain low-priority data on the 500MB partition under stressful workloads, such as PhysX data...
But this does become a problem for, say, a CUDA program that uses a database (a single allocation), as that allocation will be capped at 3.5GB. This is an incredibly rare case and, admittedly, a poorly written program; the database should probably be broken up into 500MB chunks to prevent memory fragmentation.
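To make that chunking point concrete, here is a minimal sketch in C++ against the CUDA runtime API. It is illustrative only; the 512MB chunk size and the assumption that the driver can place smaller chunks wherever it likes are mine, not anything NVidia has documented for the 970.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t total   = 3600ull * 1024 * 1024; // ~3.6GB working set
    const size_t chunkSz =  512ull * 1024 * 1024; // 512MB pieces

    void* whole = nullptr;
    if (cudaMalloc(&whole, total) == cudaSuccess) {
        // One big contiguous allocation worked; nothing else to do here.
        cudaFree(whole);
        return 0;
    }

    // Fall back to several smaller chunks the driver is free to place
    // in either memory segment.
    std::vector<void*> chunks;
    for (size_t remaining = total; remaining > 0; ) {
        size_t sz = remaining < chunkSz ? remaining : chunkSz;
        void* p = nullptr;
        if (cudaMalloc(&p, sz) != cudaSuccess) break;
        chunks.push_back(p);
        remaining -= sz;
    }
    std::printf("allocated %zu chunks\n", chunks.size());
    for (void* p : chunks) cudaFree(p);
    return 0;
}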
"The only scenario you'd have a problem is if you have a 3600MB file" To be pedantic, it will happen if you have a 3500+MB OPERATION, e.g. if you have two 1200MB megatextures and you for some reason want to perform an operation that combines both into a third texture. That requires 3600MB total vRAM. This is an example (texture compression occurs, and that's a massive texture to be working on all at once), but there might be some very weird edge cases where operating on a lot of large files may start to dip into the lower-speed zone. I can't see it being an issue for games even at extreme render and texture resolutions though.
The memory is fully virtualized on the GPU, has been since GeForce6. It doesn't matter if you have a single drawcall, or single object hitting the barrier or multiple. I don't think any special provisions on the application end are necessary.
I think the issue is best handled by the driver, which already knows which programs are demanding and which aren't, because demanding programs are either on its whitelist, carry an NvOptimusEnablement token, or link NVAPI or CUDA; those are the rules set by nVidia Optimus, or they are manually specified by the user. The driver could then force the programs which are demanding to believe they have 3.5 GB and place them in the fast section exclusively, and place non-demanding programs such as DWM into the slow section first.
Given that DWM alone will happily eat a few hundred megs, we're really talking just about 200MB of actual performance-relevant difference. What's the big deal?
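For reference, the NvOptimusEnablement token mentioned above is just an exported global that a Windows application declares so the driver treats it as a "high performance" (i.e. demanding) program. A minimal sketch follows; note that the idea of the driver also using this hint to steer allocations into the fast 3.5GB segment is speculation from the comment, not a documented mechanism.

// Windows-only: exporting this symbol is the documented Optimus hint that
// asks the NVIDIA driver to run this process on the discrete GPU.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}

int main() {
    // Create the D3D/OpenGL device as usual; the driver reads the exported
    // token when the process loads, so no extra API calls are needed.
    return 0;
}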
Windows occupies around 200+MB of VRAM on that slower partition as well. Hitting the 4GB or 3.51GB marker is really not that hard to do, especially when gaming at 4K, which is well within reach on this card (due in part to the higher amount of VRAM).
The issue here is frame times. The stutter that occurs when crossing the magical VRAM barrier (i.e. swaps bigger than 3.5GB) is precisely the same as the stutter that occurs when using 2.1GB on a 2GB card. It hardly shows in frame rate, but does show in frame time. Basically we have a card here that handles like a 3.5GB VRAM-capped card, but is marketed as a 4GB one. It does not matter that in THE CURRENT STATE OF GAMING not many games utilize over 3.5GB. What matters is that the last 0.5GB does not perform like one would expect from GDDR5; it actually performs as if you didn't have it at all.
Superlatives such as 'an incredibly rare case' are worthless. If there is one edge case today, there will be dozens in a year's time as demands on VRAM increase further.
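A quick worked example of why fps averages hide this kind of thing (numbers invented purely for illustration):

59 frames x 16.7 ms + 1 frame x 100 ms = roughly 1085 ms for 60 frames -> about 55 fps average

yet that single 100 ms frame is a clearly visible hitch, which is exactly what frame-time measurements catch and fps averages smooth over.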
I am confused as to why Nvidia's statement isn't satisfactory in this case (consumer/performance-wise). Obviously someone technical might be curious about more details, but as far as the whole "is it actually 4GB or not" issue goes, it seems like that has been resolved.
Also, the following I feel is inaccurate: "Despite the outward appearance of identical memory subsystems, there is an important difference here that makes a 512MB partition of VRAM less performant or otherwise decoupled from the other 3.5GB."
Nowhere did Nvidia say that the .5GB of RAM is less performant. What they said is that the 3.5GB allocation has higher priority, which is completely logical, as you'd want the card to use that partition first. As for performance, their statement doesn't say anything either way; instead it just goes into those examples, which to my mind show that there is in fact no significant performance hit. Now, one could certainly assume that the reason they didn't specifically say that the .5GB allocation performs the same is that it does perform slower, but that's just an assumption.
At the end of the day, it seems to me that there has been no wrongdoing on Nvidia's part here, other than not going into a bunch of technical detail they usually don't go into in any case. They didn't lie about having 4GB of memory, and while the way that memory is used differs between the 980 and 970, it doesn't seem to affect performance.
But people HAVE reported performance issues. Don't you understand that this was HOW the issue was discovered in the first place? People have found with certain games, such as 'Middle-earth: Shadow of Mordor', that the moment the GPU dips into that 0.5GB, there is a big drop in performance! If there was no performance issue, do you think we would even be discussing this, or that so many people would be in uproar about it? This is a SERIOUS problem, and it's only the tip of the iceberg as more and more games need to utilise the full 4GB of RAM. GTA V is just around the corner, after all!
I think the issue was discovered when gpu monitoring tools weren't showing a full 4 GB allocation no matter what the settings, rather than by a performance drop. This is my experience as well, I top out at 3.5 GB on DA inquisition on 1440p.
But doesn't that mean that for DA: Inquisition, your card is effectively a 3.5GB card? Because the game is unaware of, or unable to use, the other memory partition? Or am I misunderstanding something?
The game shouldn't matter as the driver is between the game/OS and the card. OSDs and programs that show VRAM amounts report the card incorrectly and that is when/why the problem arose.
Could you please share a link where multiple 970 users are in an uproar about big drops in performance related to this issue... because if such is the case, it is certainly a moment of shame for Nvidia.
If this is a "SERIOUS" problem (that your budget card might suffer a 3% perf hit vs the absolute premium card that costs far more) in the MOST extreme performance issues than you must lead a charmed life indexed
Those who are most angry are the ones who need to believe that the 970 is nothing more than a declocked 980 for the "smart guys" and that the 980 is for suckers with too much money
Reality is that this statement, and these tests, show a minor difference between the two. Given the price difference it should be expected, sorry.
Or you could just buy a top-of-the-line AMD card for less than the NVIDIA 'budget' card and be happy without worrying about weird errata. Considering the 970 was sold as a declocked 980, it's pretty fair to expect a declocked 980.
Dude, it's not that at all. It's that I paid 400 plus tax Canadian for a card based on the specs: 64 ROPs and 2MB of L2 cache, with 4 gigs of RAM running at a memory bandwidth of 224 GB/s on a 256-bit bus, only to find out after 4 months that the specs I based my decision on were false... that's the point. It's like being told you are getting an i5 only to find out it's an i3 with HT. Anyone that doesn't see that is just stupid, please bend over for me.
I'm having issues as well. Only using 1080p, but with heavily modded Skyrim: 200+ mods and over 20GB of additional meshes and textures. When I stay in the <3500MB zone, I have smooth sailing with 45+ fps and no drops. As soon as the card hits 3500MB, I start getting the worst stuttering ever, constant drops to 0 fps with normal fps in between. Also, I never see more than 3550MB utilization, while my friend's 670 with 4GB can use up to 3900MB with no stutters, just lower fps in general.
If that is indeed the case then I apologize, but from the article above I didn't get that there was in fact a performance issue; all I saw was that diagnostic tools weren't reporting 4GB. If, like you say, there are indeed severe performance issues, then yeah, that's definitely a problem.
Plenty of people would be in an uproar because people like to complain. Fanboys also like to point out flaws in products they don't support. There hasn't been any assessment that shows a big drop in performance or frame times. People noticed it because they have OSD screens on their keyboards or programs showing VRAM use and noticed it was showing 3.5GB and not 4. There hasn't been proof of any performance issues.
Dude, when I bought this card I was under the impression that it had 64 ROPs and 2MB of L2 cache. I don't care if you, or anyone else, says the missing portion isn't needed or isn't impacting performance: it's the fact that I spent 400 Canadian based on specs issued by the vendor and its partners... it's like buying a 4-core CPU only to find out that it's really an i3 with HT.
That doesn't explain why the GTX 980 doesn't have the same issue. Nor why this issue only affects some people, or whether it's just that only a few people have tested their cards.
As much as I appreciate the quick and detailed response from Nvidia, this doesn't improve their reputation a single bit in my book. Nvidia has never been one to play it safe with memory buses, always preferring to use their "revolutionary" texture compression algorithms to make up for shortcomings on the VRAM side.
The GTX 550 Ti, GTX 660 and GTX 660 Ti come to mind when it comes to problems addressing VRAM. All of these cards featured a 192-bit GDDR5 interface, but they came in 1GB/2GB configs instead of the expected 768MB/1.5GB/3GB configs. After the reviews here briefly discussed the feasibility of such an implementation, it seemed to be completely forgotten in no time. Unsurprisingly, the GTX 660 was pretty disappointing. Hopefully Nvidia has learned their lesson and will take steps to improve their work in this regard.
No, we can't agree that the GTX970 is the best card for the money when I was able to buy a Gigabyte Radeon R9 290 for $259 Canadian and flash the bios on it for free so it runs at 1050MHz instead of 943MHz stock. The GTX970s were all $380+ Canadian, so no, the GTX970 is NOT the best bang for buck card in the mid-high end.
Now that nVidia has once again shown how willing it is to rip off its own customers, I am especially glad I stayed away from them.
And the R9 290 doesn't have significant problems when you use all of its RAM, as opposed to what some GTX970 owners are reporting.
They're both fairly minor problems, but I'd personally take "ATI's" (solvable) software issues over a suboptimal hardware design that might NOT be fixable with software.
1-3% drop in perf constitutes significant problems on a part that is already 15-20% slower, but also costs about 40% less than the 980? If anything I'd say that premium on the 980 is really showing its value, for anyone who cares that much about that last 1-3% bit of performance.
@anubis44, idk as I've tried to explain many times to you and others like Creig, performance is not the only factor that goes into a buying decision. Luckily for Nvidia fans however, that still mostly binds Nvidia to price against AMD based on performance alone.
In any case, I'd say the rest of the marketplace disagrees with you. Despite AMD slashing prices in Q4, they still got slaughtered in the marketplace, so while they measured up well against Nvidia in terms of price and performance, they still got killed going up against the 970. Because additional support and features are worth the premium for many, and the main reason the majority will go with Nvidia if price and performance are close.
That one was a bit different though, and the stutter was considerably less than with this GTX 970.
The GTX 660 had 0.5GB on a smaller bus if I remember correctly, but it did not handicap the card as much. You could use 1900MB and still have a playable game.
Actually, we got all the technical information we need: "and fewer crossbar resources to the memory system" is a very condensed but technical explanation. There likely won't be anything more.
GT200: http://www.realworldtech.com/gt200/10/ "Loads are then issued across a whole warp and sent over the intra-chip crossbar bus to the GDDR3 memory controller. Store instructions are handled in a similar manner, first addresses are calculated and then the stores are sent across the intra-chip crossbar to the ROP units and then to the GDDR3 memory controller."
From all that, in conjunction with NVidia's statement, we can conclude that there are likely fewer ports on the crossbar -> smaller effective bandwidth, as the SMXs have to wait for their memory access.
Nice find, and I agree, it's about as detailed as it is going to get without getting overly technical. In essence, the missing performance was probably always there for any culled/cut SKUs based on a specific ASIC. At some point, the cut functional units have to interface with some other functional unit, so if one end isn't there, it's only natural to assume the other side of the interconnect will go unutilized or underutilized as a result.
I hope some Consumer Protection Agency in the USA will investigate and if appropriate, impose a huge fine on NVIDIA. NVIDIA should be taught a harsh lesson so that they may learn to respect consumers more.
Honestly this post sums up pretty much everything wrong with the US today.
NVIDIA gave people a BUDGET part that outperforms anything else and is nearly as good as their far more expensive premium part, and people still search for excuses to sue. The card works AS ADVERTISED. It offers 4GB of vram and you get it.
In your world a segmented memory model is class-action and fine-worthy... Just amazing. Do you apply the same standard to whatever work it is you do?
224 GB/sec memory bandwidth, but that only counts for 3.5GB of the memory, not all 4GB.
So it's as if I buy a sports car with 300 BHP and a top speed of 250 km/h... but if you drive over 200 km/h the 300 BHP motor starts throttling, so you don't get the advertised 300 BHP but it cuts down to 75 BHP, jumping back and forth between 300 BHP and 75 BHP, and that's just a design flaw. Sorry...
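For what it's worth, a back-of-the-envelope check of where the 224 GB/s figure comes from, and what a split bus would imply. The 7-plus-1 controller division below is my assumption about how the segments might be wired, not something Nvidia has confirmed in this statement:

256 bit x 7 Gbps effective GDDR5 / 8 bits per byte = 224 GB/s for the card as a whole
224 bit (7 of 8 controllers) x 7 / 8 = 196 GB/s behind the 3.5GB segment
32 bit (1 controller) x 7 / 8 = 28 GB/s behind the 0.5GB segment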
I'd like to see FCAT run on a 980 and a 970 in a situation that consistently uses 4GB of VRAM or more. It's not the average fps, it's the hitching that happens when usage gets above 3.5GB of VRAM.
Say Shadows of Mordor at 1440p with the ultra texture pack and MSAA 4x/8x.
Will you be following up with Nvidia and inquiring about the behavior of other Maxwell GPUs (980m, 970m, 965m, 750)? Perhaps also how Kepler behaves as well?
It isn't just about overall performance, but about consistent performance. Does it cause stuttering or micro-pauses? I've seen SLI and CrossFire setups with end FPS showing 60FPS+, on par with where it should be, but it stuttered like it was running at 10-15FPS. Unbelievable. nVidia needs to provide the technical details on this, because we're not all a bunch of laymen sheep.
no it does not result in any stuttering... not that i noticed. and i have a 4k monitor and actually can make use of 4GB.
for me that sounds like a purely theoretical problem, based on low-level benchmark results, blown out of proportion by ATI fanboys and people who have nothing better to do than worry about a 1-3% performance difference.
someone please post here how i can make my 970 stutter with >3.5GB compared to 3GB vram used... i'm eager to test that.
Perhaps you should check out the post by nuoh_my_god, which is on the bottom of page 3 of comments as I type this. He reports stuttering and other issues, and explains under exactly what conditions it happens. Test away.
[On Shadow of Mordor, performance drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference. On Battlefield 4, the drop is 47% on GTX 980 and...]
I'm sorry Ryan, but that example has done more to rile up the neophytes than it has to shed light on any potential hardware problems of the 970. Case in point, from: http://www.anandtech.com/bench/product/1068?vs=133...
Perhaps some frame rate/variance tests conducted while the VRAM usage hovers roughly between 3.25 and 3.75GB would be useful for the proverbial digging.
While I agree that it is probably a non-issue for most use cases, the fact remains that it is false advertising. I just went over the nvidia website. They state 4GB of RAM with a bandwidth of 224 GB/s for both the 980 and the 970. Now it is clear that for technical reasons the 970 can only achieve this bandwidth with 3.5GB of its RAM. I'm not sure how anybody can claim this is NOT false or misleading advertising.
NVidia stepping on their d!@ks. It's a 4 gig card with 3.5 gigs of fast addressing, and 0.5 gig of indirect translation due to the missing SMs? I'm guessing there's a corner-case hardware/software bug that can result in 'up to'™ a 70% frame rate drop. Hopefully a firmware update can fix it, but the non-disclosure of the effect of the missing SMs leaves a sour taste.
The problem is very easily observable in CoD: Advanced Warfare.. play on max settings and see what happens when you use the grapple hook over long distances (i.e. a lot of texture loading within a very short time). I didn't know what it was and thought it was just poor optimization of the PC port but now it seems my new 970 GTX was the cause. Very annoying indeed.
Interesting discussion, thanks for the details Ryan. I've often wondered what cost these culled functional units exact on overall GPU performance. As enthusiasts we often try to pinpoint and isolate performance deltas with readily known variables like clockspeeds, SPs, ROPs, TMUs and memory bus, and we often see performance is not quite linear, or not quite as expected, based on these specs alone.
But what about the unknowns? I guess we have a better understanding now, although I am not sure it really matters. In the end, this may help explain some of the differences in performance for fully functional SKUs based on the same ASIC, ie. GTX 480 vs 580, GTX 780 vs. 780Ti. I guess we now understand that cutting SM modules and functional units carries additional costs, which is not surprising since those vias to the crossbar would also be severed, incurring performance penalties.
In the end it sounds pretty simple without overcomplicating things: the cheaper SKUs will incur performance penalties and will be slower, if you want full performance, pay for the fully performing ASIC.
Even happier I picked up the 980 over the 970 now, anyways!
It would be nice to have all the technical details of said GPUs, but that is not a realistic expectation. It would be bad for a company to detail all the special sauce, for competitive reasons. There are implications to this, but in general, if it does not cause software crashes and the performance difference is consistent across 970 GPUs and in relation to the 980, who can complain? The card performs on par with an R9 290X at way lower wattage. I stated a couple of "ifs", and it remains to be seen whether the card has any adverse impact relative to what Nvidia conveyed to the public.
Now we need to find those scenarios that cause hiccups on a 970 and not on a 980 or 960. If you're an informed customer, then you have the choice at that point not to buy the 970. The only concern left is that Nvidia wasn't upfront, and that will only matter if there is a true issue that software updates can't handle in the future.
I own two R9 290s. I think the Nvidia 900 series is great for the market. The 970 needs some special evaluation. At first glance this is a far better situation than AMD's first Phenom erratum on the 9600, which I did buy and regretted. That processor never gave me any issues; I just felt like I got a lesser piece of hardware for the money, especially when AMD released the fixed processors.
This article should be updated. Nvidia's issues were in fact more than one, more than two, maybe three... or four?
1. The full VRAM volume doesn't run at full speed
2. The GPU's L2 cache volume is also cut down
3. The number of real working ROPs is cut down
4...
zmeul - Saturday, January 24, 2015 - link
I'm no expert, but this sounds really bad for nVidia because it looks like the issue was known and knowingly hidden from the customersMonkeyPaw - Saturday, January 24, 2015 - link
It's not that bad, and people won't hold any long term grudge. Intel has launched several CPUs and chipsets over the years that have actually failed to perform a certain advertised feature, but that feature was usually secondary and the product was still very good. It's all about what you get in real world performance for the money you spend. Who cares how the 970 uses its RAM if the performance is what you expected.Non nVidia user.
zmeul - Saturday, January 24, 2015 - link
non nVidia user either, and I have been a Radeon user for the last 14 years until I got to 7870Ghz ED - dead; replaced with 280X - unstable out of the box; replaced with 2nd 280X - extremely unstable and currently in RMAthey way I see it, sadly this industry has gone to s**t; they need to pause for as long as they need, fix whatever they need to fix and resume delivery of solid products
insurrect2010 - Saturday, January 24, 2015 - link
In agreement with you.This is shocking on nvidia's part. Remember nvidia and 'bumpgate' ? Where they were responsible for thousands and thousands of defective gpu parts in laptops. That took a lawsuit and years and years to get them to take responsibility. Finally last year they had to settle and pay out damages for their defective hardware.
Nice to hear anandtech will be doing research on this. Believing anything nvidia says abou the matter is foolishness personified without 3rd party testing and verification. If the issue is as substantial for performance as we've been seeing in tests not from nvidia then they have every reason to try and hide and cloud the issue to avoid the cost of fixing it for end-users.
It's early yet and more investigations are needed on the flawed hardware in all GTX 970 cards.
Scratchpop - Saturday, January 24, 2015 - link
Hear hear! lost a laptop to a dead nvidia gpu in that fiqsco. all following the forum investigations on AT and OCN have seen the shocking performance loss when the VRAM flaw of the 970 manifests. we're sure to learn more as further tests occur and the full extent of nvidias mistake in 970 design is demonstrated. hopes are for gtx 970 users not needing to pursue a recall on these faulty cards.Samus - Sunday, January 25, 2015 - link
What's sad is HP, the largest OEM to use NVidia components, took the brunt of the customer fallout in those bad laptops. Sure, the cooling systems weren't great but they met NVidia's TDP requirements, which were flawed, and it's still unclear if TSMC even made the chips to QA. Most customers don't even know or care who NVidia is, they just know their HP laptop broke, millions of them, and HP lost customers.NeatOman - Sunday, January 25, 2015 - link
I used to repair them all the time (reflow) and it was a mic between the chip set that also carried the GPU not being able to report overheating so the fan would not ramp up, only when the CPU became warm (the fan would turn off if the CPU was at idle which caused it for the most part) and they used a rubber faom thermal pad to disapate heat from the chipset because of the amount of flex that the system displayed.The Intel boards didn't have this problem because the chipset had a much weaker GPU inside of it making it possible for disapate all the heat through the motherboard its self.
Nvidia and HP both messed up.
NeatOman - Sunday, January 25, 2015 - link
Sorry, I used text to speach on my phone lol. I meant to say *it was that the fan didn't respind to the chipset heating up that also carried the GPUAlexvrb - Sunday, January 25, 2015 - link
HP didn't really mess up, and the Nvidia chipsets in question didn't experience conditions more adverse than their models competing ATI and Intel components. They kept everything within Nvidia-specified conditions and they still failed left and right. If Nvidia had demanded they be kept cooler, or spec'd them for lower clocks (and thus reducing both current draw and heat) then it wouldn't have been such an unmitigated disaster.Furthermore it was ONLY a matter of heat and power draw. Heat accelerated the problem greatly, but even discrete GPUs with the same bump/pad and underfill materials were not immune to the problem. There's a reason it's called bumpgate and not fangate or thermalgate. Ramping up the fans sooner/faster/etc and otherwise boosting cooling is a band-aid and does not address the underlying problem. If you knew about the issue you could apply a BIOS update for many models that ramps up the fan more often (and sooner) but that mostly just delayed the inevitable. I lost a laptop to it, myself.
Just search "nvidia bumpgate" and you'll see what I'm talking about. High-lead bumps connected to eutectic pads? Wrong underfill materials? Uneven power draw causing some bumps to carry too much current? Yes, yes, and yes. They later started repackaging chips with different materials used for bumps and underfill, and manufacturers were more aggressive with cooling, and failures were vastly reduced.
Alexvrb - Sunday, January 25, 2015 - link
Ack, meant to say "Furthermore it wasn't ONLY a matter of head and power draw". That whole paragraph doesn't make sense if it was only heat and power. Sorry.haukionkannel - Sunday, January 25, 2015 - link
Hmmm they are not faulty cards. They just have worse memory controller than 980 has, and when you compare the price of the cards, you can take cheaper 970 or much more expensive 980 with better memory controller.This more like "a not so good" feature than a bug, because it has been done for purpose.
xthetenth - Monday, January 26, 2015 - link
That's worse. They're 4GB cards sold as 4GB cards, not 4GB cards terms and conditions may apply. Between Keplers having too little RAM and this I have way less faith that an NV card will hold onto its launch performance relative to other cards than I did half a year ago.Siana - Wednesday, January 28, 2015 - link
Small realistic assessments: most of the laptops i have seen fail have failed on the largest chip cracking its balls due to flexing of the mainboard. And most of these haven't had an nVidia in them!So it's really a very common issue, nothing special. Was it more pronounced on the particular nVidia in question? Yes, in the regard that the cracking also happened by pure thermal cycling within specified thermal parameters. But i doubt it seriously affected the laptop life, unless your laptop had a rigid magnesium body.
Alexvrb - Wednesday, January 28, 2015 - link
You have no idea how bad it was. Sorry. Google Nvidia Bumpgate. It was a HUGE issue. The failure rate was staggering, MOST of the ones with Nvidia MCPs that relied on the integrated graphics died within a couple years. It wasn't due to mainboard flexing. It was due to mismatched pad-bump materials, incorrect underfill selection (not firm enough), and uneven power draw.It wasn't quite as pronounced on dedicated graphics but there were certain models that were prone to early failure. Heat wasn't truly the cause (as it occurred within expected/rated operational temp ranges) but it did exacerbate the issue.
Ualdayan - Sunday, January 25, 2015 - link
The worst thing about bumpgate was, as someone who had multiple videocard failures (particularly 7950GX2s because they ran so hot anyway) everybody else insisted *I* must be at fault for the failures. Everybody insisted there was no way I was seeing multiple failures within months of each other without something else being wrong like my PSU (nevermind that it was happening in multiple different computers). Especially the manufacturer ("there's nothing wrong with our cards") before the problem was admitted by nVidia.GeorgeH - Saturday, January 24, 2015 - link
They acknowledge that there is a performance problem, albeit slight - in the examples they chose to use. It isn't unreasonable to assume there might be edge cases where a program assumes that all 4GB is created equal and is more memory performance restricted.Overall I'd bet this will end up a complete non-issue. That said, the fact that Nvidia "hid" this deficiency (by not clearly disclosing it from the beginning) just gives the internet drama machine way too much fuel. Stupid move on Nvidia's part.
CaedenV - Saturday, January 24, 2015 - link
Slower processors do not scale evenly under higher amounts of stress compared to faster processors. Having a 3-5% performance difference under identical heavier loads can be accounted for by lots of things, not just a difference in vRAM speeds. These are performance numbers are normal and what you would expect comparing these specs of cards.Gigaplex - Saturday, January 24, 2015 - link
You're calling a 1-3% performance difference (within margin of error in measurements) when comparing 2 products that differ in more than just memory configurations a performance problem? No, they do not acknowledge there is a performance problem, and even state, using those figures, to back up the claim that there isn't one.(There might be a performance problem under specific workloads, but their examples don't show it)
GeorgeH - Sunday, January 25, 2015 - link
"Slight" performance difference. And they clearly admit it: "there is very little change in the performance ... when it is using the 0.5GB segment." For Nvidia PR-speak, that's pretty clear. The single digit difference also makes it a non-issue, assuming that's as bad as it gets.Horza - Sunday, January 25, 2015 - link
It would be naive to assume that Nvidia chose examples that showed a worst-case scenario for the issue. How much of the slower 0.5GB segment was even in use? >3.5GB could mean 3.51GB which of course would have next to no impact on performance. I'm not implying this is some huge issue for owners just that I's like to know more.londiste - Monday, January 26, 2015 - link
worse, >3.5gb could mean >4gb which would actually fit well with the -40% performance drops. going over 4gb would inevitably impace the performance heavily and not really show the impact of that last 0.5gb.Yojimbo - Sunday, January 25, 2015 - link
Why should they disclose it to consumers? From a marketing standpoint it would just cause confusion and doubt. It's a technical detail of the memory system which is of concern to developers. As far as consumers are concerned, the performance is what matters. If it were to really perform worse in situations for which they are selling the card to use, then a heads-up from them would be in order. It still wouldn't be a bug or a deficiency, it's a different card from the 980 and doesn't have to just be a 980 with some SMs disabled. However, they are claiming that there is no performance difference under normal usage. If that's the case, it's just a technical detail. Of course, one might use the card for something other than gaming, and although one would expect a lower level of support for the card being used for that purpose, the information should still be available if it makes a difference to the programming and performance expectations of the card. Such users represent less than 1% of people screaming about the issue. It's probably 45% AMD fanboys, 45% 970 owners worried their shiny new toy has a scratch, and 10% people who would just as soon be complaining if it were revealed that Ivory soap is actually only 99.43% pure.londiste - Monday, January 26, 2015 - link
i have noticed for a while that my new shiny toy has a scratch. i just never cared enough to try and polish it out.games never consume more than 3.5gb or vram in my system, no matter what i do. granted, i am not on 4k, don't really force extreme aa or supersampling/dsr. today, there is only a handful of games that would need over 3.5gb vram usage at 1440p - mordor, ac:unity, fc4 (at least last 2 of which are far from stable).
290(x) and 980 users have reported 4gb vram usage with mordor starting at 1080p. yesterday i actually tested a whole bunch of resolutions and settings - the only settings i could get more than 3.5gb were all maxed, rendering at 5k and aa on. then it went to 4gb (and probably wanted to go over if the card had more to give). this is suspicious at best.
mapesdhs - Sunday, January 25, 2015 - link
They didn't acknowledge it was a 'problem', don't infer selectively. I don't think it's an
issue either though, and so far there's no evidence it was deliberately hidden. However,
now it's known the 970 design does differ in some way, for PR sakes it makes sense
for them to explain in more detail how the 970 works.
None of which though detracts from the fact that the 970 is a very good card, well
priced, with excellent power/noise characteristics, as so many reviews stated.
people are back-judging here, which is kinda lame.
Doesn't bother me though, I just bought a 980. ;)
Ian.
Ian.
mapesdhs - Sunday, January 25, 2015 - link
(will we ever be able to edit posts here??...)chizow - Monday, January 26, 2015 - link
No.No.
;)
yannigr2 - Sunday, January 25, 2015 - link
It's not that simple. Finding a bug after releasing a processor is one thing. Having 0.5GB RAM on a card that could lead to problems (frame time spikes, stuttering, performance issues in computing) is something totally different. The first is something that the manufacturer didn't wanted to happen. The second is a marketing decision.Nvidia come out with an explanation that shows average frame per second scores. Let's not forget that based on average fps scores, the old CF was excellent and with no problems. Then people started measuring frame times and AMD had to completely change it's drivers and abandon CF bridges.
I believe Nvidia is doing damage control. First it tries to buy time with this announcement, then probably they will use GeForce Experience to try to keep games from maxing out the memory usage on games with a GTX 970. Until it becomes a real problem with newer and more demanding games, new cards with new GPUs will be out and in some cases people will be advising others, who are searching for a good gaming monitor, to get a GSync one to help them with stuttering from their GTX 970.
xthetenth - Monday, January 26, 2015 - link
It's pretty frustrating and is sure making me wish I'd just switched a lightbulb for an LED and gotten a 290 like a smart human instead of the 970 I got. I'm probably going to be switching to an AMD card sooner than I expected when I bought the 970 after a decade with NV cards. I bought that thing to last, and it chokes on one of the most important parts of what determines how long it lasts. Unlike with Intel, NV has competition. Oh well, at least I'll probably lose the 970's driver problems.Flunk - Saturday, January 24, 2015 - link
Hardware implementation details are generally not shared with consumers. Hardware is much more complex than most people know enough about it to understand and they always gloss over the details. Nvidia's claim is that it doesn't matter and the tests look like that might be true but who knows how it performs in other cases.I have 2x Radeon 280x, and I haven't had problems with them. I did have the RMA one of the cards in less than a week (VRM exploded), but the replacement has been fine for over a year since.
CaedenV - Saturday, January 24, 2015 - link
Alright, I am admittedly a bit of an nVidia fanboy, but I really think this is a non-issue. The card DOES have 4GB of ram, and games ARE able to take advantage of all 4GB of that ram. Benchmarking software (which is typically shortsighted crap to begin with) reports the size of the main larger/faster partition of ram. It does not mean it is not there, and it does not mean that it goes unused, it just means that the software needs to be re-written to take the new architecture into account.The 512MB partition is slower than the main partition... but as far as I understand things, ram speed is rarely the bottleneck except in the highest end of cards (ie not this one), so there are likely few cases where it is a major issue. Never mind the fact that you don't really need more than 2GB of VRAM (today at least) unless you are pushing well above 1080p; in which case you ought to opt for higher end, and multiple higher end cards to deal with such a load.
It is a non-issue of a non-problem. But don't you worry, the writers at Tom's will get to the bottom of it. This is the closest thing we have had to excitement in hardware in quite a while, so journalists are craving for some kind of news or scandal to report.
Samus - Sunday, January 25, 2015 - link
The technical explanation of this is pretty simple. NVidia doesn't need to say anything further, other than WHY this really had to be implemented this way. But the way it works is simple.Imagine you have a 4GB SSD and 3500MB is C: and 500MB is D:
You can still use all 4GB of it. The only scenario you'd have a problem is if you have a 3600MB file. You couldn't actually store the file anywhere. But in games this never happens. Images, vector\shadder cache and various computational data are all small chunks. It'll span across the two partitions without any real penalty. There might even be an optimization to put certain low priority data on the 500MB partition under stressful workloads, such as PhysX data...
But this does become a problem, for example, a CUDA program that uses a database (a single file) as that file will be capped at 3.5GB. This is an incredibly rare case, and admittedly, a poorly written program as the database should probably be broken up in to 500MB chunks to prevent memory fragmentation.
olafgarten - Sunday, January 25, 2015 - link
I think I would be more worried that my SSD is smaller than my RAM!edzieba - Sunday, January 25, 2015 - link
"The only scenario you'd have a problem is if you have a 3600MB file"To be pedantic, it will happen if you have a 3500+MB OPERATION, e.g. if you have two 1200MB megatextures and you for some reason want to perform an operation that combines both into a third texture. That requires 3600MB total vRAM. This is an example (texture compression occurs, and that's a massive texture to be working on all at once), but there might be some very weird edge cases where operating on a lot of large files may start to dip into the lower-speed zone.
I can't see it being an issue for games even at extreme render and texture resolutions though.
Samus - Sunday, January 25, 2015 - link
It's important to point out that VRAM isn't a limiting factor in any graphics cards ability to render large chunks of data.Data that doesn't fit in VRAM simply get paged to system RAM, as they have for decades.
Friendly0Fire - Monday, January 26, 2015 - link
At a heavy performance hit which I doubt any game developer wants to trigger.Siana - Friday, January 30, 2015 - link
The memory is fully virtualized on the GPU, has been since GeForce6. It doesn't matter if you have a single drawcall, or single object hitting the barrier or multiple. I don't think any special provisions on the application end are necessary.I think the issue is best handled by the driver, which knows already which programs are demanding and which aren't, because demanding programs are either on its whitelist, or carry an NvOptimusEnablement token, or link NVAPI or CUDA. That's the rules set by nVidia Optimus, or are manually specified by the user. So then the driver could force the programs which are demanding to believe they have 3.5 GB and partition them into the fast section exclusively, and partition non-demanding programs such as DWM into the slow section first.
Given that DWM alone will happily eat a few hundred megs, we're really talking just about 200MB of actual performance-relevant difference. What's the big deal?
Vayra - Monday, January 26, 2015 - link
Windows occupies around 200+MB of VRAM on that slower partition as well. Hitting the 4GB or 3.51GB marker is really not that hard to do, especially when gaming @ 4K which is well within reach on this card (due to the higher amount of VRAM, in part).The issue here is frame times. The stutter that occurs when crossing the magical VRAM barrier (ie swaps bigger than 3.5GB) is precisely the same as the stutter that occurs when using 2.1GB on a 2GB card. It hardly shows in frame rate, but does show in frame time. Basically we have a card here that handles like a 3.5GB VRAM capped card, but is marketed as a 4GB one. It does not matter that at THE CURRENT STATE IN GAMING not many games utilize over 3.5GB. What matters is that the last 0,5GB does not perform like one would expect from GDDR5, it actually performs as if you didn't have it at all.
Superlatives such as 'an incredibly rare case' are worthless. If there is one edge case today, there will be dozens in a years' time as demands on VRAM increase further.
Lakku - Tuesday, February 3, 2015 - link
And do you have any proof to back up that claim?Gothmoth - Sunday, January 25, 2015 - link
it may sounds bad because(!) you have no clue....2late2die - Saturday, January 24, 2015 - link
I an confused about as to why Nvidia's statement isn't satisfactory in this case (consumer/performance wise). Obviously someone technical might be curious about more details, but as far as the whole "is it actually 4GB or not" issue goes, it seems like that has been resolved.Also, the following I feel is inaccurate:
"Despite the outward appearance of identical memory subsystems, there is an important difference here that makes a 512MB partition of VRAM less performant or otherwise decoupled from the other 3.5GB."
Nowhere did Nvidia say that the .5GB of RAM is less performant. What they said is that the 3.5GB allocation has higher priority, which is completely logical as you'd want the card to use that partition first. As for performance, their statement doesn't say anything either way; instead it just goes into those examples, which to my mind show that there is in fact no significant performance hit. Now, one could certainly assume that the reason they didn't specifically say that the .5GB allocation performs the same is that it does perform slower, but that's just an assumption.
At the end of the day, it seems to me that there has been no wrongdoing on Nvidia's part here other than not going into a bunch of technical detail that they usually don't go into anyway. They didn't lie about having 4GB of memory, and while the way that memory is used differs between the 980 and 970, it doesn't seem to affect performance.
MrPelican - Saturday, January 24, 2015 - link
But people HAVE reported performance issues. Don't you understand that this is HOW the issue was discovered in the first place? People have found that with certain games, such as 'Middle-earth: Shadow of Mordor', the moment the GPU dips into that 0.5GB there is a big drop in performance! If there was no performance issue, do you think we would even be discussing this, or that so many people would be in an uproar about it? This is a SERIOUS problem, and it's only the tip of the iceberg as more and more games need to utilise the full 4GB of RAM. GTA V is just around the corner after all!
maximumGPU - Sunday, January 25, 2015 - link
I think the issue was discovered when GPU monitoring tools weren't showing a full 4GB allocation no matter what the settings, rather than by a performance drop. This is my experience as well; I top out at 3.5GB in DA: Inquisition at 1440p.
andrewaggb - Monday, January 26, 2015 - link
But doesn't that mean that for DA: Inquisition your card is effectively a 3.5GB card? Because the game is unaware of, or unable to use, the other memory partition? Or am I misunderstanding something?
Lakku - Tuesday, February 3, 2015 - link
The game shouldn't matter, as the driver sits between the game/OS and the card. OSDs and programs that show VRAM amounts were reporting the card incorrectly, and that is when/why the problem arose.
D. Lister - Sunday, January 25, 2015 - link
Could you please share a link where multiple 970 users are in an uproar about big drops in performance related to this issue... because if such is the case, it is certainly a moment of shame for Nvidia.
Gothmoth - Sunday, January 25, 2015 - link
Nonsense... it was discovered because some tech geeks have too much time on their hands..
mlambert890 - Sunday, January 25, 2015 - link
If this is a "SERIOUS" problem (that your budget card might suffer a 3% perf hit vs the absolute premium card that costs far more) in the MOST extreme performance issues than you must lead a charmed life indexedThose who are most angry are the ones who need to believe that the 970 is nothing more than a declocked 980 for the "smart guys" and that the 980 is for suckers with too much money
Reality is that this statement, and these tests, show a minor difference between the two. Given the price difference it should be expected, sorry.
xthetenth - Monday, January 26, 2015 - link
Or you could just buy a top-of-the-line AMD card for less than the NVIDIA 'budget' card and be happy without worrying about weird errata. Considering the 970 was sold as a declocked 980, it's pretty fair to expect it to behave like one.
Friendly0Fire - Monday, January 26, 2015 - link
Um, where was the 970 ever sold as a "declocked" 980?
Also, AMD may be cheaper, but their power characteristics and drivers are much worse than Nvidia's. You're applying selective bias here.
dcoca - Wednesday, January 28, 2015 - link
Dude, it's not that at all. It's that I paid 400 plus tax Canadian for a card based on the specs: 64 ROPs and 2MB of L2 cache, with 4GB of RAM running at a memory bandwidth of 224GB/s on a 256-bit bus, only to find out after 4 months that the specs I based my decision on were false.... That's the point. It's like being told you are getting an i5 only to find out it's an i3 with HT.. anyone that doesn't see that is just stupid, please bend over for me.
nuoh_my_god - Sunday, January 25, 2015 - link
I'm having issues as well. Only using 1080p, but with a heavily modded Skyrim: 200+ mods and over 20GB of additional meshes and textures. When I stay in the <3500MB zone I have smooth sailing with 45+ fps, no drops. As soon as the card hits 3500MB I start getting the worst stuttering ever, constant drops to 0fps with normal fps in between. I also never see more than 3550MB utilization, while my friend's 670 with 4GB can use up to 3900MB with no stutters, just lower fps in general.
2late2die - Monday, January 26, 2015 - link
If that is indeed the case then I apologize, but from the article above I didn't get that there was in fact a performance issue; all I saw was that diagnostic tools weren't reporting 4GB. If, like you say, there are indeed severe performance issues then yeah, that's definitely a problem.
Lakku - Tuesday, February 3, 2015 - link
Plenty of people would be in an uproar because people like to complain. Fanboys also like to point out flaws in products they don't support. There hasn't been any assessment that shows a big drop in performance or frame times. People noticed it because they have OSD screens on their keyboards or programs showing VRAM use and noticed it was showing 3.5GB and not 4. There hasn't been proof of any performance issues.
dcoca - Wednesday, January 28, 2015 - link
Dude, when I bought this card I was under the impression that it had 64 ROPs and 2MB of L2 cache. I don't care if you, or anyone else, says the missing part isn't needed or doesn't impact performance: it's the fact that I spent 400 Canadian based on specs issued by the vendor and its partners.... It's like buying a 4-core CPU only to find out it's really an i3 with HT....
MikeMurphy - Saturday, January 24, 2015 - link
I'm likely wrong, but I can't help but think the Maxwell GPUs read and write 32-bit addresses...
tuxfool - Saturday, January 24, 2015 - link
That doesn't explain why the GTX 980 doesn't have the same issue. Nor why this issue only affects some people. Or have only a few people tested their cards?
PernusBernus - Saturday, January 24, 2015 - link
Interesting... I have a 980 and I get the same drop in bandwidth: http://pastebin.com/UqDVxGXL
PernusBernus - Saturday, January 24, 2015 - link
Just to clarify, I used the benchmark linked from Lazygamer's coverage of the same apparent issue: http://www.lazygamer.net/general-news/nvidias-gtx9...
PernusBernus - Saturday, January 24, 2015 - link
Never mind, just realised that the last 300MB or so of VRAM was reserved by the system...
tabascosauz - Saturday, January 24, 2015 - link
As much as I appreciate the quick and detailed response from Nvidia, this doesn't improve their reputation a single bit in my book. Nvidia has never been one to play it safe with memory buses, always preferring to use their "revolutionary" texture compression algorithms to make up for shortcomings on the VRAM side.
The GTX 550 Ti, GTX 660 and GTX 660 Ti come to mind when it comes to problems addressing VRAM. All of these cards featured a 192-bit GDDR5 interface, but came in 1GB/2GB configs instead of the expected 768MB/1.5GB/3GB configs. After the reviews here briefly discussed the feasibility of such an implementation, it seemed to be completely forgotten in no time. Unsurprisingly, the GTX 660 was pretty disappointing. Hopefully Nvidia has learned their lesson and will take steps to improve their work in this regard.
Gothmoth - Sunday, January 25, 2015 - link
Can we agree that the GTX 970 is the best card for the money at the moment... so what? In some cases you have a 1-3% drop in performance... get a life, guys.
Should I buy a crappy Radeon with crappy OpenGL drivers instead?
no thanks....
anubis44 - Sunday, January 25, 2015 - link
No, we can't agree that the GTX 970 is the best card for the money when I was able to buy a Gigabyte Radeon R9 290 for $259 Canadian and flash the BIOS on it for free so it runs at 1050MHz instead of the stock 943MHz. The GTX 970s were all $380+ Canadian, so no, the GTX 970 is NOT the best bang-for-buck card in the mid-high end.
Now that nVidia has once again shown how willing it is to rip off its own customers, I am especially glad I stayed away from them.
Gothmoth - Sunday, January 25, 2015 - link
I don't live in Canada, so I don't care about your prices. Here the Nvidia card is the better bang for the buck......
Plus, I have working OpenGL drivers... not the faulty ATI ones....
Black Obsidian - Sunday, January 25, 2015 - link
And the R9 290 doesn't have significant problems when you use all of its RAM, as opposed to what some GTX 970 owners are reporting.
They're both fairly minor problems, but I'd personally take "ATI's" (solvable) software issues over a suboptimal hardware design that might NOT be fixable with software.
chizow - Monday, January 26, 2015 - link
A 1-3% drop in perf constitutes significant problems on a part that is already 15-20% slower, but also costs about 40% less than the 980? If anything I'd say the premium on the 980 is really showing its value, for anyone who cares that much about that last 1-3% of performance.
Pork@III - Monday, January 26, 2015 - link
It's just that the 980 is priced too high. The price of the 970 is close to normal. We need a 980 Ti to push down the price of all the ordinary 980s.
chizow - Monday, January 26, 2015 - link
@anubis44, I don't know; as I've tried to explain many times to you and others like Creig, performance is not the only factor that goes into a buying decision. Luckily for Nvidia fans, however, that still mostly binds Nvidia to price against AMD based on performance alone.
In any case, I'd say the rest of the marketplace disagrees with you. Despite AMD slashing prices in Q4, they still got slaughtered in the marketplace; so while they measured up well against Nvidia in terms of price and performance, they still got killed going up against the 970. That's because additional support and features are worth the premium for many, and they're the main reason the majority will go with Nvidia if price and performance are close.
Vayra - Monday, January 26, 2015 - link
That one was a bit different though, and the stutter was considerably less than with this GTX 970. The GTX 660 had 0.5GB on a smaller bus if I remember correctly, but it did not handicap the card as much. You could use 1900MB and still have a playable game.
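For reference, the 2GB GTX 660's arrangement is usually described as 1.5GB interleaved across all three 64-bit controllers plus 0.5GB hanging off a single controller, so the rough peak numbers (assuming its 6Gbps GDDR5) work out to:
6.0 Gbps x 192 bits / 8 = ~144 GB/s for the interleaved 1.5GB
6.0 Gbps x 64 bits / 8 = ~48 GB/s for the last 0.5GB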
Klimax - Sunday, January 25, 2015 - link
Actually, we got all the technical information we need: "and fewer crossbar resources to the memory system" is a very condensed but technical explanation. There likely won't be anything more.
See:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=...
"Butterfly networks offer minimal hop count for a given
router radix while having no path diversity and requiring very
long wires. A crossbar interconnect can be seen as a 1-stage
butterfly and scales quadratically in area as the number of
ports increase."
http://web.eecs.umich.edu/~twenisch/papers/ispass1...
GT200:
http://www.realworldtech.com/gt200/10/
"Loads are then issued across a whole warp and sent over the intra-chip crossbar bus to the GDDR3 memory controller. Store instructions are handled in a similar manner, first addresses are calculated and then the stores are sent across the intra-chip crossbar to the ROP units and then to the GDDR3 memory controller."
From all that, in conjunction with NVidia's statement, we can conclude that there are likely fewer ports on the crossbar -> smaller effective bandwidth, as the SMs have to wait for their memory accesses.
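To put a number on the "scales quadratically" point from the quoted paper: a full crossbar needs roughly one crosspoint per input/output pair, so, purely as an illustration, an 8x8 crossbar needs about 8 x 8 = 64 crosspoints while a 7x7 one needs about 7 x 7 = 49. Trimming ports on a salvage die is therefore an obvious place to save area, at the cost of effective bandwidth for whatever loses its port. (The 8 and 7 here are illustrative counts, not GM204's actual port configuration.)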
chizow - Monday, January 26, 2015 - link
Nice find, and I agree, it's about as detailed as it is going to get without getting overly technical. In essence, this kind of missing performance was probably always there for any culled/cut SKUs based on a specific ASIC. At some point the cut functional units are going to have to interface with some other functional unit, so if one end isn't there, it's only natural to assume the other end of the interconnect will go unutilized or underutilized as a result.
D. Lister - Sunday, January 25, 2015 - link
It isn't really a flawed design, just a seemingly inelegant one. I would still take it over either of the 6GB Titans.
Achaios - Sunday, January 25, 2015 - link
I hope some consumer protection agency in the USA will investigate and, if appropriate, impose a huge fine on NVIDIA. NVIDIA should be taught a harsh lesson so that they may learn to respect consumers more.
mlambert890 - Sunday, January 25, 2015 - link
Honestly, this post sums up pretty much everything wrong with the US today.
NVIDIA gave people a BUDGET part that outperforms anything else and is nearly as good as their far more expensive premium part, and people still search for excuses to sue.
The card works AS ADVERTISED. It offers 4GB of VRAM and you get it.
In your world a segmented memory model is class-action and fine worthy... just amazing. Do you apply the same standard to whatever work it is you do?
Gothmoth - Sunday, January 25, 2015 - link
Most complainers are kids... they don't work at all. That's why they only care about gaming performance.
RogB - Monday, January 26, 2015 - link
"Memory bandwidth: 224 GB/sec." But this only applies to 3.5GB of the memory, not all 4GB.
So it's as if I buy a sports car with 300 BHP and a top speed of 250 km/h... but if you drive over 200 km/h the 300 BHP motor starts throttling, so you don't get the advertised 300 BHP; it cuts down to 75 BHP, jumping back and forth between 300 BHP and 75 BHP. But apparently that's just how it was designed.
Sorry…
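For what it's worth, the advertised figure falls straight out of the memory clock and bus width, and how much of it the last 0.5GB actually gets is exactly what NVIDIA's statement leaves open. The second line below is therefore an illustrative assumption (a single 32-bit controller serving that segment), not a disclosed spec:
7.0 Gbps x 256 bits / 8 = 224 GB/s (advertised, full bus)
7.0 Gbps x 32 bits / 8 = 28 GB/s (if the slow 0.5GB sat behind one 32-bit controller)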
BuddyRich - Sunday, January 25, 2015 - link
I'd like to see FCAT run on a 980 and a 970 in a situation that consistently uses 4GB of VRAM or more. It's not the average fps, it's the hitching that happens when it gets above 3.5GB of VRAM. Say, Shadow of Mordor at 1440p with the ultra texture pack and 4x/8x MSAA.
OrphanageExplosion - Sunday, January 25, 2015 - link
SoM doesn't have any anti-aliasing options other than super-sampling.
nathanddrews - Monday, January 26, 2015 - link
I think MFAA is available now thanks to the latest NVIDIA driver. Either way, FCAT should really be brought back to test these types of issues.
My 970 is used to play 1080p at high framerates, so I don't ever utilize the extra VRAM. I'll use medium textures before I sacrifice frames.
Pork@III - Sunday, January 25, 2015 - link
Pork@III says: "Nvidia is making false and misleading advertising."
Gothmoth - Sunday, January 25, 2015 - link
Gothmoth says: "Pork@III is an ATI fanboy."
boarsmite - Sunday, January 25, 2015 - link
"...the GTX 970 has something usual going on after 3.5GB VRAM allocation – but they have not come any closer in explaining just what is going on."I think you meant UNusual?
limitedaccess - Sunday, January 25, 2015 - link
Will you be following up with Nvidia and inquiring about the behavior of other Maxwell GPUs (980M, 970M, 965M, 750)? Perhaps also about how Kepler behaves?
htwingnut - Sunday, January 25, 2015 - link
It isn't just about overall performance, but about consistent performance. Does it cause stuttering or micro-pauses? I've seen SLI and CrossFire setups with the reported FPS showing 60+ FPS, on par with where it should be, but stuttering like it was running at 10-15 FPS. Unbelievable. nVidia needs to provide the technical details on this, because we're not all a bunch of layman sheep.
Gothmoth - Sunday, January 25, 2015 - link
No, it does not result in any stuttering... not that I noticed. And I have a 4K monitor, so I actually can make use of 4GB.
To me this sounds like a purely theoretical problem based on low-level benchmark results, blown out of proportion by ATI fanboys and people who have nothing better to do than to worry about a 1-3% performance difference.
Someone please post here how I can make my 970 stutter with >3.5GB compared to 3GB of VRAM used... I'm eager to test that.
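One way to poke at this yourself, in the spirit of the chunk-allocation benchmarks being passed around the forums, is to grab VRAM in fixed-size pieces and time copies into each one; a slow tail shows up as a bandwidth cliff on the last chunks. It won't reproduce in-game stutter directly (the driver decides placement for games), but it does show whether the top of the memory is slower. A minimal sketch using the CUDA runtime API follows; the 128MiB chunk size and repetition count are arbitrary assumptions:

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    const size_t chunk = 128ull << 20;   // 128 MiB per allocation (assumption)
    const int reps = 20;                 // copies per chunk for a stable timing

    void* src = nullptr;
    if (cudaMalloc(&src, chunk) != cudaSuccess) return 1;  // reusable source buffer

    std::vector<void*> chunks;
    for (int i = 0; i < 40; ++i) {       // keep grabbing chunks until VRAM runs out
        void* dst = nullptr;
        if (cudaMalloc(&dst, chunk) != cudaSuccess) break;
        chunks.push_back(dst);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        for (int r = 0; r < reps; ++r)
            cudaMemcpy(dst, src, chunk, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // A device-to-device copy both reads and writes, so count the bytes twice.
        double gbytes = 2.0 * reps * chunk / 1e9;
        std::printf("chunk %2d (%4zu MB in chunks): %6.1f GB/s\n",
                    i, (i + 1) * (chunk >> 20), gbytes / (ms / 1000.0));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }
    for (void* p : chunks) cudaFree(p);
    cudaFree(src);
    return 0;
}

Build with nvcc (or any C++ compiler linked against the CUDA runtime) and watch whether the reported GB/s falls off for the chunks allocated last.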
Black Obsidian - Sunday, January 25, 2015 - link
Perhaps you should check out the post by nuoh_my_god, which is at the bottom of page 3 of the comments as I type this. He reports stuttering and other issues, and explains exactly under what conditions it happens. Test away.
D. Lister - Monday, January 26, 2015 - link
@Ryan Smith
[On GTX 980, Shadows of Mordor drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference. On Battlefield 4, the drop is 47% on GTX 980 and...]
I'm sorry Ryan, but that example has done more to rile up the neophytes than it has to shed light on any potential hardware problems of the 970. Case in point, from: http://www.anandtech.com/bench/product/1068?vs=133...
Grid 2 performance, 290 vs 280:
[2560x1440 - Max Quality + 4x MSAA]: 290: 80.2 fps, 280: 58.9 fps
[1920x1080 - High Quality + 4x MSAA]: 290: 194.6 fps, 280: 159.9 fps
Fraction of its 1080p frame rate each card retains at 1440p: 290: 41.2% (a 58.8% drop), 280: 36.8% (a 63.2% drop)
Which is a difference of 4.4%.
Perhaps some frame rate/variance tests conducted while the VRAM usage hovers roughly between 3.25 and 3.75GB would be useful for the proverbial digging.
Galatian - Monday, January 26, 2015 - link
While I agree that it is probably a non-issue for most use cases, the fact remains that it is false advertising. I just went over the nvidia website. They state 4GB of RAM with a bandwidth of 224 GB/s for both the 980 and the 970. Now it is clear that for technical reasons the 970 can actually only achieve this bandwidth with 3.5GB of RAM. I'm not sure how anybody can claim this is NOT false or misleading advertising.
SloppySlim - Monday, January 26, 2015 - link
NVidia stepping on their d!@ks. It's a 4GB card with 3.5GB of fast addressing, and 0.5GB of indirect translation due to the missing SMs?
I'm guessing there's a corner-case hardware/software bug that can result in 'up to'™ a 70% frame rate drop.
Hopefully a firmware update can fix it, but the non-disclosure of the effect of the missing SMs leaves a sour taste.
jnieuwerth - Monday, January 26, 2015 - link
The problem is very easily observable in CoD: Advanced Warfare... play on max settings and see what happens when you use the grapple hook over long distances (i.e. a lot of texture loading within a very short time). I didn't know what it was and thought it was just poor optimization of the PC port, but now it seems my new GTX 970 was the cause. Very annoying indeed.
chizow - Monday, January 26, 2015 - link
Interesting discussion, thanks for the details Ryan. I've often wondered what cost these culled functional units exact on overall GPU performance. As enthusiasts we often try to pinpoint and isolate performance deltas with readily known variables like clockspeeds, SPs, ROPs, TMUs and memory bus, and we often see performance is not quite linear, or not what we'd expect based on those specs alone.
But what about the unknowns? I guess we have a better understanding now, although I am not sure it really matters. In the end, this may help explain some of the differences in performance between cut-down and fully functional SKUs based on the same ASIC, i.e. GTX 480 vs. 580, GTX 780 vs. 780 Ti. I guess we now understand that cutting SM modules and functional units carries additional costs, which is not surprising since the vias to the crossbar would also be severed, incurring performance penalties.
In the end it sounds pretty simple without overcomplicating things: the cheaper SKUs will incur performance penalties and will be slower; if you want full performance, pay for the fully performing ASIC.
Even happier I picked up the 980 over the 970 now, anyways!
eanazag - Monday, January 26, 2015 - link
It would be nice to have all the technical details of said GPUs, but that is not a realistic expectation. It would be bad for a company to detail all the special sauce, for competitive reasons. Now there are implications to this, but in general, if it does not cause software crashes and the performance difference is consistent across 970 GPUs and in relation to the 980, who can complain? The card performs on par with an R9 290X at way lower wattage. I stated a couple of "ifs", and it remains to be seen whether the card has an adverse impact beyond what Nvidia presented to the public.
Now we need to find those scenarios that cause hiccups on a 970 and not on a 980 or 960. If you're an informed customer, then you have the choice at that point to not buy the 970. The only concern left is that Nvidia wasn't upfront, and that will only matter if there is a true issue that software updates can't handle in the future.
I own two R9 290s. I think the Nvidia 900 series is great for the market. The 970 needs some special evaluation. At first glance this is a far better situation than AMD's first Phenom errata on the 9600, which I did buy and regretted. That processor never gave me any issues; I just felt like I got a lesser piece of hardware for the money, especially when AMD released the fixed processors.
Pork@III - Monday, January 26, 2015 - link
This article needs to be updated. Nvidia's issues were in fact more than one, more than two, maybe three... or four?
1. The VRAM does not run its correct full volume at full speed
2. The volume of the GPU's L2 cache is also castrated
3. The number of real working ROPs is cut down
4. ...
karakarga - Thursday, January 29, 2015 - link
What about the GTX 670, which is a crippled version of the GTX 680? It also comes in 4GB versions! And there is the GTX 760, again a crippled version of the GTX 770!
Are those also affected by the same bug?
Does anyone know?