GeForce 6200 TurboCache: PCI Express Made Useful
by Derek Wilson on December 15, 2004 9:00 AM EST, posted in GPUs
Half-Life 2 Performance
Here is the raw data we collected from our Half-Life 2 performance analysis. The data shows fairly consistent performance across all the levels we test.

Half-Life 2 1024x768 Performance (fps)

| | at_canals_08 | at_coast_05 | at_coast_12 | at_prison_05 | at_c17_12 |
|---|---|---|---|---|---|
| GeForce 6200 (128-bit) | 43.6 | 65.23 | 50.4 | 41.57 | 45.84 |
| GeForce 6200 (TC-64b) | 38.26 | 61.81 | 48.2 | 38.92 | 42.11 |
| Radeon X300 | 34.3 | 57.54 | 39.14 | 32.62 | 40.69 |
| GeForce 6200 (TC-32b) | 31.66 | 51.09 | 39.72 | 30.69 | 34.1 |
| Radeon X300 SE | 29.59 | 52.42 | 34.91 | 30.55 | 37.01 |
The X300 SE and the new 32-bit TurboCache card are very evenly matched here. The original 6200 leads every time, but the TC versions hold their own fairly well. The regular X300 isn't quite able to keep up with the 64-bit version of the 6200 TurboCache, especially in the more GPU-limited levels, which translates into a higher average for the 64-bit card at high resolutions.

Again, unlike Doom 3, the TurboCache parts are able to keep up with the 128-bit 6200 fairly well. This comes down to the amount of memory bandwidth required to process each pixel: Half-Life 2 is more evenly balanced between being GPU limited and memory bandwidth limited.

We can see from the resolution scaling chart that, at resolutions other than 1024x768, the competition between the X300 series and the 6200 TurboCache parts is a wash. It is impressive that all of these cards run HL2 at very playable framerates in all our tests.
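Averaging the per-level results bears this out. Here is a minimal Python sketch, purely illustrative, that takes the numbers straight from the table above and normalizes each card to the 128-bit 6200:

```python
# Average the per-level Half-Life 2 results from the table above and
# express each card relative to the 128-bit GeForce 6200 baseline.
results = {
    "GeForce 6200 (128-bit)": [43.60, 65.23, 50.40, 41.57, 45.84],
    "GeForce 6200 (TC-64b)":  [38.26, 61.81, 48.20, 38.92, 42.11],
    "Radeon X300":            [34.30, 57.54, 39.14, 32.62, 40.69],
    "GeForce 6200 (TC-32b)":  [31.66, 51.09, 39.72, 30.69, 34.10],
    "Radeon X300 SE":         [29.59, 52.42, 34.91, 30.55, 37.01],
}

baseline = sum(results["GeForce 6200 (128-bit)"]) / 5

for card, fps in results.items():
    avg = sum(fps) / len(fps)
    print(f"{card:24s} {avg:5.1f} fps average ({avg / baseline:6.1%} of 128-bit 6200)")
```

The output puts the TC-32b at about 76% of the 128-bit card and the X300 SE at about 75%, which is why we call those two evenly matched.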
43 Comments
sphinx - Wednesday, December 15, 2004
I think this is a good offering from NVIDIA. Passive cooling is a VERY good solution in my line of work: one less thing I have to worry about silencing. I use my PC to make money, not for playing games. Don't get me wrong, I like to play an occasional game from time to time, but I use my XBOX for gaming. When this card comes out, I'll get one.

DerekWilson - Wednesday, December 15, 2004
#9, It'll only use 128MB if a full 128MB is needed at the same time, which isn't usually the case, but we haven't done an in-depth study on this yet. Also, keep in mind that we still tested at the absolute highest quality settings with no AA/AF (except Doom 3, which used 8x AF as well). We were not seeing slideshow framerates. The FX5200 doesn't even support all the features of the FX5900, let alone the 6200 TC. Nor does the FX5200 perform as well at equivalent settings.

IGP is something I talked to NVIDIA about. This solution really could be an Intel Extreme Graphics killer (in the integrated market). In fact, with the developments in the marketplace, Intel may finally get up and start moving to create a graphics solution that actually works. There are other markets in which to look for TurboCache solutions to show up as well.
#11 ... The packaging issue is touchy. We'll see how vendors pull it off when it happens. The cards do run as if they had a full 128MB of RAM, so that's very important to get across. We do feel that talking about the physical layout of the card and the method of support is important as well.
#8, 1600x1200x32 only requires that about 7.3MB be stored locally. As was mentioned in the article, only the FRONT buffer needs to be local to the graphics card. This means that the depth buffer, back buffer, and other render surfaces can all live in system memory. I know it's kind of hard to believe, but this card can actually draw everything directly into system RAM from the pixel pipes and ROPs. When the buffers are swapped to display the back buffer, what's in system memory is copied into graphics memory. (The arithmetic is sketched below.)
It really is very cool for a low-performance budget part.
And we might see higher performance versions of TurboCache in the future, though NVIDIA isn't talking about them yet. It might be nice to have the option of an expanded framebuffer with more system RAM if the user wanted to enable that feature.
TurboCache is actually a performance enhancing feature. It's just that it's enhancing the performance of a card with either 16MB or 32MB of onboard RAM and either a 32-bit or 64-bit memory bus ... :-)
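A back-of-the-envelope sketch of the front-buffer arithmetic described above, assuming 32-bit color and 1MB = 2^20 bytes (the helper function is ours, purely for illustration):

```python
# Only the displayed (front) buffer has to live in local VRAM on a
# TurboCache card; the back buffer, depth buffer, and other render
# targets can be allocated out of system RAM over PCI Express.
BYTES_PER_PIXEL = 4  # 32-bit color

def front_buffer_mb(width: int, height: int) -> float:
    """Local-memory footprint of the displayed buffer, in MB."""
    return width * height * BYTES_PER_PIXEL / (1024 ** 2)

for w, h in [(1024, 768), (1280, 1024), (1600, 1200)]:
    print(f"{w}x{h}x32 front buffer: {front_buffer_mb(w, h):.2f} MB local")

# 1600x1200x32 works out to ~7.32 MB, which fits comfortably
# inside even the 16MB TurboCache card.
```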
DAPUNISHER - Wednesday, December 15, 2004
"NVIDIA has defined a strict set of packaging standards around which the GeForce 6200 with TurboCache supporting 128MB will be marketed. The boxes must have text, which indicates that a minimum of 512MB of system RAM is necessary for the full 128MB of graphics RAM support. It doesn't seem to require that a discloser of the actual amount of onboard RAM be displayed, which is not something that we support. It is understandable that board vendors are nervous about how this marketing will go over, no matter what wording or information is included on the package."More bullsh!t deceptive advertising to bilk uninformed consumers out of their money.
MAValpha - Wednesday, December 15, 2004
#7, I was thinking the same thing. This concept seems absolutely perfect for nForce5 IGP, should NVidia decide to go that route. And, once again, NVidia's approach to budget seems superior to ATI's, at least at first glance. A heavily castrated 6200TC running off SHARED RAM STILL manages to outperform a full X300? Come on, ATI, get with it!

I gotta wonder, though: this solution seems unbelievably dependent on "proper implementation of the PCIe architecture." This means that the card can never be coupled with HSI for older systems, and transitional boards will have trouble running the card (Gigabyte's PT880 with converted PEG, for example; the PT880 natively supports AGP). Does this mean that a budget card on a budget motherboard will suffer significantly?
mindless1 - Wednesday, December 15, 2004
IMO, even (as low as) $79 is too expensive. Taking 128MB of system memory away on a system budgeted to include one of these would typically leave 384MB, robbing the system of memory to pay nVidia et al. for a part without (much) memory of its own.

I tend to disagree with the slant of the article, too: it's not necessarily a good thing to push modern gaming eye candy at the expense of performance. What looks good isn't a crisp, anti-aliased slideshow, but a playable game. Even someone just beginning at gaming can discern the lag when fragging it out.

We're only looking at current games now; the bar for performance will be raised, but these cards are memory bandwidth limited due to their architecture. They might look like a good alternative for someone who went and paid $90 for an FX5200 at Best Buy last year, but in a budget system it's going to be tough to justify ~$80-100 when a few bucks more won't rob one of system memory or as much performance.

Even so, historically we've seen initial price points fall, and it's better to see modern feature support than a rehash of the FX5xxx.
PrinceGaz - Wednesday, December 15, 2004
nVidia's marketing department must be really pleased with coming up with the name "TurboCache". It makes it sound like it's faster than a normal card without TurboCache, whereas in reality the opposite is true. Uninformed customers would probably choose a TurboCache version over a normal version, even if they were priced the same!

----

Derek, does the 16MB 6200 have limitations on what resolutions can be used in games? I know you wouldn't want to run it at 1600x1200x32 in Far Cry, for instance, but in older games like Quake 3 it should be fast enough.
The thing is, the frame buffer at 1600x1200x32 requires 7.3MB, so with double-buffering you're using up a total of 14.65MB, leaving just 1.35MB for the Z-buffer and anything else it needs to keep in local memory, which might not be enough. I'm assuming that the frame the card is currently displaying must be held in local memory, as well as the frame being worked on.
The situation is even worse with anti-aliasing, as the frame buffer being worked on is multiplied in size by the level of AA. At 1280x960x32 with 4xAA, that single frame buffer alone is 18.75MB, meaning it won't fit in the 16MB 6200. It might not even manage 1024x768 with 4xAA, as the two frame buffers would total 15MB (12MB for the one being worked on, 3MB for the one being displayed).
It would be interesting to know what the resolution limits for the 16MB (and 32MB) cards are, with and without anti-aliasing. (That arithmetic is sketched below.)
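A minimal Python sketch of that framebuffer arithmetic, under the same assumptions the comment makes (32-bit color; both the buffer being rendered and the buffer being displayed held in local memory; the AA factor multiplies only the rendered buffer). The helper function is hypothetical, purely for illustration:

```python
# With multisample AA, the buffer being rendered grows by the AA factor,
# while the displayed front buffer stays at the native size. Both are
# assumed to need local memory here, per the comment above.
BYTES_PER_PIXEL = 4  # 32-bit color

def local_footprint_mb(width: int, height: int, aa_samples: int = 1) -> float:
    back = width * height * BYTES_PER_PIXEL * aa_samples  # buffer being rendered
    front = width * height * BYTES_PER_PIXEL              # buffer being displayed
    return (back + front) / (1024 ** 2)

for w, h, aa in [(1600, 1200, 1), (1280, 960, 4), (1024, 768, 4)]:
    print(f"{w}x{h} at {aa}xAA needs {local_footprint_mb(w, h, aa):.2f} MB local")

# 1600x1200, no AA -> ~14.65 MB (tight on a 16MB card)
# 1280x960, 4xAA   -> ~23.44 MB (won't fit in 16MB)
# 1024x768, 4xAA   -> ~15.00 MB (right at the edge)
```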
Spacecomber - Wednesday, December 15, 2004
I may be way off base with this question, but would this sort of GPU lend itself well to some sort of integrated, onboard graphics solution? Even if it isn't integrated directly into the main chipset (or chip, for Nvidia), could it simply be soldered to the motherboard somewhere?

Somehow, this seems to make more sense to me as a use for this technology than a dedicated video card, especially if the price point is not that much less than a regular 6200's.
bamacre - Wednesday, December 15, 2004
Great review.

Wow, almost 50 fps in HL2 at 10x7. That is pretty good for a budget card.
I'd like to see MS, ATI, and Nvidia get more people into PC gaming; that would make for better and cheaper games for those of us who already love it.
DerekWilson - Wednesday, December 15, 2004
Actually, nForce4 + AMD systems are looking better than non-925XE Intel systems for TurboCache parts. We haven't looked at the 925XE yet, though ... that could be interesting. But overhead hurts utilization a lot on a serial bus, and having more than 6.4GB/s from memory might not be that useful (a rough sketch of the numbers follows below).

The efficiency of getting bandwidth across the PCI Express bus will still be the main bottleneck in these systems, though. Chipsets need to implement PCI Express properly and well. That's really the important part. The 915 chipset is an example of what not to do.
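To put rough numbers on that, here is a sketch comparing theoretical first-generation PCI Express x16 bandwidth against dual-channel DDR400 system memory. The efficiency figures are placeholder assumptions for illustration, not measured values:

```python
# First-gen PCIe x16 moves at most 250 MB/s per lane in each direction;
# protocol overhead means real transfers see only a fraction of that.
PCIE_X16_GBPS = 16 * 0.25   # theoretical peak, one direction (4.0 GB/s)
DDR400_DUAL_GBPS = 6.4      # dual-channel DDR400 system memory

for efficiency in (1.0, 0.8, 0.6):  # assumed bus efficiencies
    usable = PCIE_X16_GBPS * efficiency
    print(f"{efficiency:4.0%} efficiency -> {usable:.1f} GB/s usable, "
          f"{usable / DDR400_DUAL_GBPS:.0%} of what DDR400 can supply")
```

Even at a hypothetical 100% efficiency, the bus delivers only about 4 of the 6.4GB/s the memory can supply, which is why chipset PCI Express implementation matters more here than extra memory bandwidth.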
jenand - Wednesday, December 15, 2004
TurboCache and HyperMemory cards should do better on Intel-based systems, as they do not need to go via HyperTransport to get to the memory. So I agree with #3: show us some i925X(E) tests. I'm not expecting higher scores on the Intel systems, however, just a larger gain from this type of technology.