Westmere-EX: Intel's Flagship Benchmarked
by Johan De Gelas on May 19, 2011 1:30 PM EST - Posted in
- IT Computing
- Intel
- Xeon
- Cloud Computing
- Westmere-EX
Intel Quanta QSSC-S4R Benchmark Configuration
CPU | 4x Xeon X7560 at 2.26GHz or 4x Xeon E7-4870 at 2.40GHz |
RAM | 16x4GB Samsung Registered DDR3-1333 at 1066MHz |
Motherboard | QCI QSSC-S4R 31S4RMB00B0 |
Chipset | Intel 7500 |
BIOS version | QSSC-S4R.QCI.01.00.S012,031420111618 |
PSU | 4x Delta DPS-850FB A S3F E62433-004 850W |
The Quanta QSSC-S4R is an updated version of the server we reviewed a year ago. The memory buffers consume less power and support low power (1.35V) DDR3 ECC DIMMs. The server can accept up to 64 32GB Load Reduced DIMMs (LR-DIMMs), so the new server platform can offer up to 2TB of RAM!
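The 2TB figure follows directly from the slot count and module size; a quick sanity check:

```python
# Sanity check of the capacity claim above: 64 DIMM slots x 32 GB per LR-DIMM.
slots, gb_per_dimm = 64, 32
total_gb = slots * gb_per_dimm
print(f"{total_gb} GB = {total_gb // 1024} TB")  # 2048 GB = 2 TB
```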
LR-DIMMs are the successors of FB-DIMMs. Fully Buffered DIMMs reduced the load on the memory channel courtesy of a serial interface between the memory controller and the Advanced Memory Buffer (AMB). However, the AMB's very high serial frequency generated significant heat, so the memory vendors abandoned FB-DIMMs after DDR2. Until recently, all large DDR3 DIMMs were registered DIMMs.
The new Load Reduced DIMM is a registered DIMM on steroids: it buffers the address and command signals just like a registered DIMM, but it buffers the data lines as well. LR-DIMMs therefore buffer every signal, greatly increasing the number of memory chips that can be used per channel without the power-hungry serial interface of the AMB. The downside is that buffering the data lines adds latency, especially on bus turnarounds.
The QSSC-S4R comes with a rich BIOS. Below you can see the typical BIOS configuration that we used. As you can see, we tested the Xeon with Turbo Boost and Hyper-Threading enabled.
Dell PowerEdge R815 Benchmark Configuration
CPU | 4x Opteron 6174 at 2.2GHz |
RAM | 16x4GB Samsung Registered DDR3-1333 at 1333MHz |
Motherboard | Dell Inc 06JC9T |
Chipset | AMD SR5650 |
BIOS version | v1.1.9 |
PSU | 2x Dell L1100A-S0 1100W |
The R815 is not a direct competitor to the quad Xeon platform; it is more limited in RAS features and expandability (512GB of RAM max). However, it is an attractive alternative for some of the more cost-sensitive quad Xeon buyers. Its very compact 2U design takes half the space of the quad Xeon servers, and a fully equipped quad Opteron server with 256GB of RAM can be purchased for less than $20,000. A similar quad Xeon system can set you back $30,000 or more.
Storage Setup
The storage setup is the same as what we described here.
62 Comments
extide - Monday, June 6, 2011 - link
When you spend $100,000+ on the S/W running on it, the HW costs don't matter. Recently I was in a board meeting for launching a new website that the company I work for is going to be running. These guys don't know/care about these detailed specs/etc. They simply said, "Cost doesn't matter, just get whatever is the fastest."
alpha754293 - Thursday, May 19, 2011 - link
Can you run the Fluent and LS-DYNA benchmarks on the system please? Thanks.
mosu - Thursday, May 19, 2011 - link
A good presentation with honest conclusions, I like this one.
ProDigit - Thursday, May 19, 2011 - link
What if you compared it to 2x Core i7 desktops running Linux and free server software? What keeps companies from doing that?
Orwell - Thursday, May 19, 2011 - link
Most probably the lack of support for more than about 48GiB of RAM, the lack of ECC in the case of Intel, and the lack of multi-socket support, just to name a few.
ganjha - Thursday, May 19, 2011 - link
There is always the option of clusters...
L. - Friday, May 20, 2011 - link
Err... like you're going to go cheap for the CPU and then put everything on InfiniBand...
DanNeely - Thursday, May 19, 2011 - link
Many of the uses for this class of server involve software that won't scale across multiple boxes due to network latency or monolithic design. The VM farm test was one example that would, but the lack of features like ECC support would preclude it from consideration by 99% of the buyers of godbox servers.
erple2 - Thursday, May 19, 2011 - link
I think more and more people are realizing that the issue is the lack of linear scaling more than anything like ECC. Buying a bulletproof server is turning out to cost way too much money (I mean ACTUALLY bulletproof, not "so far, this server has been rock solid for me").
I read an interesting article about "design for failure" (note: NOT the same thing as "design to fail") by Jeff Atwood the other day, and it really opened my eyes. Each extra 9 in 99.99% uptime costs exponentially more money. That raises the question: should you be investing more money in a server that shouldn't fail, or should you be investigating why your software is so fragile that it can't accommodate a hardware failure?
I dunno. Designing and developing software that can work around hardware failures is a very difficult thing to do.
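The "extra nine" economics in the comments above can be made concrete with a little probability: assuming independent failures, the downtimes of redundant nodes multiply together. A rough sketch (the per-node availability figures are illustrative assumptions, not measurements):

```python
# Rough availability arithmetic for the "design for failure" argument:
# with independent failures, the downtime of redundant nodes multiplies.
def combined_availability(node_availability: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent nodes is up."""
    return 1 - (1 - node_availability) ** replicas

# Two modest 99% nodes already reach "four nines" together:
print(round(combined_availability(0.99, 2), 6))  # 0.9999
print(round(combined_availability(0.99, 3), 6))  # 0.999999
```

This is why redundant cheap boxes can beat one expensive "shouldn't fail" box, provided the software tolerates a node dropping out.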
L. - Thursday, May 19, 2011 - link
Well, it's obvious. Who has a ton of servers? Google.
How do they manage availability?
So much redundancy that resilience is implicit and "reduced service" isn't even all that reduced.
And no, designing/developing software that works around hardware failures is not that hard at all; in fact it's quite common (load balancing, active/passive failover, virtualization helps too, etc.).
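As a concrete illustration of the load-balancing/failover pattern the comment mentions, here is a minimal client-side failover sketch. The hostnames and the `fetch` stub are hypothetical placeholders; in a real deployment this logic usually lives in a load balancer or client library rather than application code:

```python
# Minimal client-side failover sketch: try replicas in order and fail over
# when one is down. Hostnames and the fetch() stub are hypothetical.
REPLICAS = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]
DOWN = {"app-01.example.com"}  # simulate one failed node

def fetch(host: str) -> str:
    """Stand-in for a real network call to one replica."""
    if host in DOWN:
        raise ConnectionError(f"{host} is down")
    return f"response from {host}"

def fetch_with_failover(hosts: list[str]) -> str:
    """Return the first healthy replica's response; raise if all are down."""
    last_error = None
    for host in hosts:
        try:
            return fetch(host)
        except ConnectionError as err:
            last_error = err  # node is down; fail over to the next replica
    raise RuntimeError("all replicas down") from last_error

print(fetch_with_failover(REPLICAS))  # response from app-02.example.com
```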