The Best Server CPUs part 2: the Intel "Nehalem" Xeon X5570
by Johan De Gelas on March 30, 2009 3:00 PM EST - Posted in IT Computing
Benchmark Configuration
None of our benchmarks required more than 16GB RAM.
Each server had an Adaptec 5805 RAID controller connected to a Promise J300s DAS. Database files were placed on a six-drive RAID 0 set of Intel X25-E SLC 32GB SSDs, and log files on a four-drive RAID 0 set of 15,000RPM Seagate Cheetah 300GB hard disks.
We used AMD Opteron 8356 and 8384 CPUs in dual CPU configurations. Performance-wise they are identical to the Opteron 2356 and 2384, so to avoid confusion we list the 83xx Opterons as Opteron 2356 and Opteron 2384.
Xeon Server 1: ASUS RS700-E6/RS4 barebone
CPU: Dual Xeon "Gainestown" X5570 2.93GHz
MB: ASUS Z8PS-D12-1U
RAM: 6x4GB (24GB) ECC Registered DDR3-1333
NIC: Intel 82574L PCI-E Gbit LAN
Xeon Server 2: Intel "Stoakley" platform server
CPU: Dual Xeon E5450 at 3GHz
MB: Supermicro X7DWE+/X7DWN+
RAM: 16GB (8x2GB) Crucial Registered FB-DIMM DDR2-667 CL5 ECC
NIC: Dual Intel PRO/1000 Server NIC
Xeon Server 3: Intel "Bensley" platform server
CPU: Dual Xeon X5365 at 3GHz, Dual Xeon L5320 at 1.86 GHz and Dual Xeon 5080 at 3.73 GHz
MB: Supermicro X7DBE+
RAM: 16GB (8x2GB) Crucial Registered FB-DIMM DDR2-667 CL5 ECC
NIC: Dual Intel PRO/1000 Server NIC
Opteron Server: Supermicro SC828TQ-R1200LPB 2U Chassis
CPU: Dual AMD Opteron 8384 at 2.7GHz or Dual AMD Opteron 8356 at 2.3GHz
MB: Supermicro H8QMi-2+
RAM: 24GB (12x2GB) DDR2-800
NIC: Dual Intel PRO/1000 Server NIC
PSU: Supermicro 1200W w/PFC (Model PWS-1K22-1R)
vApus/DVD Store/Oracle Calling Circle Client Configuration
CPU: Intel Core 2 Quad Q6600 2.4GHz
MB: Foxconn P35AX-S
RAM: 4GB (2x2GB) Kingston DDR2-667
NIC: Intel Pro/1000
The Platform: ASUS RS700-E6/RS4
We were quite surprised to see that Intel chose the ASUS RS700-E6/RS4 barebone, but it became clear that ASUS is really gearing up to compete with companies like Supermicro and Tyan. This ASUS 1U barebone is built around the new Tylersburg-36D (Intel 5520) chipset and the ICH10R southbridge.
The ASUS RS700-E6 is a completely cable-less design, which is quite rare. According to ASUS, the gold finger mating mechanism delivers more reliable signal quality. That is hard to verify, but a loose connection is certainly much less likely than with cables. We have only had the server in the lab for a few weeks, so it is too early to talk about reliability, but we can say that the build quality of the server is excellent. The 6-phase power regulation that feeds each CPU uses very high quality solid capacitors that are guaranteed to survive 5 years of operation at 86°C (typical capacitors are only rated for 2 years). The same is true for the 3-phase memory power regulation. A special energy processing unit (EPU) steers the VRMs to obtain higher power efficiency.
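To put that capacitor rating in perspective, here is a rough back-of-the-envelope sketch. It assumes the common rule of thumb that capacitor service life roughly doubles for every 10°C the operating temperature drops below the rated temperature; the 5-year/86°C figure is ASUS's, while the lower operating temperatures are simply hypothetical examples.

```python
# Rough capacitor life estimate using the common "life doubles for every
# 10 degrees C cooler" rule of thumb. The 5-year/86C rating is ASUS's figure;
# the lower operating temperatures below are hypothetical examples.

RATED_LIFE_YEARS = 5.0   # guaranteed life at the rated temperature
RATED_TEMP_C = 86.0      # rated operating temperature in degrees Celsius

def estimated_life_years(operating_temp_c: float) -> float:
    """Estimated service life in years at a given operating temperature."""
    return RATED_LIFE_YEARS * 2 ** ((RATED_TEMP_C - operating_temp_c) / 10.0)

for temp in (86, 76, 66, 56):
    print(f"{temp} C: ~{estimated_life_years(temp):.0f} years")
# 86 C: ~5 years, 76 C: ~10 years, 66 C: ~20 years, 56 C: ~40 years
```

In other words, at the more realistic temperatures inside a well-cooled 1U chassis, a 5-year rating at 86°C leaves a comfortable margin.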
A rather unusual feature is that this 1U server also offers two full-height PCI-E expansion slots and one half-height slot (close to the PSU). The two full-height slots are PCI-E x16 and the low-profile slot is PCI-E x8. In addition, you can add a proprietary PIKE card, which adds a SAS controller: either an LSI 1064E software RAID solution (RAID 0 or 1) or a real hardware RAID solution (the LSI 1078) with support for RAID 0, 1, 10, 5, and even 6.
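The practical difference between those RAID levels is mostly in how much raw capacity is left after redundancy. The small Python sketch below uses the standard textbook formulas; the eight-drive, 300GB-per-disk example is purely illustrative and not a configuration we tested.

```python
# Usable capacity for the RAID levels the PIKE/LSI 1078 card supports,
# using the standard formulas (controller overhead and formatting ignored).

def usable_capacity_gb(level: str, drives: int, drive_gb: float) -> float:
    if level == "0":              # striping, no redundancy
        return drives * drive_gb
    if level in ("1", "10"):      # mirroring / striped mirrors: half the raw space
        return drives * drive_gb / 2
    if level == "5":              # one drive's worth of capacity lost to parity
        return (drives - 1) * drive_gb
    if level == "6":              # two drives' worth lost to dual parity
        return (drives - 2) * drive_gb
    raise ValueError(f"unsupported RAID level: {level}")

# Hypothetical example: eight 300GB SAS disks.
for level in ("0", "1", "10", "5", "6"):
    print(f"RAID {level:>2}: {usable_capacity_gb(level, 8, 300):.0f} GB usable")
```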
The expandability is thus excellent, especially if you consider that the ASUS RS700 has room for two (1+1) redundant PSUs. We still have a few items on our wish list, though. We would like a less exotic video card with slightly more video RAM; ASUS uses the AST2050 with only 8MB. While many people will never use the onboard video, some of us do need to use it from time to time. The card comes with decent Windows and Linux drivers, but our distribution (SUSE SLES 10 SP2) would only work well at 1024x768 and refused to work in text mode until we installed the video driver, so it took a bit of tinkering before we could even get the right driver installed.
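As a rough illustration of why a bit more video RAM would be welcome, the sketch below estimates the framebuffer size for a few common resolutions and color depths. It ignores whatever memory the AST2050 reserves for other purposes, so treat the numbers as approximations rather than measurements.

```python
# Rough size of a single uncompressed framebuffer at a given resolution and
# color depth. It ignores any memory the AST2050 reserves for itself, so it
# only approximates what fits in the 8MB of onboard video RAM.

def framebuffer_mb(width: int, height: int, bits_per_pixel: int) -> float:
    return width * height * (bits_per_pixel / 8) / (1024 ** 2)

for width, height in ((1024, 768), (1280, 1024), (1600, 1200)):
    for bpp in (16, 32):
        print(f"{width}x{height} @ {bpp}bpp: {framebuffer_mb(width, height, bpp):.1f} MB")
# 1024x768 at 32bpp needs about 3MB; 1600x1200 at 32bpp needs about 7.3MB.
```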
ESX 3.5 Update 3 does not recognize the new Intel SATA controller well, but luckily the ASUS server can be equipped with an ESX 3i USB stick; ASUS provides a special USB port inside the server to attach the stick. We are currently circumventing the SATA/ESX issue by installing via FTP.
Overall, this is one of the finest 1U barebones that we have seen to date. We are pleased with the expandability, the excellent build quality, and the 3-year warranty that ASUS provides.
44 Comments
gwolfman - Tuesday, March 31, 2009 - link
Why was this article pulled yesterday after it first posted?
JohanAnandtech - Tuesday, March 31, 2009 - link
Because the NDA date was noon in the Pacific time zone and not CET. We were slightly too early...
yasbane - Tuesday, March 31, 2009 - link
Hi Johan,
Any chance of some more comprehensive Linux benchmarks? Haven't seen any on IT Anandtech for a while.
cheers
JohanAnandtech - Tuesday, March 31, 2009 - link
Yes, we are working on that. Our first Oracle testing is finished on the AMD platform, but we are still working on the rest.
Mind you, all our articles so far have included Linux benchmarking: all the MySQL testing for example, plus Stream, SPECjbb, and Linpack.
Exar3342 - Monday, March 30, 2009 - link
Thanks for the extremely informative and interesting review, Johan. I am definitely looking forward to more server reviews; are the 4-way CPUs out later this year? That will be interesting as well.
Exar3342 - Monday, March 30, 2009 - link
Forgot to mention that I was surprised HT had the impact that it did in some of the benches. It made some huge differences in certain applications, and slightly hindered performance in others. Overall, I can see why Intel wanted to bring back SMT for the Nehalem architecture.
duploxxx - Monday, March 30, 2009 - link
Awesome performance, but I would like to see how the Intel 5510/20/30 fare against the AMD 2378/80/82; after all, that is the same price range.
It was the same with the Woodcrest and Conroe launch: everybody saw the huge performance lead but then only bought the very slow versions... so the question remains what is the best value in performance/price/power.
Istanbul had better come fast for AMD; the way it looks now, with decent 45nm power consumption, it should be able to bring some battle to the high-end 55xx versions.
eryco - Tuesday, April 14, 2009 - link
Very informative article... I would also be interested in seeing how any of the midrange 5520/30 Xeons compare to the 2382/84 Opterons. Especially now that some vendors are giving discounts on the AMD-based servers, the premium for a server with X5550/60/70s is even bigger. It would be interesting to see how the performance scales for the Nehalem Xeons, and how it compares to Shanghai Opterons in the same price range. We're looking to acquire some new servers and we can afford 2P systems with 2384s, but on the Intel side we can only go as far as E5530s. Unfortunately there's no performance data for midrange Xeons anywhere online with which we can make a comparison.
haplo602 - Monday, March 30, 2009 - link
I only skimmed the graphs, but how about some consistency? Some of the graphs feature only dual-core Opterons, some have a mix of dual and quad core... the pricing chart also features only dual-core Opterons... Looking just at the graphs, I cannot draw any conclusions...
TA152H - Monday, March 30, 2009 - link
Part of the problem with the 54xx CPUs is not the CPUs themselves, but the FB-DIMMs. Part of the big improvement for Nehalem in the server world comes from the fact that Intel sodomized their 54xx platform with FB-DIMMs, for reasons that escape most people. But that's really not mentioned except with regard to power. If the IMC (which is not an AMD innovation by the way; it had been done many times before they did it, even on x86 by NexGen, a company they later bought) is so important, then surely the FB-DIMMs are too. They are both related to the same issue: memory latency.
It's not really important though, since that's what you'd get if you bought the Intel 54xx; it's more of an academic complaint. But I'd like to see Nehalem tested with dual-channel memory, which is a real issue. The reason being, it has lower latency while only using two channels, and for some benchmarks (certainly not all or even the majority) you might see better performance by using two channels (or maybe that never happens). If you're running a specific application that runs better with dual channel, it would be good to know.
Overall, though, a very good article. The first thing I mentioned is a nitpick; the second may not even matter if three-channel performance is always better.