Supermicro's Twin: Two nodes and 16 cores in 1U
by Johan De Gelas on May 28, 2007 12:01 AM EST - Posted in IT Computing
Words of thanks
A lot of people gave us assistance with this project, and we would like to give them our sincere thanks.
Angela Rosario, Supermicro US
Michael Kalodrich, Supermicro US
(www.supermicro.com)
Jerry L. Baugh, Intel US
William H. Lea, Intel US
Matty Bakkeren, Intel Netherlands
(www.intel.com)
Marcel Eeckhout, MCS
Johan Van Dijck, MCS
(www.mcs.be)
Hopefully, Liz will still smile when we start our Oracle clustering research...
APUS Research team:
Elisabeth van Dijck, Oracle installation and fine tuning, MCS benchmarking
Tom Glorieux, Server & network administration
APUS software team:
Brecht Kets, Lead developer
Ben Motmans, Core developer
Leandro Cavalieri, Unix & power monitoring
Dieter Vandroemme, APUS benchmarking developer
The APUS software team: Dieter, Leandro, and Brecht
Benchmark configuration
Unfortunately, it is not that easy to get four CPUs of the same kind in the lab. We could only populate all four sockets with the Xeon E5320 (quad core 1.86 GHz). In the following pages you can read a more detailed description of our benchmarking methods.
Hardware configurations
Here is the list of the different configurations. The quad Opteron HP DL585 is only used as a reference point; this article is in no way meant to be an up-to-date AMD versus Intel comparison. Our focus is on the possibilities of the Twin 1U.
Xeon Server 1: Supermicro Superserver 6015T-INF
1 node: Dual Xeon DC 5160 3 GHz or Dual Xeon QC 2.33 GHz
2 nodes: 2 x 2 Xeon QC E5320
Intel 5000P chipset
Supermicro's X7DBT
2 x 4 GB (4x1024 MB per node) Micron FB-DIMM Registered DDR-II 533 MHz CAS 4, ECC enabled
NIC: Dual Port Intel 82563EB Gigabit Platform LAN Connect
2 Seagate NL35 Nearline 250GB SATA-II 16MB 7200RPM
Xeon Server 2: Dual Xeon DP Supermicro Superserver 6015b-8+
Dual Xeon DP 5160 3 GHz
Intel 5000P chipset
Supermicro's X7DBR-8+
8 GB (8x1024 MB) Micron FB-DIMM Registered DDR-II 533 MHz CAS 4, ECC enabled
NIC: Dual Intel PRO/1000 Server NIC
2 Fujitsu MAX3073NC 73 GB - 15000 rpm - SCSI 320 MB/s
Opteron Server 1: Quad Opteron HP DL585
Quad Opteron 880 2.4 GHz
AMD 8000 chipset
16 GB (16x1024 MB) Crucial DDR333 CAS 2.5, ECC enabled
NIC: NC7782 Dual PCI-X Gigabit
2 Fujitsu MAX3073NC 73 GB - 15000 rpm - SCSI 320 MB/s
Client Configuration: Dual Opteron 850
MSI K8T Master1-FAR
4x512 MB Infineon PC2700 Registered, ECC
NIC: Broadcom 5705
Software
MS Windows Server 2003 Enterprise Edition, Service Pack 2
MCS eFMS WebPortal v9.2.43
3DS Max 9.0
28 Comments
JohanAnandtech - Monday, May 28, 2007 - link
Those DIMM slots are empty :-)
yacoub - Monday, May 28, 2007 - link
ohhh hahah thought they were filled with black DIMMs :D
yacoub - Monday, May 28, 2007 - link
Also on page 8: You should remove that first comma. It was throwing me off because the way it reads it sounds like the 2U servers save about 130W, but then you get to the end of the sentence and realize you mean "in comparison with 2U servers, we save about 130W or about 30% thanks to Twin 1U". You could also say "Compared with 2U servers, we save..." to make the sentence even more clear.
Thanks for an awesome article, btw. It's nice to see these server articles from time to time, especially when they cover a product that appears to offer a solid TCO and a strong showing against the competition from big names like Dell.
JohanAnandtech - Monday, May 28, 2007 - link
Fixed! Good point
gouyou - Monday, May 28, 2007 - link
The part about InfiniBand's performance scaling much better as you increase the number of cores is really misleading. The graph is mixing cores and nodes, so you cannot tell anything. We are in an era where a server has 8 cores: the scaling is completely different, as it will depend less on the network. BTW, is the graph made for single core servers? Dual cores?
MrSpadge - Monday, May 28, 2007 - link
Gouyou, there's a link called "this article" in the part on InfiniBand which answers your question. In the original article you can read that they used dual 3 GHz Woodcrests. What's interesting is that the difference between InfiniBand and GigE is actually more pronounced for the dual core Woodcrests compared with single core 3.4 GHz P4s (at 16 nodes). The explanation given is that the faster dual core CPUs need more communication to sustain performance. So it seems like their algorithm uses no locality optimizations to exploit the much faster communication within a node.
@BitJunkie: I second your comment, very nice article!
MrS
BitJunkie - Monday, May 28, 2007 - link
Nice article. I'm most impressed by the breadth and the detail you drilled into, and also the clarity with which you presented your thinking and results. It's always good to be stretched, and this is a great example of how to approach things in a structured, logical way. Don't mind the "it's an enthusiast site" comments. Some people will be stepping outside their comfort zone with this and won't thank you for it ;)
JohanAnandtech - Monday, May 28, 2007 - link
Thanks, very encouraging comment. And I guess it doesn't hurt that the "enthusiast" is reminded that PCs can also be fascinating in another role than "hardcore gaming machine" :-). Many of my students need the same reminder: being an ITer is more than booting Windows and your favorite game. My 2-year old daughter can do that ;-)
yyrkoon - Monday, May 28, 2007 - link
It is however nice to learn about InfiniBand. This is a technology I have been interested in for a while now, and I was under the impression it was not going to be implemented until PCIe v2.0 (maybe I missed something here). I would still rather see this technology in the desktop class PC, and if this is yet another enterprise driven technology, then people such as myself, who were hoping to use it for decent home networking (remote storage), are once again left out in the cold.
yyrkoon - Monday, May 28, 2007 - link
And I am sure every gamer out there knows what iSCSI *is* . . .
Even in 'IT' a 16 core 1U rack is a specialty system, and while they may be semi common in the load balancing/failover scenario (or maybe even used extensively in parallel processing, yes, and even more possible uses . . .), they are still not all that common compared to the 'standard' server. Recently, a person that I know deployed 40k desktops / 30k servers for a large company, and wouldn't you know it, not one had more than 4 cores . . . and I have personally contracted work from TV/radio stations (and even the odd small ISP), and outside of the odd 'Toaster', most machines in these places barely use 1 core.
I too find technologies such as 802.3ad link aggregation, iSCSI, AoE, etc. interesting, and I sometimes like playing around with things like openMosix or the latest/hottest Linux distro, but at the end of the day, other than experimentation, these things typically do not entertain me. Most of the above, and many other technologies for me, are just a means to an end, not entertainment.
Maybe it is enjoyable staring at a machine of this type, not being able to use it to its full potential outside of the workplace? Personally I would not know, and honestly I really do not care, but if this is the case, perhaps you need to take notice of your 2 year old daughter, and relax once in a while.
The point here? The point being: perhaps *this* 'gamer' you speak of knows a good bit more about 'IT' than you give him credit for, and maybe even makes a fair amount of cash at the end of the day while doing so. Or maybe I am a *real* hardware enthusiast, who would rather be reading about technology instead of reading yet another 'product review'. Especially since any person worth their paygrade in IT should already know how this system (or anything like it) is going to perform beforehand.