SSD versus Enterprise SAS and SATA disks
by Johan De Gelas on March 20, 2009 2:00 AM EST
Configuration and Benchmarking Setup
First, a word of thanks. The help of several people was crucial in making this review happen:
- My colleague Tijl Deneut of the Sizing Server Lab, who spent countless hours together with me in our labs. Sizing Servers is an academic lab of Howest (University Ghent, Belgium).
- Roel De Frene of "Tripple S" (Server Storage Solutions), which lent us a lot of interesting hardware: the Supermicro SC846TQ-R900B, 16 WD enterprise drives, an Areca 1680 controller and more. S3S is a European company that focuses on servers and storage.
- Deborah Paquin of Strategic Communications, Inc. and Nick Knupffer of Intel US.
As mentioned, S3S sent us the Supermicro SC846TQ-R900B, which you can turn into a massive storage server. The server features a 900W (1+1) power supply to power a dual Xeon ("Harpertown") motherboard and up to 24 3.5" hot-swappable drive bays.
We used two different controllers to avoid letting the controller color this review too much. When you are running up to eight SLC SSDs in RAID 0, each able to push up to 250 MB/s through the RAID card, the HBA can clearly make a difference (a quick sanity check follows the spec list below). Our two controllers are:
Adaptec 5805 SATA-II/SAS HBA
- Firmware 5.2-0 (16501), 2009-02-18
- 1200MHz IOP348 (dual-core)
- 512MB 533MHz ECC cache (write-back)

Areca 1680 SATA/SAS HBA
- Firmware v1.46, 2009-01-06
- 1200MHz IOP348 (dual-core)
- 512MB 533MHz ECC cache (write-back)
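To put some numbers on why the HBA matters here, a quick back-of-the-envelope check (a minimal sketch: the ~250 MB/s per-drive figure comes from the article, while the PCIe host-link numbers are our assumptions for a PCIe x8 card of this generation):

```python
# Rough aggregate-bandwidth check: can one HBA keep up with 8 SLC SSDs?
ssds = 8
per_ssd_mb_s = 250                 # sequential throughput per X25-E (article figure)
aggregate = ssds * per_ssd_mb_s    # what the drives can deliver together

# Assumed host interface: PCIe 1.1 x8, ~250 MB/s raw per lane,
# with roughly 80% of that usable after protocol overhead.
pcie_raw = 8 * 250
pcie_effective = int(pcie_raw * 0.8)

print(f"drives can supply ~{aggregate} MB/s")        # ~2000 MB/s
print(f"host link delivers ~{pcie_effective} MB/s")  # ~1600 MB/s effective
# The drives alone can saturate the card's host link, so firmware and
# I/O processor efficiency, not the disks, set the ceiling.
```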
Both controllers use the same I/O CPU and roughly the same cache configuration, but as you will see further on, the firmware still makes a difference. Below you can see the inside of our storage server, featuring:
- 1x quad-core Xeon ("Harpertown"): E5420 2.5GHz and X5470 3.3GHz
- 4x2GB 667MHz FB-DIMM (the photo shows it equipped with 8x2GB)
- Supermicro X7DBN mainboard (Intel 5000P "Blackford" Chipset)
- Windows 2003 SP2
The small 2.5" SLC drives are plugged in the large 3.5" cages:
We used the following disks:
- Intel SSD X25-E SLC SSDSA2SH032G1GN 32GB
- WDC WD1000FYPS-01ZKB0 1TB (SATA)
- Seagate Cheetah 15000RPM 300GB ST3300655SS (SAS)
Next is the software setup.
67 Comments
marraco - Wednesday, March 25, 2009 - link
The comparison is not fair, but it can be made fairer: if the RAID of SATA/SAS disks is restricted to the same storage capacity as the SSDs, limiting the partition to the fastest outer tracks/cylinders, latency is significantly reduced and average read/write speed is significantly increased. So:
PLEASE, PLEASE, PLEASE
Repeat the benchmarks, but with short stroking for the magnetic disks.
JohanAnandtech - Friday, March 27, 2009 - link
May I ask what the difference is with the fact that we created a relatively small partition across our RAID 5 raid set? Also, you can imagine that our 23 GB database was at the outer tracks of the disks. I have to verify, but that seems logical. This kind of testing should give the same effect as short stroking. Personally, I think short stroking cannot be good for your actuator, while a small partition should be no problem.
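To make that argument concrete, a back-of-the-envelope calculation (a minimal sketch: the 23 GB database and the 1TB WD drives are from the article; the eight-spindle raid set is our assumption):

```python
# What fraction of each platter does a small partition actually touch?
drives = 8            # assumed number of spindles in the RAID 5 set
disk_gb = 1000        # WD1000FYPS capacity
partition_gb = 23     # test database size from the article

per_disk = partition_gb / drives   # GB of data landing on each drive
fraction = per_disk / disk_gb      # fraction of each disk used

print(f"~{per_disk:.1f} GB per drive = {fraction:.2%} of each disk")
# ~2.9 GB per drive, about 0.3% of the platters: the data sits on the
# outermost tracks only, which is effectively what short stroking does.
```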
marraco - Friday, March 27, 2009 - link
See this link: http://www.tomshardware.com/reviews/short-stroking...
Clearly, your results are orders of magnitude below those shown in that benchmark.
As I understand it, short stroking increases actuator health, because it reduces the physical acceleration of the actuator.
All that is necessary is to use a small partition on the fastest outer tracks.
You utilized a RAID 0 of 16 disks, with less than 1000 MB/s.
On Tom's Hardware, a RAID of only 4 disks achieved an average (not maximum) of 1400 to 1600 MB/s. (Of course, the tests are not the same; for that reason, I ask for new tests.)
About the RAID 5: I would love to see RAID 0.
I am interested in comparing a fast SSD like the Intels (or OCZ Vertex/Summit) with what can be achieved at the same cost with magnetic media, if the partition size is restricted to the same total capacity as the SSD.
Anyway, thanks for the article. Good work.
So good, I want to see more :)
marraco - Sunday, April 5, 2009 - link
Please, tell me you are preparing such an article :)

JohanAnandtech - Tuesday, April 7, 2009 - link
We are investigating the issue. I'd like to have some second opinions before I start heavy benchmarking based on a THG article. They tend to be sensational...

araczynski - Wednesday, March 25, 2009 - link
wow, color me impressed. all the more reason to upgrade everything to gigabit and fiber.

BailoutBenny - Tuesday, March 24, 2009 - link
Can we get any updates on the future of chalcogenide glass (phase change) based drive technologies? IBM's Millipede and other MEMS probe storage devices? Any word about Intel and STMicroelectronics' shipments of PRAM samples to customers that happened last year? What do the rumor mills say? Are these technologies proving viable? It is difficult to formulate a coherent picture for these technologies without being an industry insider.

Black Jacque - Tuesday, March 24, 2009 - link
"RAID 5 in Action ... However, it is rarely if ever used for any serious application."
You are obviously not a SAN admin, nor do you know much about enterprise-level storage.
RAID 5 is the mainstay of block-level storage systems by companies like EMC.
In addition, the article mentions STEC EFDs used by EMC. On the EMC CLARiiON line, those EFDs are provisioned in RAID 5 groups.
spikespiegal - Wednesday, March 25, 2009 - link
"RAID 5 is the mainstay of block-level storage systems by companies like EMC."

Which thus explains why in this day and age I see so many SANs blowing entire volumes and costing days of restoration when the room temp gets a few degrees above ambient.
Corrupted RAID 5 arrays have cost me more lost enterprise data than all the non-RAID client side disks I've ever replaced; iSeries, all brands of x386, etc. EMC has a great script to account for this in which they always blame the drives first, then only when cornered by an enraged CIO will they admit it's their controllers. Been there...done that...for over a decade in many different industries.
If you haven't been burned by RAID 5, or dare claim a drive controller in RAID 5 mode has a better MTBF than the drives it's hosting, then it's time to quit your day job at the call center in India. RAID 5 saves you the cost of one drive in every four, which was logical in 1998 but not today. At least span across multiple redundant controllers in RAID 10 or something...
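For reference, the capacity arithmetic behind the "one drive in every four" remark (a minimal sketch; the drive counts are illustrative):

```python
# Usable capacity: RAID 5 keeps (n-1)/n of raw space, RAID 10 keeps 1/2.
def usable_fraction(level: str, n: int) -> float:
    if level == "raid5":
        return (n - 1) / n   # one drive's worth of parity across the set
    if level == "raid10":
        return 0.5           # every drive is mirrored
    raise ValueError(level)

for n in (4, 8, 16):
    print(f"{n} drives: RAID 5 {usable_fraction('raid5', n):.0%} usable "
          f"vs RAID 10 {usable_fraction('raid10', n):.0%}")
# With 4 drives, RAID 5 yields 75% usable space vs 50% for RAID 10:
# the cost saving being weighed against rebuild and corruption risk.
```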
JohanAnandtech - Tuesday, March 24, 2009 - link
I fear you misread that sentence: "RAID 0 is a good way to see how adding more disks scales up your writing and reading performance. However, it is rarely if ever used for any serious application."
So we are talking about RAID 0, not RAID 5.
http://it.anandtech.com/IT/showdoc.aspx?i=3532&...