Promise VTrak J300s
by Jason Clark & Dave Muysson on February 2, 2007 8:00 AM EST - Posted in IT Computing
Test configuration
Testing of the VTrak was performed using build 2004.07.30 of the open-source benchmark IOMeter, since it offers a great deal of flexibility when testing storage subsystems. The operating system used was Windows Server 2003 R2, since it supports GPT disks, which are necessary when you want to use more than 2TB in a single volume.
We ran a variety of tests that scale the random versus sequential access mix, as well as the disk queue length (DQL), from low to high. This lets us see how each interface performs across the range from high throughput to high latency, and from light to heavy disk loads.
For those of you unfamiliar with Disk Queue Length, it is a performance counter that indicates the number of outstanding disk requests, plus the requests currently being serviced, for a particular disk. Microsoft recommends a value of no more than 2 per physical drive, though some argue this can be as high as 3 per drive.
The Disk Queue Length counter is commonly used to determine how busy a particular drive is, such as the one hosting a database. If the DQL for a database array averages 2 or more per drive, it is a good indication that disk I/O is becoming a bottleneck and that upgrading your disk subsystem could improve performance. Alternatively, if it averages less than 2 per drive, upgrading CPU or memory may be more beneficial, since the disk subsystem is able to keep up with the current workload.
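To put that rule of thumb into practice, the short sketch below (our own illustration, not part of the actual test setup) converts an array-level Disk Queue Length reading into a per-drive figure and applies the 2-per-drive guideline:

# Minimal sketch (illustration only): apply the "no more than 2 per drive" Disk
# Queue Length guideline to an array-level reading.

def classify_dql(avg_disk_queue_length: float, drive_count: int) -> str:
    """Interpret an average Disk Queue Length for an array of drive_count disks."""
    per_drive = avg_disk_queue_length / drive_count
    if per_drive >= 2.0:
        return (f"{per_drive:.2f} per drive: disk I/O is likely the bottleneck; "
                "upgrading the disk subsystem could help")
    return (f"{per_drive:.2f} per drive: the disks are keeping up; "
            "CPU or memory upgrades may be more beneficial")

# Example: a 12-drive array averaging a queue length of 30 vs. 10
print(classify_dql(30, 12))   # 2.50 per drive -> disk-bound
print(classify_dql(10, 12))   # 0.83 per drive -> disks keeping up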
Using the LSI 8480E / ServeRAID 6M controller, we created a large RAID 10 array using all 12 disks. The operating system was hosted by a separate controller and pair of disks so that it would not affect the results. The array was then formatted as a single NTFS volume spanning its full capacity, with a 64K allocation unit size.
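As an aside on why GPT support matters here, a quick back-of-the-envelope sketch (using the drive capacities listed below under Test hardware, and decimal terabytes as drive vendors quote them) shows which 12-drive RAID 10 configurations exceed the 2TB mark that makes GPT necessary:

# Sketch: usable capacity of a 12-drive RAID 10 array (mirroring halves the raw
# capacity) and whether the resulting volume exceeds 2TB, requiring GPT.
# Capacities are decimal GB/TB, as drive vendors rate them.

def raid10_usable_tb(drive_count: int, drive_size_gb: int) -> float:
    return drive_count * drive_size_gb / 2 / 1000

for name, size_gb in [("Seagate NL35.1 250GB", 250),
                      ("Western Digital WD5000YS 500GB", 500),
                      ("Fujitsu MAX3147RC 146GB", 146)]:
    usable = raid10_usable_tb(12, size_gb)
    note = " (needs GPT)" if usable > 2 else ""
    print(f"{name}: {usable:.2f} TB usable{note}")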
For testing purposes, we started with a DQL of 2 (which works out to 0.167 per drive) and incremented it by two until we reached 36 (3 per drive). We wanted to see how each interface would scale from light to heavy workloads. We did not test above a DQL of 3 per drive, since most experts advise against running a storage system at that level for an extended period of time.
Since storage can be accessed in countless ways, we ran tests designed to give a good indication of performance for almost any scenario. For example, we ran tests at 100% sequential for cases where you need to stream large amounts of sequential data off the drives, and at 100% random for applications with highly random access patterns. We also measured mixed random/sequential accesses at key points in between to better understand how much random access impacts a sequential stream.
Lastly, we used a 64K access size in IOMeter, a 64K NTFS allocation unit, and a 64K RAID stripe size. This gives the best possible performance for all drives/interfaces, and it is also representative of real workloads, since most databases read and write data in 64K chunks.
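Taken together, the parameters above define a straightforward test matrix. The sketch below is illustrative only (the actual runs were configured within IOMeter itself), and the specific random/sequential mix points are our assumption, since the text above only says that mixes were measured at key points:

# Illustrative sketch of the parameter sweep described above; the real tests were
# configured in IOMeter. The random/sequential mix points are assumed for
# illustration -- the text only states that mixes were measured at key points.

DRIVES = 12
REQUEST_SIZE_KB = 64                   # matches the NTFS allocation unit and RAID stripe size
QUEUE_DEPTHS = range(2, 37, 2)         # 2..36 outstanding I/Os (0.167..3 per drive)
RANDOM_MIX_PCT = [0, 25, 50, 75, 100]  # assumed mix points (0 = fully sequential)

test_matrix = [
    {"queue_depth": qd,
     "dql_per_drive": round(qd / DRIVES, 3),
     "random_pct": rnd,
     "request_kb": REQUEST_SIZE_KB}
    for qd in QUEUE_DEPTHS
    for rnd in RANDOM_MIX_PCT
]

print(len(test_matrix), "combinations, e.g.", test_matrix[0])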
Test hardware
1 x Promise VTrak J300s with single I/O module
1 x LSI Logic 8480E PCI-E SAS HBA
12 x Seagate NL35.1 250GB SATA I Drives
12 x Western Digital 500GB WD5000YS SATA II Drives
12 x Fujitsu MAX3147RC 146GB 15K SAS Drives
1 x IBM EXP400 Chassis
1 x IBM ServeRAID 6M PCI-X SCSI HBA
12 x IBM 146GB 10K SCSI Drives
We'd like to thank Jennifer Juwono and Billy Harrison from Promise, David Nguyen from Western Digital, and Seagate, Fujitsu, and LSI Logic for providing the hardware used in this article.
Comments
LordConrad - Sunday, February 4, 2007
They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.
dropadrop - Tuesday, February 6, 2007
Yeah, I've never seen a commercial product offered with Raptors. SATA always seems to come with 500GB 7200rpm drives. I guess the logic is that people will only go with SATA to get 'cheap' space, and the price/capacity ratio would fall quite drastically as soon as you move to Raptors, negating the advantage.
bob4432 - Saturday, February 3, 2007
how can you compare older 10k scsi with brand new fujitsu max 15k sas? you do know that they make a u320 version of the max drive? or the industry leader atm - the seagate 15k.5 (which i currently own and have both a str and burst of 96MB/s on a single channel u160 card due to 32bit pci limitations...)? why would you compare apples to oranges when you could compare apples to apples? why not add some 5400rpm hdds to the mix too???
JarredWalton - Saturday, February 3, 2007
Sometimes you have to test with what you have available. Obviously, the SCSI setup is going to perform better with a 15K spindle, and we mention this numerous times in various ways. However, the realizable throughput is not going to come anywhere near SAS. The sequential tests show maximum throughput, and while having a SCSI setup with two connections rather than one would improve throughput, SCSI's parallel design is becoming outdated. It can still hold its own for now, but most drive companies are putting more effort into higher capacity, higher performance SAS models now.
shady28 - Sunday, February 4, 2007
I agree; your approach to SCSI is tabloid-like. You are looking at a JBOD array on a single SCSI channel using obsolete 3-year-old drives. Moreover, I have yet to see a production SCSI system utilize only one SCSI channel. A setup like that is the mark of a newbie, and a dangerous one if handling critical data.
There is a huge difference in the performance of new 15K SCSI drives and the old 10K drives. Check storagereview.com and look at their IOPS readings - a critical measure for databases and OLTP applications. The top 2 ranked drives are SCSI; you don't even see SATA until you get down to the Raptor, a drive with an IOPS rating more than 1/3 lower than the top-rated Atlas 15K II 147GB. Even the SCSI JBOD array you used was pulled from the market some 7 months ago.
If that doesn't convince you of how silly your SCSI approach is, consider this:
A single Seagate Cheetah 15k.5 U320 drive has a sequential transfer rate better than your entire array of 14 10k rpm SCSI drives. I have seen two drives on the even older U160 interface do better in sequential reads than your array.
None of this is really a good way to benchmark arrays. A much better and more informative method would be to utilize benchmarks with Oracle and MS-SQL server under Linux and Windows with various disk configurations.
yyrkoon - Sunday, February 4, 2007
Guys, you completely missed the whole point of WHY they used those drives in the comparison. They already had those drives, so that's what they used. In other words, they couldn't afford whatever the latest greatest SCSI drive costs x14 (and to be honest, why even bother buying SCSI drives when you already have a goodly amount of SAS drives?).
Some of you guys, I really don't know what to think about you. You seem to think that reviewers have endless amounts of cash to drop on stuff they don't need and would most likely never use, because they already have something better. Whether you accept it or not, SAS is far superior to SCSI and has a very visible road map, compared to SCSI's 'shaky' and uncertain future. Yes, SCSI has proven itself many times in the past, and for a long time was the fastest option without using solid state, but now a NEW technology, BASED on SCSI and SATA, has emerged, and I personally think SCSI's days are drawing to an end. Who knows though, maybe I'm wrong, and it wouldn't be the first time either . . .
JarredWalton - Monday, February 5, 2007
I can't say that we purchase most of the hardware that we review, simply because it would be too expensive. In this case, however, why would a manufacturer want to send us SCSI hard drives when they already know SAS is going to be faster in many instances? Basically, SCSI and SAS 15K RPM drives cost about the same amount, but either the enclosures cost more for SCSI (in order to get multiple SCSI channels) or else they offer lower total performance (throughput). In random access tests, where seek times take precedence over throughput, SAS and SCSI are going to perform about the same. With most storage arrays being used for a variety of purposes, however, why would you want a SCSI setup that offers equally good performance in a few areas but lower performance in others?
At this point, the only major reason to purchase SCSI hard drives is existing infrastructure. For companies that have a lot of high-end SCSI equipment, it would probably make more sense to upgrade the hard drives rather than purchasing Serial Attached SCSI enclosures and hard drives, at least in the short term. The long-term prospects definitely favor SAS over SCSI, however -- at least in my book.
yyrkoon - Monday, February 5, 2007
Oh, hey Jarred, whilst you guys are still paying attention to this thread, something I personally would like to see is minimum hardware requirements for certain storage 'protocols'. I don't suppose you guys plan on doing something like this?
Let me clarify a little. Lately I've been doing a LOT of experimentation with Linux / Windows file / block level storage. This includes AoE, iSCSI, CIFS, NFS, and FTP. Between two of my latest systems, I seem to be limited to around ~30MB/s (megabytes/second). The hardware I'm using isn't server grade, but it isn't shabby either, so I'm a bit confused as to what is going on. Anyhow, the network is p2p GbE, and I've used multiple different drive configurations (including a 4x RAID0 array capable of 210MB/s reads). My primary goal is a very reliable storage server, with as much speed as possible as a secondary goal. I don't think I was expecting too much in thinking that ~30MB/s is too slow (I was hoping for ~80-100MB/s, but would settle for ~50-60MB/s).
Anyhow, some food for thought?
JarredWalton - Monday, February 5, 2007
I actually don't do too much with high-end storage. I've had transfer rates between systems of about 50 MB/s, which is close to my HDD's maximum, but as soon as there's some fragmentation it drops pretty quickly when doing network transfers. 20-30 MBps seems typical. I don't know how the OS, NIC, switch, etc. will impact things - I would assume all can have an impact, depending on the hardware and situation. Motherboard and CPU could also impact things.
Best theoretical performance on GbE tends to be around 900-920 Mbps, but I've seen quite a few NICs that will top out at around 500-600 Mbps. That also creates a CPU load of 20-50% depending on CPU. Depending on your hardware, you might actually be hitting a bottleneck somewhere that caps you at ~30 MBps, but I wouldn't know much about the cause without knowing a lot more about the hardware and doing lots of testing. :|
Maybe Jason or Dave can respond - you might try emailing them, though.
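For readers following the numbers in this exchange, here is a quick sketch of the arithmetic (ours, using only the figures quoted in the comments above):

# Quick arithmetic on the GbE figures discussed above (sketch only).

def mbps_to_MB_per_s(megabits_per_second: float) -> float:
    return megabits_per_second / 8

for label, mbps in [("GbE line rate", 1000),
                    ("good NIC in practice (~900-920 Mbps)", 910),
                    ("weaker NIC (~500-600 Mbps)", 550)]:
    print(f"{label}: ~{mbps_to_MB_per_s(mbps):.0f} MB/s")

# A sustained 30 MB/s transfer only uses about 240 Mbps of the link, so a disk,
# protocol, or CPU bottleneck is more likely than the network itself.
print(f"observed 30 MB/s = {30 * 8} Mbps on the wire")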
yyrkoon - Monday, February 5, 2007
I understand that you guys do not buy most of your hardware, well, the hardware that you review, but that's part of my point. I'm assuming Promise either 1) gave you the SAS enclosure for the review, or 2) 'lent' you the system for review. Either way, in my book, it doesn't really matter. Anyhow, Promise sent you guys hardware, you reviewed it, and compared it to whatever else you had on hand (no?).