
  • Gilbert Osmond - Thursday, August 12, 2010 - link


    Buying an SSD for your notebook or desktop is nice. You get more consistent performance. Applications launch extremely fast... However, at the end of the day, it’s a luxury item. It’s like saying that buying a Ferrari will help you accelerate quicker. That may be true, but it’s not necessary.


    Whether an SSD is a luxury or not depends on the user. I use my desktop (a late 2009 Mac Mini w/5GB RAM + 60GB SF-1200-based SSD) for work. The increased performance from the SSD helps me to get more work done in the same amount of time, which means I can make the same amount of money in less time, which translates to reduced stress. That's not a trivial, or luxury, result.

    For any given action -- say, committing a new database record, opening a 2MB PDF, or doing an unrestricted Spotlight search -- I may only save 1 to 4 seconds compared to my previous 5400RPM 2.5" HDD. But over hundreds of such operations per day, the time adds up. In certain cases the speed-up is even greater, say, when launching large apps, scrolling through thousands of image thumbnails or large PDF docs, or drilling down through deep / wide / densely-populated directory trees.
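
    A quick back-of-the-envelope sketch of that arithmetic (the per-operation savings and operation count below are illustrative assumptions, not measurements):

        # Rough estimate of time saved per day/year by an SSD.
        # All inputs are illustrative assumptions.
        seconds_saved_per_op = 2.5      # midpoint of the 1-4 s range above
        ops_per_day = 300               # "hundreds of such operations per day"
        work_days_per_year = 240

        saved_per_day = seconds_saved_per_op * ops_per_day           # seconds
        saved_per_year = saved_per_day * work_days_per_year / 3600   # hours

        print(f"~{saved_per_day / 60:.0f} min/day, ~{saved_per_year:.0f} hours/year")
        # -> roughly 12 min/day, ~50 hours/year of raw waiting eliminated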

    Then there's the qualitative -- and IMO just as important -- perception of increased speed. My thought processes and decision-making can also be faster now that I can count on the computer to come closer to matching my pace. I don't have to waste as much valuable patience waiting for the computer to catch up.
  • semo - Thursday, August 12, 2010 - link

    Anand was trying to make the point that SSDs make sense for corporate/business use.

    What I wonder is how an SSD can be "enterprise" when it doesn't even support SAS, let alone SC. SSDs are still seen as something new and immature, and such claims do little to help their image in the actual enterprise environment. We can't make use of this SSD in my company because our SAN simply does not support SATA drives.

    If SLC drives are out of the price range for most enthusiasts and not compatible with most enterprise hardware, who are they for? What's happening with the Intel/Hitachi partnership on a SAS interface for SSDs?
  • Voo - Thursday, August 12, 2010 - link

    Shouldn't SAS be backwards compatible with SATA 2? Not my area of expertise, so maybe I'm missing something, but from my point of view SAS doesn't offer anything essential over SATA feature-wise, and since the SATA SSD is way ahead of the SAS HDD, performance also isn't a problem.
  • retnuh - Thursday, August 12, 2010 - link

    SAS is backwards compatible with SATA 2; the advantage of SAS is the SCSI protocol and all its features. Because of that, the controllers are a lot more robust. A short description of the differences is located here:

    http://en.wikipedia.org/wiki/Serial_attached_SCSI#...

    For a normal desktop machine, SATA is a big step forward and fine for everyday use. Server side, SCSI is just way more flexible and scalable.
  • nexox - Monday, August 23, 2010 - link

    """Shouldn't SAS be backwards compatible to SATA 2? Not my area of expertise so maybe I miss something, but from my point of view SAS doesn't offer anything essential over SATA featurewise and since the SATA SSD is way ahead of the SAS HDD the performance also isn't a problem."""

    You /can/ use a SAS HBA or RAID controller with SATA drives, but SAS drives have a few advantages that some people absolutely cannot live without - multi-pathing, where a single drive can connect to multiple controllers, for redundancy in case of controller or host failure, is a good example.
  • retnuh - Thursday, August 12, 2010 - link

    I second this; I bought an OWC SSD for my MBP ~2 months ago. I spend all day in and out of VMs, and there's a night-and-day difference in my ability to bounce around tasks -- mostly programming, starting/switching between various IDEs -- without waiting on the machine. VM performance shot through the roof compared to the 5400rpm drive. This has to be the single most productive purchase I've made. Anyone doing work inside a virtual machine needs an SSD.
  • Romulous - Saturday, August 14, 2010 - link

    I took a different path to get performance, but without sacrificing capacity. For years I’ve been searching for an ‘ultimate’ hard drive configuration. My main workstation has a 12-port 3ware controller with (so far) 3 drives in RAID 5. Now, that does not sound impressive -- that is, until I benchmarked it. My old setup had 11 WD3200YS RAID Edition drives; the new one uses WD2003FYYS drives. The new array can almost keep up with the old one. Very impressive for only 3 drives! Just wait until I add more!
  • Romulous - Saturday, August 14, 2010 - link

    Actually that was 9 WD3200YS not 11.
  • tech6 - Thursday, August 12, 2010 - link

    That is not the point - they are faster but they are expensive and most companies could probably get similar productivity gains by cheaper means - like cutting down on the amount of web surfing by their employees (like you and I are doing now).
  • james.jwb - Friday, August 13, 2010 - link

    Whether a Ferrari is considered a luxury or not also depends on the user/usage...
  • Gilbert Osmond - Friday, August 13, 2010 - link

    Whether a Ferrari is considered a luxury or not also depends on the user/usage...


    A Ferrari is a luxury for 99.99997328471% of Ferrari owners. Aside from German police units on the Autobahn (who would probably use German high-end cars anyhow), is there any necessary or serious use for Ferraris? Not really.
  • jabberwolf - Friday, August 27, 2010 - link

    It does depend on the task, and you are right...
    99% wouldn't use it for anything other than flash.

    But let's say I want to use it for 100-200 virtual desktops on a server, and these VDI sessions eat IOPS.
    Not to mention, I need that IOPS capacity as well as a drive that can erase stale data when the VDI sessions are deprovisioned and the next group needs to log in.

    These drives are a godsend to enterprise users who are looking for exactly this and don't want to go buy 20 SAS drives and a drive array... to get the same IOPS.

    If you don't know what I am talking about - then you're probably not in that 1% market ;)
  • eanazag - Thursday, August 12, 2010 - link

    I am glad to see more enterprise offerings. I hope the pricing is right. The spec numbers look good. I have followed the SSD articles closely. I own 3 Intel X25-M 80GB drives and a 60GB Mushkin Callisto Enhanced Deluxe. I passed on the Crucial 64GB because of its performance drops without TRIM, and the 256GB model really has the enviable specs. I am using these in an ESXi server running training VMs of Windows XP with an MS SQL DB. The performance allows me to skip RAID and really pack in the VMs.

    I want to see more OPAL-spec encryption on SSDs other than Samsung's on the market. The current Samsung drives' only appeal to me is the OPAL-spec encryption. The new Samsung drives may be reasonable. The company I work for could potentially gobble these drive types up over the next few years to replace software encryption on client machines. Support for RAID 5/6 and the like would be nice as well.
  • ///// - Thursday, August 12, 2010 - link

    How come SSDs don't require redundancy? Are you saying they never fail prematurely? Is there any data?
  • Gilbert Osmond - Thursday, August 12, 2010 - link

    For all-you-can-read (and more) about SSD speed, reliability, marketplace analysis, history, etc., I *highly* recommend this site:

    http://www.storagesearch.com

    See this link for a specific discussion on data integrity:
    http://www.storagesearch.com/sandforce-art1.html

    The site is one person's experienced, informed, independent, technically-competent perspective on the past, present, and future of the SSD market. The editor's focus is on the enterprise, but that's because until very recently SSDs have been almost exclusively an enterprise-level product.

    Some of his newer articles address the newly-emerging lower levels of the market, i.e. SOHO / consumer applications such as laptops and small NAS.
  • ///// - Thursday, August 12, 2010 - link

    Seriously? I want data and you give me this?
  • Gilbert Osmond - Friday, August 13, 2010 - link

    Seriously? I'm going to waste my time doing your homework for you? Show some gratitude and use your mouse-button to find your own answers; they're out there on the Intarweb waiting for you.
  • yknott - Thursday, August 12, 2010 - link

    SSDs definitely require redundancy. An SSD is just like any other electronics item: it can and will fail at some point. In the tests that Johan performed above, I'm guessing that the SSDs were in some sort of RAID 10 configuration. The single-SSD configuration was for pure illustration. No one would ever run that kind of configuration in a real production system.

    I'm not sure I would even question if/when SSDs fail. It doesn't matter. If the data you are writing to your SSD is important and uptime is valuable, then redundancy is required.

    On a side note, Anand/Johan, I'd love to find out the specs for the server you ran those tests on. I'm curious to see if the RAID controller was being maxed out at any point. A simple test would be to see how the results scaled as you moved from 4->6->8 drives in your array. I'm willing to bet that you're starting to hit a limit where the controller can't keep up with all of the i/o requests coming through.
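
    One way to sanity-check that would be to compare measured IOPS against ideal linear scaling; a minimal sketch, with placeholder figures rather than real results:

        # Compare measured IOPS against ideal linear scaling as drives
        # are added. The figures below are hypothetical placeholders;
        # substitute the benchmark results for each array size.
        measured = {4: 120_000, 6: 165_000, 8: 180_000}  # drives -> IOPS

        base = min(measured)
        per_drive = measured[base] / base

        for drives, iops in sorted(measured.items()):
            efficiency = iops / (per_drive * drives)
            print(f"{drives} drives: {iops} IOPS, {efficiency:.0%} of linear")
        # Efficiency dropping well below 100% as drives are added points
        # to the controller (or bus) becoming the bottleneck.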
  • jimhsu - Thursday, August 12, 2010 - link

    Read: http://research.microsoft.com/pubs/63596/USENIX-08...
    http://forums.anandtech.com/showthread.php?t=20710...

    Briefly:

    SSDs have more predictable failure modes than hard drives. Read/write errors are directly correlated with block lifetime. Raw error rates in general are at least 10 times lower than for hard drives. Parity varies from industry standard (URE 10^-14) to extreme (e.g. SandForce). Controller failures are not covered.

    I highly recommend searching the internet (http://tinyurl.com/32lzc6s). Plenty of useful info.
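
    To put that URE figure in perspective, a quick expected-value calculation (assuming the 10^-14 rate is per bit read, which is how drive datasheets usually quote it):

        # Expected unrecoverable read errors at a URE rate of 1e-14,
        # assuming the rate is per bit read.
        ure_per_bit = 1e-14
        tb_read = 10                        # illustrative read volume
        bits_read = tb_read * 1e12 * 8

        print(f"{tb_read} TB read -> {ure_per_bit * bits_read:.1f} expected errors")
        # -> 0.8 expected UREs per 10 TB read; a drive rated 1e-15
        # would expect ~0.08 over the same volume.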
  • ///// - Friday, August 13, 2010 - link

    The internet tends to focus on easily measured or predicted failure modes that may not lead to sudden drive failures at all.
    What is known about "controller failures" like the one described here:
    http://forums.anandtech.com/showpost.php?p=2887340...
  • bah12 - Friday, August 13, 2010 - link

    "SSDs have more predictable failure modes than hard drives"

    More predictable <> 0% unpredictable. In other words, even if there is only a very slight risk of unpredictable failure, it is still there, and an unpredictable failure can still occur.

    Also, no one has addressed timing: what good is a "predictable" failure if it is only predicted 30 seconds before it happens?

    When Intel/Micron guarantee 0 failures, and their driver accurately predicts failure and notifies an admin, then I'll buy it. Until then it is just theory. Trust me, if Intel could predict failures early enough with 100% certainty, they would be marketing the hell out of it. Bottom line: if they won't stick their neck out on it, then I won't either.
  • Justin Time - Friday, August 13, 2010 - link

    EVERYTHING in an enterprise system requires redundancy. When down-time costs you $/min you simply can't afford to accept anything less. An SSD may have a predictable life-span, but that doesn't rule out unexpected failure.

    However, the point the article seems to miss about replacing a bunch of 15K drives with a single SSD is that the primary reason for multi-drive RAID is typically capacity as well as redundancy. You don't replace a RAID array of 1TB+ drives with a single SSD of ANY kind.
  • JHoov - Monday, August 30, 2010 - link


    However, the point the article seems to miss about replacing a bunch of 15K drives with a single SSD is that the primary reason for multi-drive RAID is typically capacity as well as redundancy. You don't replace a RAID array of 1TB+ drives with a single SSD of ANY kind.


    The primary reason for a multi-drive RAID is rarely capacity in my experience, and if capacity is your primary reason, SSDs aren't the right path for you. In my experience, redundancy is number 1, with IOPS (performance) a close second. In some cases that is reversed, or redundancy is not a concern, e.g. for temp tables on a database where the data is transient and unimportant but the speed at which it can come in and out of storage is paramount.

    In many cases, I've seen 10 or more (sometimes a lot more) 146GB or 300GB 15K RPM SAS HDDs in an array that have been short-stroked to keep the data on the fastest portion of the disk, thereby increasing throughput. So you've got between 1.4 and 3TB of raw space being used to hold a couple hundred GB of data, with the rest being wasted. In many cases, a pair of good SSDs in RAID 1 would provide the same or better random IOPS performance than those disks, at about the same cost, without needing the external array and everything that goes with it (rack space, power, cooling, etc.)

    I recently looked at a database server that was completely I/O bound, and determined that the traditional hard disk method would have needed ~40 15K RPM drives to satisfy the IOPS requirements. Instead of that, an array of 6 SSDs was specified, and thus far, benchmarking on them looks like it will easily meet the needs, with some headroom for future expansion.
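
    The sizing arithmetic behind that comparison is simple enough to sketch; the per-device IOPS figures below are rough assumptions (a 15K RPM drive manages on the order of 180 random IOPS; SSD figures vary widely by model and workload):

        import math

        # Drives needed to satisfy a random-IOPS requirement. Per-device
        # figures are ballpark assumptions; capacity, redundancy, and
        # RAID write penalties push the real counts higher.
        required_iops = 7_000   # hypothetical workload requirement
        hdd_iops = 180          # per 15K RPM SAS drive, rough
        ssd_iops = 7_000        # per SSD, rough

        print(math.ceil(required_iops / hdd_iops), "x 15K HDDs")  # ~39
        print(math.ceil(required_iops / ssd_iops), "x SSDs")      # 1, before headroom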

    On your second point, of course you would never replace an array of 1TB+ HDDs with SSD (one or many) unless you've got a printing press churning out money in the back. But then, you shouldn't be using an array of 1TB+ disks if performance is your concern either, seeing as all of the 1TB+ disks I've seen have been SATA (or 'midline' SAS), and <=7200 RPM. SSDs can, do, and will continue to serve a definite purpose in applications where performance is the absolute highest priority, such as the OLTP example given in the original article.
  • cdillon - Thursday, August 12, 2010 - link

    While mechanical storage requires redundancy in case of a failed disk, SSDs don’t. As long as you’ve properly matched your controller, NAND and ultimately your capacity to your workload, an SSD should fail predictably.


    No "enterprise" SSD will fail unpredictably, ever? I want to be able to keep my data integrity and uptime high by being able to swap user-replaceable components out as soon as they fail or are predicted to fail, and some kind of N+1 (or better) redundancy of your data-storage modules is required to do that. Whether they are HDDs or SSDs or holographic cubes or whatever is irrelevant.
  • andersx - Thursday, August 12, 2010 - link

    SSDs are put to very good use as scratch disks if you are in that sweet spot where the data you're fetching doesn't fit in memory but isn't large enough to fill several TBs of disk space. If you can get by with as little as perhaps 3-4 SSDs in your RAID array, you can potentially save a lot of time, depending on your program. But it's a cost/benefit trade-off: you can get more storage for the same price by buying 300GB 10K or 15K RPM disks.

    Our servers run 5 300GB 15K RPM disks in RAID 0 (no redundancy) for swapping only. If we could fit our data onto SSDs, execution times would probably improve a fair bit, but then again we couldn't afford to buy as many nodes, since SSDs are way more costly.

    It's hard to generalize "enterprise" or "servers" as a whole.
  • mapesdhs - Thursday, August 12, 2010 - link


    I would love to use SSDs as scratch storage for movie processing, and a good one for a system disk, but the cost & lowish capacity still put me off. Though obviously nothing like as fast, it was a lot cheaper to buy a couple of used 450GB 15K SAS drives, for which I get 187 to 294MB/sec sequential read (HDTach). I haven't tested random read/write yet; should be interesting.

    There's also the problem of XP vs. Win7, and which controllers perform better when TRIM isn't available, e.g. SandForce vs. whatever the Crucial RealSSD C300 uses (forget offhand, Micron?).

    IMO the technology still needs to mature a bit.

    Ian.
  • Out of Box Experience - Thursday, August 12, 2010 - link

    Re: "There's also the problem of XP vs. Win7, and which controllers perform better when TRIM isn't available, e.g. SandForce vs. whatever the Crucial RealSSD C300 uses (forget offhand, Micron?)."
    -----------------------
    TRIM is not available on SandForce-controlled SSDs only when using XP.
    TRIM is available on SandForce-controlled SSDs when using Windows 7.

    Here is how a 40GB OCZ Vertex 2 performs on a lowly dual-core Atom 510-series computer in the WORST-CASE scenario:

    Boot time from the Windows logo to a functioning desktop is 7 seconds after installing XP SP2 without ANY drivers or software installed (just XP itself).

    Boot time with all drivers and AVG Antivirus is about 14 seconds

    The difference in boot times is due to the software you have installed and NOT an SSD-related issue (this should be covered when explaining SSD performance).

    Read performance in a worst-case scenario (without TRIM) using XP SP2 on a lowly Atom begins at 120 MB/sec and quickly levels off at about 240 MB/sec after just a few seconds (HD Tach).

    Encrypted partition read speed is HORRIBLE!
    Read speed of a DriveCrypt v4 partition on a Vertex 2 was just over 14 MB/sec in XP.

    I think this may be due to DriveCrypt, however, and not the SSD.
    Further testing is needed.

    I chose worst-case scenarios for this drive to get a feel for real-world performance without TRIM.

    I did notice that Atom computers feel like completely different animals when using a fast SSD compared to a hard drive, and the performance gain feels much greater on an Atom than when starting off with a really fast computer.

    NOTE: Other encryption software or a newer version of DriveCrypt might yield better results, but the real performance gains on a SandForce controller come from hardware compression, which might not work with encrypted drives!

    Hope that helps
  • retnuh - Friday, August 13, 2010 - link

    Ahhh... SandForce drives support hardware 128-bit AES encryption; why not use that instead of DriveCrypt?
  • Out of Box Experience - Friday, August 13, 2010 - link

    Thanks for the info..

    I was not aware that it had hardware encryption until now.

    http://www.ocztechnologyforum.com/forum/showthread...

    Guess I need to run a few more tests today!
  • Out of Box Experience - Friday, August 13, 2010 - link

    Nope!

    Guess not

    The OCZ Toolbox is still unavailable due to bugs

    Can't wait to test the hardware encryption
  • fotoguy42 - Saturday, August 14, 2010 - link

    Note that all of the read-speed tests will not really (at all?) be affected by whether TRIM is supported.
    TRIM lets the drive erase sections that have been written to and then marked as available by the filesystem. Without TRIM, the SSD has to do that cleanup at the same time it is trying to write, rather than doing it ahead of time.
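
    A toy model of that mechanism (numbers are illustrative, not from any real drive): before erasing a block the drive must relocate every page it still believes holds live data, and TRIM is what tells it that deleted-file pages aren't live:

        # Pages the garbage collector must copy before erasing a block,
        # with and without TRIM. Illustrative numbers only.
        PAGES_PER_BLOCK = 128
        overwritten = 0.25     # pages invalidated by normal overwrites
        fs_deleted = 0.50      # pages holding files the OS has deleted

        copy_no_trim = PAGES_PER_BLOCK * (1 - overwritten)
        copy_trim = PAGES_PER_BLOCK * (1 - overwritten - fs_deleted)

        print(f"pages copied per erase without TRIM: {copy_no_trim:.0f}")
        print(f"pages copied per erase with TRIM:    {copy_trim:.0f}")
        # The extra copying sits in the write path (garbage collection),
        # which is why read speeds are largely unaffected.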
  • vol7ron - Thursday, August 12, 2010 - link

    How is it, many years after the fact, that Intel is still one of (if not "the") best SSD providers?

    I don't get why Intel's stock is so low, even though they've had their legal issues. Google's stock goes up or down by Intel's entire share price on a daily basis.

    Intel did the SSD right, straight out of the gate. They may have had some glitches, and the drives may have lacked some functionality (like TRIM) or the ability to update the firmware, but their SSDs didn't stutter and they are still top performers in their class.

    I am still waiting for the price to drop.

    As stated before, my ideal market prices:
    256GB SSD at ~$175
    5TB HDD at ~$175
  • andersx - Friday, August 13, 2010 - link

    Wait ten years, then add a zero to those capacities.
  • bok-bok - Thursday, August 12, 2010 - link

    Since the dawn of time (or maybe only the Tandy 2000), we have awaited the day when the storage subsystem would cease to be a bottleneck, ruin our marriages, and devour our children.

    And once I sat down at a core i5 laptop with an Intel SSD-based RAID 0 array, I felt like that moment had arrived.

    I had debated what I was going to devote upgrade money to in my own laptop, and I was ready to go for more RAM. Then I experienced the performance boost that comes with the price tag, and it stopped just being a benchmark number on a website.

    And now I feel like I was in a state of denial driven by a penny-wise mentality. I mean, we've always known it was the damn disk. Ever since I was a kid playing Hero's Quest on my dad's Tandy, and the game action would halt as the 20MB hard drive started chugging and chirping, loading the next epic encounter, my life has been affected by mechanical storage.

    While I appreciate andersx's example, where the capacity needs and related price tag don't scale well, I believe there are countless business/enterprise cases where the expenditure is beyond justified.

    I mean, what's your time worth?

    Consider your daily business processes; to what extent are they hindered by the various frictions you encounter while interfacing with technology? How do you qualify the subconscious effect of all that friction on our well-being? And it does have an impact.

    Now imagine an office environment where like many organizations, the staff is pushing older workstations - or even newer ones with certain damnably slow antivirus/endpoint protection applications constantly doing read operations on mechanical spindles. I like my A/V app, but I'm still in-f$%*ing-furiated when the startup scan causes my machine to bring time to a halt. Whatever possibly valuable idea I had that motivated me to open my laptop just got away from me when the wave of frustration swept over me. Multiply that by everyone involved in a business process who endures a similar phenomenon.

    And of course there's the quantifiable loss of time (your most precious asset, and the thing that, when you run out of it, you die).

    But there's more. It's not a contest against your own price/performance curve; business is a constant competition. Now, I'm not suggesting that the lack of an SSD is going to topple your empire. But if an organization misses a high-dollar opportunity due to an accumulated competitive disadvantage, then the "will this save me enough time to justify the cost?" question is irrelevant. The fastest horse wins. If you can afford to deploy SSDs, then you almost can't afford not to.

    I mean, I don't know... I found the performance gain to be dramatic enough to justify outfitting every machine in the office with a 40GB SSD. Even the 4+ year old ones, even the D310s. I haven't tested the theory yet, but I'm guessing that the oldest computers will probably see the greatest benefit. Provided the caps on the motherboards don't crap out first, that is.
    But if they do, just pull the drives out.

    After all that it will be the Exchange Server's turn.

    It's hard to qualify the psychological impact of friction in our computing environment, but consider how long we've been enduring the storage bottleneck.
  • Iketh - Thursday, August 12, 2010 - link

    BRAVO!!!!
  • LokutusofBorg - Saturday, August 14, 2010 - link

    I got my work to agree to a 120GB SSD in my dev workstation, and it has proved itself. So I've been pushing really hard lately to start testing PCIe SSDs for our DB servers. A single Fusion-io drive delivers the performance of a six-figure SAN (160 drives). I've been reading lots on this lately, but this article is the one that really got me thinking:

    http://www.brentozar.com/archive/2010/03/fusion-io...
  • Gilbert Osmond - Friday, August 13, 2010 - link

    Yes, this year IS the corner around which the storage market is turning. Peak HDD (a la Peak Oil) is just about now or has already passed.

    The dude over at http://www.storagesearch.com has had the mass-storage market figured out for at least 5 years. Articles he wrote 3 to 6 years ago predicted that SSDs would start to penetrate the mass market around now. His predictions for the next 5 years are probably just as accurate. No, I'm not a paid shill for his site; I've just read most of what's on it, and it's clearly correct.

    The widespread adoption of SSDs is underway; spinning HDDs are likely to be extinct within 5 to 7 years, as they themselves made floppy disks extinct in their time. SSD capacities will rise and prices per GB / per TB will only get lower. It's about time.
  • Necrosaro420 - Friday, August 13, 2010 - link

    $600 for 256GB... I'll stick to platters.
  • Marakai - Saturday, August 14, 2010 - link

    I've now been searching for info on this for a while, but can't seem to come up with an answer:

    Why is the lack of TRIM not considered an issue for enterprise usage of SSDs?

    If the business case outweighs the cost, I'd build an array with RAID10, possibly RAID50.

    I lose TRIM. As Anand has demonstrated, that leads to big performance drops over time in consumer usage. Why does this not seem to be an issue in enterprise usage -- going *purely* by the lack of information on the Intertubes addressing it?
  • Marakai - Saturday, August 14, 2010 - link

    Sorry for two comments back to back, but as I'm seriously looking at a pilot project using enterprise-level SSDs, these two issues are at the top of my mind:

    The second one, besides RAID and TRIM, is whether we are simply shifting the performance bottleneck when moving to enterprise SSDs in storage arrays.

    The storage array will most likely be Fibre Channel or, increasingly, trunked gigabit or 10-gigabit Ethernet.

    When will that become the bottleneck where the SSD array could deliver the data, but the pipe to the host is too slow? Are we going to go back to DAS? Will Infiniband have its day?
  • bok-bok - Tuesday, August 17, 2010 - link

    Depending on the application, the whole network could be virtual, with all the data passing around amongst the "SSPindles" - or WTF-ever they'll be called - on a DAS array. Then FTT-enduser would just deliver a streamed screen. Whole-network redundancy would still only involve transmitting the delta to the network mirror(s). Obviously this is an oversimplification, but it's not like it's out of reach.

    If XenDesktop eats some more power pellets, Google rolls out enough experimental gigabit networks, and SSD prices get more sane (all three are somewhat safe bets), then I don't see why it couldn't be a viable model. Especially when you consider what kind of processor core density we're starting to see. There are few computing applications that would be effectively hindered by the latency inherent in light speed (plus reasonable overhead), even across a distant WAN link.
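
    The propagation-delay arithmetic behind that claim, assuming light travels at roughly 2/3 c in fiber (about 200 km per millisecond); distances are illustrative:

        # Round-trip propagation delay over fiber at ~200 km/ms.
        KM_PER_MS = 200

        for label, km in [("metro", 50), ("regional", 1_000),
                          ("cross-country", 4_000)]:
            print(f"{label} ({km} km): ~{2 * km / KM_PER_MS:.1f} ms RTT")
        # Even 4,000 km is only ~40 ms round trip before switching and
        # queuing overhead, tolerable for most interactive use.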

    I think we're all probably subconsciously freaking out at the prospect of becoming, ourselves, the bottleneck. Personally, I'm giddy at the notion of technology not evoking ambivalence from a productivity/performance/empowerment point of view. Like, what am I gonna do when I can't use the term "f@$#ing computer!" anymore?

    Industry might be freaking out at: "What can we sell the end user when even the end user knows that the only performance bottleneck is the end user?"

    Are they just gonna virtualize US?

    XEndUser?

    But in all seriousness, I very much look forward to trading up to a different set of problems.
  • Marakai - Thursday, August 19, 2010 - link

    You made my day at the thought of running myself through P2V. :-D

    Well, I'm going to look into scraping some funds together from my various project budgets and see if I can't build a nice little "prototype" enterprise SSD array and see how fast my team can access pr0n^H^H^H^H inter-departmental production data. ;)
  • bok-bok - Monday, August 30, 2010 - link

    I used to just take a filing clerk, image them, and then ghostcast them to all the filing cabinets, which did save on labor.

    But then it hit me - I could just P2V the whole office staff and then fire /everyone/ - saved me a bundle! ;-)

    Wait a minute. Hmmm...
  • ericgl21 - Sunday, August 15, 2010 - link

    There's a company called Pliant that manufactures 2.5" & 3.5" SSDs for the enterprise; they claim at least 120K IOPS for the 2.5" drive and 160K IOPS for the 3.5" drive. Here's a link to their website: www.plianttechnology.com

    There's another well-respected company that manufactures SSDs for the enterprise called STEC, and they also claim SSDs with very high performance. Their website is: www.stec-inc.com

    I highly recommend that Anand evaluate & test SSD products from both companies, as they might put the Intel X25-E and Micron P300 to shame.
  • JHoov - Monday, August 30, 2010 - link

    I'm not sure about Pliant, but STEC's drives aren't anywhere near $10/GB. And if my 60GB SSD gives me 60K IOPS, then that's $10/1000 IOPS. Again, the STEC might have claimed 120K+ IOPS, but you'd have to add a zero to the price tag, and then double it afterwards.

    This is based on 3 month old pricing for STEC SSDs for HDS/HP XP Tier 1 SAN Arrays, wherein I was told that 146GB SSDs run in the $11-14K range, EACH.

    These are also Fibre Channel-attached drives made to go into a SAN, vs. SATA-attached drives made for DAS, but if I have a requirement for IOPS in the tens-to-hundreds-of-thousands range and I can do it for 1/20th the price, I'm OK with DAS.
  • ArntK - Thursday, December 9, 2010 - link

    I just want to second this and hope Anand will start to look closely into these types of drives.
  • zadx - Wednesday, November 2, 2011 - link

    I am in the same boat as ArntK and ericgl21, and would love to see how STEC SSDs stand against the others.
  • stepz - Monday, August 16, 2010 - link

    Is the write cache supercapacitor-backed? That is basically a hard requirement for using it for table storage on OLTP loads. Most consumer drives will happily lose data in the write cache when the power is cut. For databases that is completely unacceptable; losing any writes will pretty much always result in a corrupt database. And when you turn off write caching, random small-write performance goes through the floor. Maybe you could test the write-cache behavior with something like this: http://brad.livejournal.com/2116715.html?page=2
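
    A minimal sketch of such a probe (the mount point is hypothetical, and a definitive test still requires cutting power mid-run and verifying the data afterwards, as the linked tool does): suspiciously high fsync rates on a SATA drive imply writes are being acknowledged from volatile cache.

        import os, time

        # Measure fsync'd 4 KB write rates as a crude write-cache probe.
        PATH = "/mnt/ssd/fsync_probe.bin"   # hypothetical mount point
        WRITES = 500

        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
        start = time.time()
        for i in range(WRITES):
            os.pwrite(fd, b"x" * 4096, i * 4096)
            os.fsync(fd)            # ask for the write to hit stable media
        os.close(fd)

        print(f"{WRITES / (time.time() - start):.0f} synchronous writes/sec")
        # Thousands per second from a single SATA drive means the drive is
        # acknowledging from cache; without backup power those can be lost.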

    Some other points about SSDs for database use:
    * If the SSD has a super-cap, it can be used as a cheap way to get 4-digit transaction rates for small to medium-sized databases without getting a RAID card with a battery-backed write cache. A RAM-based cache with backup power is necessary to turn synchronous IOPS into sequential writes.
    * I doubt anyone who cares about the data will run the log as RAID-0. RAID-1 is more likely, with a BBWC even RAID-5 is acceptable.
    * For random i/o limited loads (quite common for databases), this type of drive will beat spinning rust by more than a factor of 10 in IOPS per $.
  • bok-bok - Tuesday, August 17, 2010 - link

    well said!
