For months SandForce has been telling me that the market is really going to get exciting once its next-generation controller is ready. I didn’t really believe it, simply because that’s what every company tells me. But in this case, at least based on what SandForce showed me, I probably should have.

What we have today are the official specs of the second-generation SandForce SSDs, the SF-2000 series. Drives will be sampling to enterprise customers in the coming weeks, but we probably won’t see shipping hardware until Q1 2011 if everything goes according to plan. And the specs are astounding:

We'll get to the how in a moment, but let's start at the basics. The overall architecture of the SF-2000 remains unchanged from what we have today with the SF-1200/SF-1500 controllers.

SandForce’s controller gets around the inherent problems with writing to NAND by simply writing less. Using real-time compression and data deduplication algorithms, the SF controllers store a representation of your data and not the actual data itself. The reduced data stored on the drive is also encrypted and stored redundantly across the NAND to guard against data loss from page-level or block-level failures. Both of these features are made possible by the fact that there’s simply less data to manage.
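SandForce’s actual algorithms are proprietary, but the basic idea of shrinking host writes before they hit NAND can be illustrated with a toy sketch. This is purely illustrative: it compresses each logical page with zlib and deduplicates identical pages by hash, neither of which is SandForce’s real implementation.

```python
import hashlib
import zlib

def bytes_to_nand(pages):
    """Toy model: compress each logical page, dedupe identical pages,
    and count the bytes that would actually reach NAND."""
    seen = set()      # fingerprints of pages already stored
    written = 0
    for page in pages:
        digest = hashlib.sha256(page).digest()
        if digest in seen:
            continue  # deduplicated: store only a reference, no new write
        seen.add(digest)
        written += len(zlib.compress(page))
    return written

# 8 pages of 4KB each; two are identical and all are highly compressible
pages = [b"A" * 4096] * 2 + [bytes([i]) * 4096 for i in range(6)]
logical = sum(len(p) for p in pages)
physical = bytes_to_nand(pages)
print(f"host wrote {logical} bytes, NAND sees {physical} bytes")
```

With repetitive data like this the physical write shrinks dramatically; with already-compressed or random data it would not, which is the trade-off the comments below argue about.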

Another side effect of SandForce’s write-less policy is that there’s no need for external DRAM to hold large mapping tables. This reduces the total BOM cost of the SSD and allows SandForce to charge a premium for its controllers.

These are the basics and, as I mentioned above, they haven’t changed. The new SF-2000 controller is faster, but the fundamental algorithms remain the same. The three areas that have been improved, however, are the NAND interface, the on-chip memories, and the encryption engine.

NAND Support: Everything


Comments

  • jwilliams4200 - Thursday, October 7, 2010 - link

    It is the Sandforce marketing department that is impressive. They have a lot of people drinking their Kool-aid. But Sandforce's actual technology does not live up to their hype.
  • therealnickdanger - Thursday, October 7, 2010 - link

    It doesn't?
  • jwilliams4200 - Thursday, October 7, 2010 - link

    Note that the Sandforce drives got beat by the C300 and the X25-E on the benchmark you cited. Neither of those SSDs claims a write speed as high as 275 MB/s as Sandforce does.

    Also check out these benchmarks of copying real data files:

    The Sandforce drives do not even achieve 50% of their claimed write speed when faced with copying realistic data files. With real files, their write speeds are about 130 MB/s on a fresh SSD, and drop to about 83 MB/s on a well-used SSD.

    This from a company that claims 275 MB/s write speeds. Sandforce is good at hype, not so much at delivering what they claim.
  • therealnickdanger - Friday, October 8, 2010 - link

    Seriously, how much of your time do you actually spend copying that many files to other drives?

    Those examples are pretty selective, and it's hardly fair to pit SLC against MLC. Special-use scenarios are all fine and good, but for the typical user, the current SF MLC drives beat Intel MLC in typical multi-tasking real-world scenarios (AT's benchmark, Vantage).

    According to AT's reviews of SF-based drives, they all bounce back to original speeds after TRIM... with "real" files. Intel degrades over time as well and is then restored after TRIM. It's the nature of the beast.

    The evidence points strongly to SF beating out Intel overall by a substantial margin in real-world and synthetic tests, with Intel only winning in a handful of non-typical scenarios. I think you're just seeing what you want to see.
  • jwilliams4200 - Friday, October 8, 2010 - link

    Copying files is a basic benchmark which gives an indication of how all other reads and writes will go. If a drive performs at less than half its claimed specification when copying files, you can be sure that it will perform similarly poorly on other tests.

    Yes, Anand's tests missed the Sandforce problem of performance degradation that cannot be recovered through TRIM; I'm not sure what your point is. Surely no one thinks Anand is perfect. The problem is real, and has been observed by bit-tech and by computerbase. I have also spoken with several people who have seen the problem themselves.

    And the evidence is that Intel matches or beats Sandforce on most real world tests, when you are looking at a well-used drive. Sandforce's used performance degradation is really bad when you are writing data that its controller cannot compress.
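A file-copy throughput test of the kind the commenters are arguing over takes only a few lines. This is an illustrative Python sketch, not the benchmark anyone linked; the fsync call matters, because without it the OS page cache, not the drive, would bound the measured speed.

```python
import os
import shutil
import tempfile
import time

def copy_throughput_mb_s(src, dst):
    """Time a plain file copy and report MB/s, flushing to disk so the
    drive, not the page cache, dominates the result."""
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=1 << 20)  # 1 MiB chunks
        fout.flush()
        os.fsync(fout.fileno())  # force buffered data out to the device
    elapsed = time.perf_counter() - start
    return os.path.getsize(src) / elapsed / 1e6

# Random data is the worst case for a compressing controller: it cannot
# be shrunk before being written to NAND.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "src.bin")
with open(src, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of incompressible data
print(f"{copy_throughput_mb_s(src, os.path.join(tmp, 'copy.bin')):.0f} MB/s")
```

Swapping the random payload for zero-filled or text data would show the compressible-vs-incompressible gap the thread is about.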
  • 'nar - Sunday, October 10, 2010 - link

    Famously simple answer:

    "You're holding it wrong."

    Copying files is not necessarily representative of normal workloads; you need a course in deductive reasoning. You cannot assume that large, contiguous, compressed files copied one at a time are at all representative of small, uncompressed, random files accessed concurrently.
  • Breit - Saturday, October 9, 2010 - link

    This seems a bit unfair to SF. Since their controllers (or let's say SSDs with their controllers) can achieve a fairly high IOPS count, you should at least bench the aggregate bandwidth they achieve with multiple file transfers at once...

    Whether this is a realistic workload depends entirely on your needs, of course, but you should also choose hard drives, and especially SSDs, based on your application and what delivers the best performance for you. Maybe SF SSDs aren't the best SSDs for your average workload if speedy large single-file data transfer is your main goal. :)
  • 'nar - Sunday, October 10, 2010 - link

    Anand has covered this already. Compression reduces write amplification, thus improves performance in most workloads, and extends Flash life by writing to NAND less.

    "SandForce’s controller gets around the inherent problems with writing to NAND by simply writing less" - from this article.

    Then here is the test with truly random data:

    No drive is perfect. Most large files, such as the 6.8 GB files you linked, are already compressed. Highly compressed files like movies do not benefit from SF compression, but they also don't need to. How fast do you watch a movie? All of my movies are on hard drives.

    This is not Kool-Aid, this is a choice. Use what is most appropriate for your workloads. Don't trash-talk the drive or mislead others due to one type of synthetic benchmark, or one supposed "real world scenario" that really is not what most people would use them for anyway.

    Just accept that this drive delivers less performance with compressed, encrypted, or truly random files. I have, and I have moved on. I purchased three SF drives while fully aware of that fact: two OCZ LEs and a G.Skill Phoenix Pro. I don't store compressed data on them anyway, just Windows and applications, all compressible. Well, mostly compressible.
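Write amplification, in the sense used in the thread above, reduces to a simple ratio: bytes physically written to NAND divided by bytes the host asked to write. A toy calculation, with illustrative numbers only (not measured figures for any real controller):

```python
def write_amplification(host_bytes, nand_bytes):
    """Write amplification = bytes physically written to NAND
    divided by bytes the host requested to write."""
    return nand_bytes / host_bytes

# Conventional controller: garbage-collection overhead makes NAND
# writes exceed host writes.
print(write_amplification(100, 110))  # 1.1

# Compressing controller: if typical data shrinks roughly 2x before
# hitting NAND, amplification can fall below 1.0 even with GC overhead.
print(write_amplification(100, 55))   # 0.55
```

A ratio below 1.0 is what lets a compressing controller both write faster and extend flash endurance on compressible data, as the comment above notes.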
  • vol7ron - Thursday, October 7, 2010 - link

    I imagine the added compression generates more heat for these components.

    Do you think that it will deteriorate the drive quicker?

    I'm not up to speed on the cooling inside an SSD, but I'm curious what happens to performance when a few cells in the proc begin to go.
