PCMark 10 Storage Benchmarks

At the end of 2019, UL released a major update to their popular PCMark 10 benchmark suite, adding storage performance tests that had been conspicuously absent for over two years. These new storage benchmarks are similar to our AnandTech Storage Bench (ATSB) tests in that they are composed of traces of real-world IO patterns that are replayed onto the drive under test. We're incorporating these into our new SSD test suite, and including our first batch of results here.

PCMark 10 provides four different storage benchmarks. The Full System Drive, Quick System Drive and Data Drive benchmarks cover similar territory to our ATSB Heavy and Light tests, and all three together take about as long to run as the ATSB Heavy and Light tests combined. The Drive Performance Consistency Test is clearly meant to one-up The Destroyer and also measure the worst-case performance of a drive that is completely full. Due to time constraints, we are not yet attempting to add the Drive Performance Consistency Test to our usual test suite.

PCMark 10 Storage Tests
Test Name                       Data Written
Data Drive                      15 GB
Quick System Drive              23 GB
Full System Drive               204 GB
Drive Performance Consistency   23 TB + 3x drive capacity

The primary subscores for the PCMark 10 Storage benchmarks are average bandwidth and average latency for read and write IOs. These are combined into an overall score by computing the geometric mean of the bandwidth score and the reciprocal of the latency score. PCMark 10 also records more detailed statistics, but we'll dig into those in a later review.
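As a rough illustration of that scoring arithmetic, here is a minimal sketch in Python. The function shape and the example numbers are our own simplification for illustration, not UL's published formula or coefficients:

    import math

    def storage_score(avg_bandwidth_mbps: float, avg_latency_us: float) -> float:
        # Geometric mean of a bandwidth term and the reciprocal of a
        # latency term: higher bandwidth and lower latency both raise
        # the score, and neither metric can dominate the other.
        bandwidth_score = avg_bandwidth_mbps       # higher is better
        latency_score = 1.0 / avg_latency_us       # reciprocal: lower latency is better
        return math.sqrt(bandwidth_score * latency_score)

    # A drive averaging 400 MB/s at 60 us scores higher than one
    # averaging 500 MB/s at 120 us, because latency counts equally.
    print(storage_score(400.0, 60.0))    # ~2.58
    print(storage_score(500.0, 120.0))   # ~2.04

These PCMark 10 Storage test runs were conducted on our Coffee Lake testbed: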

AnandTech Coffee Lake SSD Testbed
CPU             Intel Core i7-8700K
Motherboard     Gigabyte Aorus H370 Gaming 3 WiFi
Chipset         Intel H370
Memory          2x 8GB Kingston DDR4-2666
Case            In Win C583
Power Supply    Cooler Master G550M
OS              Windows 10 64-bit, version 2004


Data Drive Benchmark

The Data Drive Benchmark is intended to represent the usage a secondary or portable drive may be subject to. This test simulates copying files around, but does not simulate the IO associated with launching and running applications from the drive.

[Chart: PCMark 10 Storage - Data (Overall Score, Average Bandwidth, Average Latency)]

Starting off, the 8TB Sabrent Rocket Q leads the field thanks to its massive and fast SLC cache; it clearly outperforms even the decently high-end 2TB TLC-based HP EX920. The several capacities of the Samsung 870 QVO all perform about the same: less than half the speed of the faster NVMe drives, and slower than even the slowest entry-level NVMe drives. The enterprise SATA drive with no SLC caching comes in last place.

Quick System Drive Benchmark

The Quick System Drive Benchmark is a subset of the Full System Drive Benchmark, running only 6 out of the 23 sub-tests from the Full test.

[Chart: PCMark 10 Storage - Quick (Overall Score, Average Bandwidth, Average Latency)]

Moving on to the Quick test, the Sabrent Rocket Q no longer stands out ahead of the other NVMe drives, but still offers decent performance. The performance gap between the NVMe drives and the Samsung 870 QVO drives has narrowed slightly, but is still almost a factor of two.

Full System Drive Benchmark

The Full System Drive Benchmark covers a broad range of everyday tasks: booting Windows and starting applications and games, using Office and Adobe applications, and file management. The "Full" in the name does not mean that each drive is filled or that the entire capacity of the drive is tested. Rather, it only indicates that all of the PCMark 10 Storage sub-tests are included in this test.

[Chart: PCMark 10 Storage - Full (Overall Score, Average Bandwidth, Average Latency)]

The Full test starts to bring the downsides of QLC NAND into focus. The Sabrent Rocket Q is now the slowest of the NVMe drives, only moderately faster than the 8TB Samsung 870 QVO. The 1TB 870 QVO is also falling behind the larger and faster models. However, the QLC-based Intel 660p manages to hold on to decent performance, possibly a result of the class-leading SLC cache performance we usually see from Silicon Motion NVMe controllers paired with Intel/Micron flash.

Comments

  • heffeque - Friday, December 4, 2020 - link

    "Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?"
    Well, on a traditional HDD you also have to hope that they put helium in it and not mustard gas by mistake. It "can" happen, but how would we know unless somebody opened up every single HDD?

    On a serious note, if a drive has such a serious firmware bug, rest assured that someone will notice, that it will go public quite fast, and that it will end up getting fixed (as has happened in the past).
  • Spunjji - Monday, December 7, 2020 - link

    Thanks for responding to that "how do you know unless you look" post appropriately. That kind of woolly thinking really gets my goat.
  • joesiv - Monday, December 7, 2020 - link

    Well, I for one would rather not be the one who discovers the bug and loses my data.

    I didn't experience this one, but it's an example of a firmware bug:
    https://www.engadget.com/2020-03-25-hpe-ssd-bricke...

    Where I work, I'm involved in SSD evaluation. A drive we used in the field had a nasty firmware bug that took out dozens of our SSDs after a couple of years of operation (well within their specs). The manufacturer fixed it in a firmware update, but not until a year-plus after release, so we had already shipped hundreds of products.

    Knowing that, I evaluate them now. But for my personal use, where my needs are different, I'd love it if at least a very simple check were done in reviews. It's not that hard: review the SSD, then check whether the writes to NAND are reasonable given the workload you gave it. It's right there in the SMART data; it'll be reported in block units, so you might have to multiply by the block size, but it'll tell you a lot (a rough sketch of this check is at the end of this comment).

    Just by doing something similar, we were able to vet a drive that was writing 100x more to NAND than it should have been; essentially, it was using up its life expectancy at 1% per day! Working with the manufacturer, they eventually decided we should just move to another product; they weren't much into firmware fixes.

    Anyways, someone should keep the manufacturers honest, so why not start with the reviews?

    Also, no offence, but what is the "woolly thinking" you are talking about? I'm just trying to protect my investment and data.
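    A minimal sketch of that check in Python, assuming smartctl is installed and that attribute 241 reports host writes while attribute 233 reports NAND writes. These IDs and their units vary by vendor, so confirm them against the drive's SMART documentation before trusting the result:

        import subprocess

        # Vendor-specific assumptions -- check your drive's SMART docs:
        HOST_WRITES_ATTR = 241   # "Total_LBAs_Written" on many drives
        NAND_WRITES_ATTR = 233   # NAND writes on some drives (e.g. Kingston)

        def raw_smart_value(device: str, attr_id: int) -> int:
            # Parse the raw-value column of `smartctl -A` output for one
            # attribute. smartctl's exit status is a bitmask, so we don't
            # treat a nonzero return code as a failure here.
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                fields = line.split()
                if fields and fields[0] == str(attr_id):
                    return int(fields[-1])   # assumes a plain integer raw value
            raise KeyError(f"attribute {attr_id} not reported by {device}")

        def write_amplification(device: str) -> float:
            # Both attributes must be in the same units (LBAs or fixed-size
            # blocks); scale one of them first if your drive reports them
            # differently.
            host = raw_smart_value(device, HOST_WRITES_ATTR)
            nand = raw_smart_value(device, NAND_WRITES_ATTR)
            return nand / host   # low single digits is normal; 100x is alarming

        print(write_amplification("/dev/sda"))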
  • heffeque - Tuesday, December 8, 2020 - link

    As if HDDs didn't have their share of problems, both firmware and hardware (especially the hardware). I've seen loads of HDDs die in the first 48 hours, and then a huge percentage of them within a year afterwards.

    My experience is that SSDs last A LOT longer and are A LOT more reliable than HDDs.
    While HDDs had been breaking every 1-3 years (and changing them was costly due to the remote locations and the high wages in Scandinavian countries), since we changed to SSDs we have had literally ZERO replacements to perform. So the experience of hundreds of SSDs not failing vs hundreds of HDDs that barely last a few years doesn't go in favor of HDDs by any measure.

    In the end, paying to send a slightly more expensive device (the SSD) to those countries has paid for itself several-fold in just a couple of years.
  • MDD1963 - Friday, December 4, 2020 - link

    I've only averaged 0.8 TB per *month* over 3.5 years...
  • joesiv - Monday, December 7, 2020 - link

    Out of curiosity, how did you come to that number?

    Just be aware that SMART data tracks different things. You're probably right, but SMART data is manufacturer- and model-dependent, and sometimes they use the attributes differently. You really have to look up the SMART documentation for your drive to be sure the attributes are calculated and used the way your SMART utility labels them. Some manufacturers also don't track writes to NAND.

    I would look at:
    "writes to NAND" or "lifetime writes to flash", which for some Kingston drives is attribute 233
    "SSD life left", which for some ADATA drives is attribute 232, and for Micron/Crucial might be 202; this is usually calculated from the average block erase count against the rated block erase count of the NAND (3000-ish for MLC, much less for 3D NAND)

    A lot of manufacturers haven't included actual NAND writes in their SMART data at all, so it'd be hard to get at, and they should be called out for it (Delkin, Crucial).

    "Total host writes" is what the OS wrote, and what most viewers assume manufacturers are stating when they talk about drive writes per day or TB per day. That's the amount of data fed to the SSD, not what is actually written to NAND.

    Also realize that wear-leveling routines can eat up SSD life as well. I'm not sure how the SLC cache mode in newer firmware affects life expectancy/NAND writes, actually. (A rough sketch of the erase-count arithmetic is below.)
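    A similarly rough sketch of that erase-count arithmetic, with the rated cycle count as a stated assumption rather than a datasheet value:

        def percent_life_used(avg_block_erases: int,
                              rated_erase_cycles: int = 3000) -> float:
            # Wear estimate from the average block erase count attribute;
            # ~3000 cycles is a common MLC rating, and QLC is rated far lower,
            # so substitute the figure from your NAND's datasheet.
            return 100.0 * avg_block_erases / rated_erase_cycles

        # A drive burning 1% of its life per day, like the one above,
        # averages ~30 extra erase cycles per block per day against a
        # 3000-cycle budget:
        print(percent_life_used(30))   # 1.0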
  • stanleyipkiss - Friday, December 4, 2020 - link

    Honestly, if the prices of these QLC high-capacity drives dropped a bit, I would be all over them, especially for NAS use. I just want to move away from spinning mechanical drives, but when I can get an 18 TB drive for the same price as a 4-8 TB SSD, I will choose the larger drive.

    Just make them cheaper.

    Also: I would love HIGHER capacity, and I WOULD pay for it... Micron had some drives, and I'm sure some mainstream drives could be made available. If you can squeeze 8TB onto M.2 then you could certainly put 16TB in a 2.5-inch drive.
  • DigitalFreak - Monday, December 7, 2020 - link

    Ask and ye shall receive.

    https://www.pcgamer.com/sabrent-is-close-to-launch...
  • Xex360 - Friday, December 4, 2020 - link

    The prices don't make any sense: you can get multiple smaller drives with the same total capacity for less money, with more performance and reliability, and those should cost more because they use more material.
  • inighthawki - Friday, December 4, 2020 - link

    At least for the Sabrent drive, M.2 slots can be at a premium, so it makes perfect sense for a single drive to cost more than two smaller ones. On many systems, hooking up that many drives would require a PCIe expansion card, and if you're not just bifurcating an existing x8 or x16 slot, you would need a PCIe switch, which is going to cost hundreds of dollars at minimum.
