Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
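As a rough illustration of the methodology, the sketch below fills a device sequentially in 128kB blocks and reports throughput for each 1GB segment. It is not AnandTech's actual test harness: the device path is a hypothetical example, and the real test drives queue depth 32 through an async I/O engine, while this simplified loop is effectively queue depth 1 with one flush per segment.

```python
import os
import sys
import time

BLOCK = 128 * 1024        # 128kB write size
SEGMENT = 1024 ** 3       # report throughput once per 1GB written
BLOCKS_PER_SEGMENT = SEGMENT // BLOCK

def fill(path, segments):
    """Sequentially write `segments` x 1GB and print per-segment throughput."""
    buf = os.urandom(BLOCK)                 # incompressible payload
    fd = os.open(path, os.O_WRONLY)
    try:
        for seg in range(segments):
            start = time.monotonic()
            for _ in range(BLOCKS_PER_SEGMENT):
                os.write(fd, buf)
            os.fdatasync(fd)                # force the segment out to media
            elapsed = time.monotonic() - start
            print(f"segment {seg}: {SEGMENT / elapsed / 1e6:.0f} MB/s")
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Destructive: point this only at a dedicated test drive, e.g.
    #   python fill.py /dev/nvme0n1 7000
    fill(sys.argv[1], int(sys.argv[2]))
```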

The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.

[Charts: Sustained 128kB Sequential Write (Power Efficiency); Average Throughput for last 16 GB; Overall Average Throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
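For a sanity check on those figures, the arithmetic is straightforward. The die count for the 8TB 870 QVO comes from the article; the Rocket Q die count below is an assumption for illustration only (an 8TB drive built from 1Tbit QLC dies).

```python
# Theoretical aggregate program throughput = per-die program speed x die count.
qvo = 64 * 18          # 64 dies x 18 MB/s -> 1152 MB/s, i.e. over 1GB/s
rocket_q = 64 * 30     # die count assumed, not stated in the article -> 1920 MB/s
print(f"870 QVO 8TB: {qvo} MB/s, Rocket Q 8TB: {rocket_q} MB/s (theoretical)")
```

Neither drive comes anywhere near these numbers once its SLC cache runs out, which is why the post-cache bottleneck is attributed to the controller rather than the NAND itself.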

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.

We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
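A toy model makes the expected shape of the results concrete. This is not the drives' actual FTL: the mapping-cache size and flash read latency below are illustrative assumptions, and the model simply charges one extra flash read whenever a mapping entry is not cached.

```python
import random

PAGE = 4096                         # 4kB random reads
CACHE_BYTES = 64 * 2**20            # assume a 64MB mapping cache...
ENTRY_BYTES = 4                     # ...at 4 bytes per 4kB logical page
CACHE_ENTRIES = CACHE_BYTES // ENTRY_BYTES
FLASH_READ_US = 90                  # assumed latency per flash read

def avg_read_latency_us(working_set_bytes, reads=100_000):
    pages = working_set_bytes // PAGE
    cached = min(pages, CACHE_ENTRIES)       # hottest mappings assumed resident
    total = 0
    for _ in range(reads):
        page = random.randrange(pages)
        lookups = 1 if page < cached else 2  # miss: fetch mapping, then data
        total += lookups * FLASH_READ_US
    return total / reads

for gib in (1, 16, 64, 256, 1024, 8192):
    print(f"{gib:>4} GiB working set: {avg_read_latency_us(gib * 2**30):.0f} us average")
```

With these assumptions, latency stays flat until the working set's mappings no longer fit in the cache, then climbs toward roughly double the fully-cached figure, which is exactly the pattern this test is designed to expose.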

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.

Comments

  • Kevin G - Friday, December 4, 2020 - link

    At 1 Gbit easily sure, but 2.5 Gbit is taking off in the consumer space and 10 Gbit has been here for a while, but at a price premium. There is also NIC bonding, which can increase throughput further if the NAS has multiple active users.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    A single Seagate IronWolf can push over 200MB/s read speeds. 2.5 Gbit will still bottleneck even the most basic RAID 5 arrays.
  • heffeque - Friday, December 4, 2020 - link

    I want a silent NAS.
    Also, SSDs last longer than HDDs.
    I'm hoping for a Synology DS620Slim but with AMD Zen inside (like the DS1621+), and I'll fill it up with 4TB QVO drives on SHR-1 with BTRFS.
  • david87600 - Friday, December 4, 2020 - link

    Re: SSD lasts longer than HDD:

    Not necessarily. Especially with high volumes of writes. We've had more problems with our SSDs dying than our HDDs. We have several servers but the main application runs on an HDD. We replace our servers every four years but the old servers go into use as backup servers or as client machines. Some of those have been running their HDDs for 15 years now. None of our SSDs have lasted more than 2 years under load.
  • heffeque - Saturday, December 5, 2020 - link

    The Synology DS620Slim is nowhere near an enterprise server. Trust me, the SSDs won't die from high volumes of writes from a home user.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    Completely different use case. Home users fall under more of the WORM style of usage; they are not writing large data sets constantly.

    I also have no clue what you are doing to your poor SSDs. We have our SQL databases serving thousands of users reading and writing daily on SSDs for 3 years now without a single failure. Of course we have enterprise SSDs instead of consumer, so that makes a huge difference.
  • Deicidium369 - Saturday, December 5, 2020 - link

    I have far more dead HDDs than dead SSDs. The 1st SSD I bought was an OCZ midrange, 120GB - that drive has been used continuously for several years - about a year ago, wiped it and checked it - only a few worn cells. On the other hand - I have had terrible luck with anything over 8TB mechanical - out of the close to 300 14TB Seagates - over 10% failure rate - about half of those died during the 48 hour burn in - and the rest soon after.

    The Intel Optane U.2 drives we used in the flash array have had no issues at all over the 3-year period - we had one that developed a power connector failure - but no issues with the actual media.

    As with most things tech, YMMV.
  • GeoffreyA - Sunday, December 6, 2020 - link

    Just a question. Between Seagate and WD, who would you say is worse when it comes to failures? Or are they about the same?
  • Deicidium369 - Sunday, December 6, 2020 - link

    I have not used WD in some time, so I can't comment. I tend to use Backblaze failure rates - https://www.backblaze.com/blog/backblaze-hard-driv...
  • GeoffreyA - Monday, December 7, 2020 - link

    Thanks
