Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
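The reduction from per-1GB-segment speeds to a cache-size estimate can be sketched roughly as follows. This is an illustrative assumption, not the tooling actually used for the test: the function name, the drop threshold, and the synthetic trace are all made up for the example.

```python
def estimate_slc_cache_gb(segment_mbps, drop_ratio=0.5):
    """Estimate SLC cache size from per-1GB-segment write speeds.

    The cache boundary is taken as the first segment whose speed falls
    below `drop_ratio` times the average of the first few segments.
    Returns the estimated cache size in GB, or None if no drop is seen.
    """
    baseline = sum(segment_mbps[:8]) / min(8, len(segment_mbps))
    for i, speed in enumerate(segment_mbps):
        if speed < drop_ratio * baseline:
            return i  # i segments of 1GB were written at cache speed
    return None

# Synthetic trace loosely resembling the Rocket Q: ~2000 fast segments,
# then a steady post-cache speed for the rest of the drive.
trace = [2800.0] * 2000 + [600.0] * 6000
print(estimate_slc_cache_gb(trace))  # → 2000, i.e. a ~2TB SLC cache
```

In practice the transition is rarely this clean (caches shrink as the drive fills, and some drives fold data back in the background), so a real analysis would smooth the trace first.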

The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.

[Charts: Sustained 128kB Sequential Write (Power Efficiency) — average throughput for the last 16GB, and overall average throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
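That aggregate figure is simple arithmetic on the numbers quoted above:

```python
# Per-die program throughput and die count quoted in the text.
qvo_die_mbps = 18       # Samsung 92L QLC, per-die program throughput
qvo_dies = 64           # dies in the 8TB 870 QVO
rocket_die_mbps = 30    # Rocket Q's QLC, per-die (die count not quoted)

aggregate_mbps = qvo_die_mbps * qvo_dies
print(aggregate_mbps)   # → 1152, i.e. just over 1 GB/s in theory
```

The 870 QVO's measured post-cache speed lands far below that theoretical ceiling, which is what points to the controller rather than the NAND as the limit.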

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
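A common flash translation layer design keeps one 4-byte physical address per 4kB logical page, which is where the familiar ratio of roughly 1GB of DRAM per 1TB of flash comes from. A minimal sketch under that assumption (the function and constants are illustrative, not any specific controller's layout):

```python
def mapping_table_bytes(capacity_bytes, page_size=4096, entry_size=4):
    """DRAM needed to hold a flat logical-to-physical page map.

    Assumes one fixed-size entry per logical page -- the common design
    behind the ~1GB-of-DRAM-per-1TB-of-flash rule of thumb.
    """
    return capacity_bytes // page_size * entry_size

TB = 10**12
print(mapping_table_bytes(8 * TB) / 10**9)  # ~7.8 GB for an 8TB drive
```

By this rule of thumb an 8TB drive needs on the order of 8GB of DRAM to cache its whole map, which matches the 870 QVO's configuration discussed below.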

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.

We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
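A rough sketch of the measurement loop, assuming a `read_block(offset, length)` stand-in for however the real tool issues reads against the raw device (e.g. `os.pread` on the block device):

```python
import random
import time

def random_read_iops(read_block, span_bytes, duration=2.0, block=4096):
    """Measure random-read IOPS within the first `span_bytes` of a drive.

    `read_block(offset, length)` is a caller-supplied function that
    performs one read; offsets are aligned to `block` bytes.
    """
    blocks = span_bytes // block
    ops = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        read_block(random.randrange(blocks) * block, block)
        ops += 1
    return ops / duration
```

Sweeping `span_bytes` from a few gigabytes up to the full drive capacity and plotting the resulting IOPS is, in essence, what this test does: a flat curve suggests the mapping cache covers the whole working set, while a drop marks the point where cache misses start dominating.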

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.


149 Comments


  • Slash3 - Sunday, December 6, 2020 - link

    I have two 2TB Crucial MX500s for general storage and they're only ten bucks cheaper than what I paid, over two years ago.
  • MDD1963 - Monday, December 7, 2020 - link

    Hmmm...wonder what the "Q" in QVO stands for? :)
  • Scour - Monday, December 7, 2020 - link

    After my experiences with some QLC SSDs from Samsung and Crucial, I have to say: stay away from QLC if you want performance.

    Maybe it's OK for people who install Windows and store some music or photos on it, but if you want to write larger amounts of data you will be faster with HDDs.

    It's a shame that some people recommend a QVO because it has a Samsung controller and DRAM. I don't agree with them, because some cheap TLC SSDs are much faster.
  • Oxford Guy - Monday, December 7, 2020 - link

    Samsung is often overrated anyway. Their planar TLC drives were so poorly made that they have to periodically rewrite the data that's on the drive to maintain decent performance.

    I also remember the company's completely bogus power consumption claims, claims that were taken as truth by consumers who would recommend the drives based on the deception.
  • Scour - Tuesday, December 8, 2020 - link

    My 840 (first version) never was good; it was slower than some of my cheapest SSDs in daily use. I use it now for video recording on a set-top box. It's fast enough for the writing speed, and it gets erased every 2-3 weeks.

    But the 850 and 860 Evo work well and fast.

    The QVO series maybe beats other QLC products like the DRAM-less BX500 (I have so far never seen benchmarks of the new SanDisk Plus with QLC), but it is too expensive in capacities less than 8TB.
  • WaltC - Monday, December 7, 2020 - link

    This has to be the first NVMe M.2-interface vs. SATA3-interface SSD comparison that ignores the differences in bus connections as if they don't exist--or as if they don't matter. Scratching my head over this one. Max optimal bandwidth for SATA3 SSDs is generally less than 550MB/s. Max optimal bandwidth for an M.2 NVMe PCIe 3.0 x4 drive like the Sabrent here is 3.5-5.x GB/s. And for PCIe 4.0 x4 NVMe drives like the 980 Pro from Samsung, the max optimal bandwidth is as much as 7+ GB/s. Comparing the internal drive controllers and the onboard RAM between SSDs is fine and should be done--but *never* at the expense of treating the drives' interfaces to the system as if they just don't matter, imo...;) If people are merely looking at capacities and prices without regard to performance, this might be a helpful review. But when is that ever really the case? With SATA3 SSDs, it doesn't really matter about the internals; the performance is capped at < 550MB/s, the bottleneck being the drive's system interface.
  • peevee - Wednesday, December 9, 2020 - link

    2TB of SLC is equal to 8TB of QLC. I doubt the SLC flash is separate from QLC; they probably use QLC in SLC mode until 2TB fill up, and then start compressing the data into QLC. So the switch might happen without constant sequential write too.
  • ballsystemlord - Wednesday, December 9, 2020 - link

    @Billy, under "Random Write Performance" (burst and sustained), you'll notice that you wrote the same comment twice by mistake.
  • zhpenn - Monday, February 8, 2021 - link

    About the 8TB version's power consumption: I notice the spec is 5.5W, compared to the 860 EVO (4W). Can I put an 870 QVO 8TB into a USB 3.0 SATA enclosure and use it without stability issues? Or might it eject unexpectedly or run at slow speed due to the high power consumption?
