Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. From that we can estimate the size of any SLC write cache and get a sense of how much performance remains on the rare occasions when real-world usage keeps writing data after filling the cache.

The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.
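The cache-size estimate described above can be automated from the per-segment throughput log. A minimal sketch, using a hypothetical log and an arbitrary drop threshold (the baseline window and ratio are illustrative choices, not values used in the actual test):

```python
def estimate_slc_cache_gb(segment_mbps, drop_ratio=0.6):
    """Estimate SLC cache size from per-1GB-segment write speeds.

    Uses the first few segments as the in-cache baseline, then reports
    how many 1GB segments were written before throughput first fell
    below `drop_ratio` of that baseline.
    """
    n = min(8, len(segment_mbps))
    baseline = sum(segment_mbps[:n]) / n
    for i, mbps in enumerate(segment_mbps):
        if mbps < baseline * drop_ratio:
            return i  # segments written == GB before the transition
    return len(segment_mbps)  # no transition observed

# Hypothetical throughput log: ~2TB of fast SLC-cached writes,
# then a slower post-cache direct-to-QLC phase.
log = [1800.0] * 2000 + [500.0] * 500
print(estimate_slc_cache_gb(log))  # → 2000
```

On a real drive like the Rocket Q the transition is less of a clean step function, which is why a ratio threshold against an early baseline is more robust than looking for a single fixed speed.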

[Chart: Sustained 128kB Sequential Write (Power Efficiency) — average throughput for the last 16 GB and overall average throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
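The aggregate figure above is simple multiplication. A quick sketch of the arithmetic; note that the Rocket Q's die count is an assumption here (8TB of 1024Gbit dies works out to 64 dies), not a spec-sheet number:

```python
# Theoretical aggregate program throughput from the per-die figures
# quoted in the text.
def aggregate_write_mbps(per_die_mbps, dies):
    return per_die_mbps * dies

# Samsung 870 QVO 8TB: 64 dies of 92L QLC at 18 MB/s each
print(aggregate_write_mbps(18, 64))  # → 1152 MB/s, i.e. over 1 GB/s
# Sabrent Rocket Q 8TB: 30 MB/s per die (assumed 64 dies)
print(aggregate_write_mbps(30, 64))  # → 1920 MB/s
```

Neither drive comes close to these numbers after the SLC cache runs out, which is what points to the controller rather than the NAND as the bottleneck.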

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.

We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
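The expected behavior can be illustrated with a toy model of an FTL mapping cache. The cache size, read counts, and latencies below are illustrative assumptions, not the drives' actual parameters; a miss pays an extra flash read to fetch the mapping, which in the worst case doubles latency:

```python
import random
from collections import OrderedDict

def avg_read_latency_us(working_set_pages, cache_pages, reads=20000,
                        hit_us=80.0, miss_us=160.0, seed=1):
    """Average random-read latency with an LRU cache of LBA->PBA mappings.

    A cache miss requires reading the mapping from flash before the
    user data can be read, modeled here as double the hit latency.
    """
    rng = random.Random(seed)
    cache = OrderedDict()
    total = 0.0
    for _ in range(reads):
        lba = rng.randrange(working_set_pages)
        if lba in cache:
            cache.move_to_end(lba)  # refresh LRU position
            total += hit_us
        else:
            if len(cache) >= cache_pages:
                cache.popitem(last=False)  # evict least-recently-used
            cache[lba] = True
            total += miss_us
    return total / reads

# Small working set: mappings fit in the cache, latency stays near hit_us.
print(avg_read_latency_us(working_set_pages=1_000, cache_pages=4_096))
# Working set far larger than the cache: mostly misses, near miss_us.
print(avg_read_latency_us(working_set_pages=1_000_000, cache_pages=4_096))
```

The model reproduces the qualitative shape of the test: flat performance while the working set fits, then a drop toward the worst case once random reads span far more mappings than the cache can hold.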

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.


149 Comments


  • shelbystripes - Thursday, December 10, 2020 - link

    Dude, you don't seem to understand how "consumerist capitalism" DOES work. QLC will still be more than good enough for most consumers, or at least, that's what manufacturers are banking on. They still need to sell the hardware, and they're competing in a world where MLC and TLC SSDs still widely exist.

    The only way to get there will be lower cost... and there will be plenty of consumers who respond to high-capacity QLC SSDs at lower costs than "scale" alone can achieve for MLC or TLC drives, and who won't care about the drop in MTBF because QLC SSDs still have more total writes than they'll ever need. QLC SSDs aren't going to be for everyone, but if TLC (even 3D TLC) is such cheap technology that "scale" is all you need to hit 8TB SSDs with it, why isn't anyone making sub-$1K 8TB 3D TLC drives and competing with these? Shouldn't they be?

    You just don't know what you're talking about, yet you have the arrogance of someone prepared to speak for everybody uniformly.
  • boozed - Saturday, December 5, 2020 - link

The Sabrent appears to perform quite well in real world tests, regardless of its synthetic/theoretical performance. Is this a bad thing?
  • Hixbot - Saturday, December 5, 2020 - link

MLC/TLC is still available at extra cost. Meanwhile QLC is pushing HDDs out of the market.
  • Oxford Guy - Sunday, December 6, 2020 - link

    "MLC/TLC is still available at extra cost."

    Economy of scale. QLC is an attack on TLC and MLC.
  • Oxford Guy - Sunday, December 6, 2020 - link

    Also the article says:

    "QLC NAND offers just a 33% increase in theoretical storage density, but in practice most QLC NAND is manufactured as 1024Gbit dies while TLC NAND is manufactured as 256Gbit and 512Gbit dies."

    Which means manufacturers are trying to kneecap TLC to push QLC.
  • Spunjji - Monday, December 7, 2020 - link

    Or it means that manufacturing TLC at those capacities per die would result in a bloated die size with decreased yields, increased costs, and too-few dies per drive to reach competitive speeds at the most common capacities.

    The problem with having a conclusion and looking for evidence to support it is that you can come up with all sorts of silly reasons for things that are perfectly explicable by other means.
  • Oxford Guy - Thursday, December 10, 2020 - link

Speculative
  • shelbystripes - Thursday, December 10, 2020 - link

It's ironic that you respond to someone calling out your unsubstantiated speculation as "speculative". If you're opposed to speculation, you should retract your statements assuming that manufacturers are out to "kneecap" MLC/TLC like they have some secret agenda against higher-reliability parts...
  • Spunjji - Monday, December 7, 2020 - link

Do you have any evidence that would support that claim? Say, TLC costs rising even as QLC rolls out, in a way that doesn't reflect the usual industry supply/demand fluctuations?
  • Oxford Guy - Thursday, December 10, 2020 - link

Yes. The die sizes offered with TLC are 50% smaller at best. That magnifies the 30% density increase of QLC automatically. Maybe this reply will stick. Here’s to hoping.
