AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

Our initial runs of the Heavy test on the Samsung 970 EVO produced results similar to the Samsung PM981, with the 1TB model showing worse performance on an empty drive than a full drive. This seems to be related to the secure erase process used to wipe the drive before the test. Like many drives, the 970 EVO seems to lie about when it has actually finished cleaning up. Adding an extra 10 minutes of idle time before launching the Heavy test produced the results seen here, and in the future all drives will be tested with longer pauses after erasing (all other drives were given at least two minutes of idle time after each erase).
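For illustration, a minimal sketch of that erase-then-idle sequence is shown below, assuming a Linux host with nvme-cli installed; the device path, idle duration, and test launcher script are hypothetical placeholders, not AnandTech's actual test harness.

    # Sketch: secure-erase the drive, then allow extra idle time before the test.
    # Assumes nvme-cli is available; /dev/nvme0n1 and run_atsb_heavy.sh are placeholders.
    import subprocess
    import time

    DEVICE = "/dev/nvme0n1"        # hypothetical drive under test
    IDLE_AFTER_ERASE = 10 * 60     # extra 10 minutes of idle time, in seconds

    # NVMe user-data erase; the drive may keep cleaning up internally
    # after this command returns.
    subprocess.run(["nvme", "format", DEVICE, "--ses=1"], check=True)

    # Wait out any background garbage collection before starting the workload.
    time.sleep(IDLE_AFTER_ERASE)

    # Launch the trace playback (placeholder command).
    subprocess.run(["./run_atsb_heavy.sh", DEVICE], check=True)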

With the odd behavior eliminated, the Samsung 970 EVO comes close to setting a new record on the Heavy test. The empty drive performance of the 1TB model is up in Optane territory, though the full drive average data rate is not much higher than other TLC-based drives. The 500GB model is far slower, and its full-drive performance doesn't even keep pace with the Intel SSD 760p.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile latency scores from the Samsung 970 EVO are in line with its closest competitors, with the exception of the particularly good empty-drive score from the 1TB model.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average write latency of the 970 EVO is fairly typical for a high-end NVMe SSD, but the average read latency of the 1TB 970 EVO in the best case is surprisingly quick. Both capacities show a larger than normal gap between empty and full drive performance, even after accounting for the fact that they are using TLC to compete against the best MLC drives.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency scores from both tested capacities of the 970 EVO show a big difference between full drive and empty drive performance. The 500GB drive's read QoS doesn't seem up to par, but the 1TB model's scores would look pretty good if the WD Black hadn't recently shown up with an MLC-like minimal performance loss when full. The 99th percentile write latency scores of the 970 EVO are good but not substantially better than the competition, and the 500GB model is clearly worse at keeping latency under control than the 1TB model or MLC drives of similar capacity.

ATSB - Heavy (Power)

The 500GB 970 EVO continues the trend of relatively poor power efficiency from the Samsung Phoenix controller, but the 1TB model in its best case (running the test on an empty drive) is fast enough that its overall energy usage is comparable to that of good SATA drives.

Comments

  • cfenton - Tuesday, April 24, 2018 - link

    I've been meaning to ask about this for a while, but why do you order the performance charts based on the 'empty' results? In most of my systems, the SSDs are ~70% full most of the time. Does performance only degrade significantly if they are 100% full? If not, it seems to me that the 'full' results would be more representative of the performance most users will see.
  • Billy Tallis - Tuesday, April 24, 2018 - link

    At 70% full you're generally going to get performance closer to fresh out of the box than to 100% full. Performance drops steeply as the last bits of space are used up. At 70% full, you probably still have the full dynamic SLC cache size usable, and there's plenty of room for garbage collection and wear leveling.

    When it comes to manual overprovisioning to prevent full-drive performance degradation, I don't think I've ever seen someone recommend reserving more than 25% of the drive's usable space unless you're trying to abuse a consumer drive with a very heavy enterprise workload.
  • cfenton - Tuesday, April 24, 2018 - link

    Thanks for the reply. That's really helpful to know. I didn't even think about the dynamic SLC cache.
  • imaheadcase - Tuesday, April 24, 2018 - link

    So I'm wondering: I've got a small 8TB server I use for media/backup. While I know I'm limited by network bandwidth, would replacing the drives with SSDs make any impact at all?
  • Billy Tallis - Tuesday, April 24, 2018 - link

    It would be quieter and use less power. For media archiving over GbE, the sequential performance of mechanical drives is adequate. Incremental backups may make more random accesses, and retrieving a subset of data from your backup archive can definitely benefit from solid state performance, but it's probably not something you do often enough for it to matter.

    Even with the large pile of SSDs I have on hand, my personal machines still back up to a home server with mechanical drives in RAID.
  • gigahertz20 - Tuesday, April 24, 2018 - link

    @Billy Tallis Just out of curiosity, what backup software are you using?
  • enzotiger - Tuesday, April 24, 2018 - link

    With the exception of sequential write, there are some significant gaps between your numbers and Samsung's specs. Any clue?
  • anactoraaron - Tuesday, April 24, 2018 - link

    Honest question here. Which of these tests do more than just test the SLC cache? That's a big thing to test, as some of these other drives are MLC and won't slow down when used beyond any SLC caching.
  • RamGuy239 - Tuesday, April 24, 2018 - link

    So these are sold and marketed with IEEE 1667 / Microsoft eDrive support from the get-go, unlike the Samsung 960 EVO and Pro, which had this promised only to get it at the end of their life cycles (in the latest firmware update).

    That's good and all, but does it really work? The current implementation on the Samsung 960 EVO and Pro has a major issue: it doesn't work when the disk is used as a boot drive. Samsung keeps claiming this is due to an NVMe module bug in most UEFI firmware and that it will require motherboard manufacturers to provide a UEFI firmware update including a fix.

    Whether this is indeed true or not is hard for me to say, but that's what Samsung themselves claim over at their own support forums.

    All I know is that I can't get either my Samsung 960 EVO 1TB or my Samsung 960 Pro 1TB to use hardware encryption with BitLocker on Windows 10 when used as a boot drive on either my Asus Maximus IX Apex or my Asus Maximus X Apex, both running the latest BIOS/UEFI firmware update.

    When used as a secondary drive, hardware encryption works as intended.

    With this whole mess around BitLocker/IEEE 1667/Microsoft eDrive on the Samsung 960 EVO and Pro, how does it all fare with these new drives? Is it indeed an issue with NVMe support in most UEFI firmware, requiring fixed UEFI releases from motherboard manufacturers, or do the 970 EVO and Pro work with BitLocker as a boot drive without new UEFI firmware releases?
  • Palorim12 - Tuesday, April 24, 2018 - link

    Seems to be an issue with BIOS/UEFI firmware vendors like American Megatrends, Phoenix, etc., and Samsung has stated they are working with them to resolve the issue.
