Kingston may not be a name that rolls off the tongue when you're talking about datacenter hardware vendors, but the company has built a major presence in datacenters through their DRAM modules. Datacenter DRAM is a lucrative and high-volume market in its own right, and the company has unsurprisingly been attempting to pivot off of that success into other datacenter products, but with only limited success thus far. Their other product lines – in particular enterprise/datacenter SSDs – have been serviceable, but haven't been able to crack the market as a whole.

Still intent on slicing out a larger portion of the datacenter SSD market, Kingston has decided to raise their profile by introducing SSDs that are based around the needs of their existing DRAM customers. That means that the company's new DC500 family of SSDs is intended for second-tier cloud service providers and system integrators, rather than the top hyperscalers like Google, Microsoft, Amazon, etc. This also means that the new drives are SATA SSDs, because in this market segment – which relies more heavily on commodity components and platforms than Open Compute Project-style thorough customization – there is still significant demand for SATA SSDs.

Using NVMe SSDs adds to platform costs in the form of expensive PCIe switches and backplanes; the drives themselves are more expensive than SATA drives of the same capacity; and power efficiency is often better for SATA than for NVMe. PCIe SSDs make it possible to cram a lot of storage performance into a smaller number of drives and servers, but where the emphasis is on capacity and cost effectiveness, SATA still has a place.

The SATA interface itself is stuck at 6Gbps, but the technology that goes into SATA SSDs continues to evolve with new generations of NAND flash memory and new SSD controllers. Kingston's new DC500 family of enterprise SATA SSDs is our first look at Phison's new S12 SSD controller (specifically, the S12DC variant), the replacement for the S10 that has been on the market for over five years. (The S11 is Phison's current DRAMless SATA controller.) While consumer SATA SSD controllers have mostly dropped down to just four NAND channels, the S12DC retains eight channels, more for the sake of supporting high capacities than for improving performance. The S12DC officially supports up to 8TB, but Kingston isn't pushing things that far yet. The controller is fabbed on a 28nm process and brings major improvements to error correction, including Phison's third-generation LDPC engine.

The DC500 family uses Intel's 64-layer TLC NAND flash memory, a break from Kingston's usual preference for Toshiba NAND. 96/92-layer TLC has started to show up in the client/consumer SSD market, but it's still a bit early to be seeing it in this part of the enterprise storage market.

The DC500 family includes two tiers: the DC500R for read-heavy workloads (endurance rating of 0.5 DWPD) and the DC500M for more mixed read/write workloads (endurance rating of 1.3 DWPD). Kingston says the Intel NAND they are using is rated for about 5000 program/erase cycles, so with a warranty for a bit less than 1000 total drive writes on the DC500R they're clearly allowing for quite a bit of write amplification.
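The endurance arithmetic is easy to verify. A minimal sketch (the DWPD ratings, 5-year warranty, and ~5000 P/E cycle figure come from the article; everything else is plain arithmetic):

```python
def tbw(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Total terabytes written implied by a DWPD rating over the warranty period."""
    return capacity_tb * dwpd * warranty_years * 365

# 3.84 TB models:
print(round(tbw(3.84, 0.5)))  # DC500R: 3504 TB, matching the spec table
print(round(tbw(3.84, 1.3)))  # DC500M: 9110 TB

# Total drive writes permitted under warranty vs. rated P/E cycles:
drive_writes = 0.5 * 365 * 5        # 912.5 full drive writes for the DC500R
headroom = 5000 / drive_writes      # ~5.5x, leaving room for write amplification
```

The ~5.5x gap between rated P/E cycles and warrantied drive writes is the "quite a bit of write amplification" headroom mentioned above.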

NVMe SSDs have mostly killed off the market for very high endurance SATA drives, because applications that need to support several drive writes per day tend to need higher performance than SATA can provide (and as drive capacities increase, there's no longer enough time in a day to complete more than a few drive writes at ~0.5GB/s). Micron still offers a 5 DWPD SATA model (the 5200 MAX), but most other brands now top out around 3 DWPD for SATA drives. Those 3 DWPD and higher drives only account for about 20% of the market, so Kingston isn't missing out on too many sales by only going up to 1.3 DWPD with the DC500 family. The introduction of QLC NAND has pushed the entry level of this market down to around 0.1 DWPD, but Kingston doesn't have anything to offer at that level yet.
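The parenthetical above can be made concrete. A rough sketch, assuming ~0.5 GB/s of sustained sequential writes (a round number, not a vendor spec):

```python
def max_dwpd(capacity_gb: float, write_speed_gbps: float = 0.5) -> float:
    """Upper bound on drive writes per day imposed purely by interface speed."""
    seconds_per_drive_write = capacity_gb / write_speed_gbps
    return 86400 / seconds_per_drive_write

print(round(max_dwpd(3840), 1))   # 11.2: a 3.84 TB SATA drive can't exceed this
print(round(max_dwpd(15360), 1))  # 2.8: a hypothetical 15.36 TB drive, even writing 24/7
```

So at the multi-terabyte capacities where SATA drives now live, ratings much above a few DWPD would be physically unreachable anyway.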

Kingston DC500 Series Specifications

| Capacity                    | 480 GB                 | 960 GB            | 1920 GB           | 3840 GB           |
|-----------------------------|------------------------|-------------------|-------------------|-------------------|
| Form Factor                 | 2.5" 7mm SATA (all)    |                   |                   |                   |
| Controller                  | Phison PS3112-S12DC (all) |                |                   |                   |
| NAND Flash                  | Intel 64-layer 3D TLC (all) |              |                   |                   |
| DRAM                        | Micron DDR4-2666 (all) |                   |                   |                   |
| Sequential Read             | 555 MB/s (all)         |                   |                   |                   |
| Sequential Write (DC500R)   | 500 MB/s               | 525 MB/s          | 525 MB/s          | 520 MB/s          |
| Sequential Write (DC500M)   | 520 MB/s               | 520 MB/s          | 520 MB/s          | 520 MB/s          |
| Random Read                 | 98k IOPS (all)         |                   |                   |                   |
| Random Write (DC500R)       | 12k IOPS               | 20k IOPS          | 24k IOPS          | 28k IOPS          |
| Random Write (DC500M)       | 58k IOPS               | 70k IOPS          | 75k IOPS          | 75k IOPS          |
| Power (Read / Write / Idle) | 1.8 W / 4.86 W / 1.56 W (all) |            |                   |                   |
| Warranty                    | 5 years (all)          |                   |                   |                   |
| Write Endurance (DC500R)    | 438 TB (0.5 DWPD)      | 876 TB (0.5 DWPD) | 1752 TB (0.5 DWPD) | 3504 TB (0.5 DWPD) |
| Write Endurance (DC500M)    | 1139 TB (1.3 DWPD)     | 2278 TB (1.3 DWPD) | 4555 TB (1.3 DWPD) | 9110 TB (1.3 DWPD) |
| Retail Price, CDW (DC500R)  | $104.99 (22¢/GB)       | $192.99 (20¢/GB)  | $364.99 (19¢/GB)  | $733.99 (19¢/GB)  |
| Retail Price, CDW (DC500M)  | $125.99 (26¢/GB)       | $262.99 (27¢/GB)  | $406.99 (21¢/GB)  | $822.99 (21¢/GB)  |
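The cents-per-GB figures in the spec listing are easy to sanity-check (prices and capacities from the listing; this is just arithmetic, not vendor data):

```python
def cents_per_gb(price_usd: float, capacity_gb: int) -> int:
    """Price per gigabyte in whole US cents."""
    return round(price_usd * 100 / capacity_gb)

print(cents_per_gb(104.99, 480))   # 22: DC500R 480 GB
print(cents_per_gb(733.99, 3840))  # 19: DC500R 3.84 TB
print(cents_per_gb(125.99, 480))   # 26: DC500M 480 GB
```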

The DC500R and DC500M are available in the same set of usable capacities ranging from 480GB to 3840GB, but they differ in the amount of spare area included, which is what allows the -M to have higher write endurance and higher sustained write performance. For sequential IO, the -R and -M versions are rated to deliver essentially the same performance, bottlenecked by the SATA link. The same is true for random reads, but steady-state random write performance is limited by the flash itself and varies with drive capacity and spare area. The DC500M models all have higher random write performance than all of the DC500R models.

Power consumption is rated at a modest 1.8 W for reads and a fairly typical 4.86 W for writes. Low-power idle states are usually not included on enterprise drives, so the DC500s are rated to idle at 1.56 W.

Left: DC500R 3.84 TB, Right: DC500M 3.84 TB

The DC500R and DC500M both use the same plain metal case, but the PCBs inside have some minor layout changes due to the differences in overprovisioning. Our 3.84TB samples feature raw capacities of 4096GB for the DC500R and 5120GB for the DC500M, so the -R versions have comparable overprovisioning to consumer SSDs while the -M versions have about three times as much spare area. The extra flash on the DC500M also requires it to have more DRAM: 6GB instead of the 4GB found on the DC500R 3.84TB.
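One way to reproduce the "about three times as much spare area" figure, assuming (as is conventional) that the raw NAND capacities are binary gibibytes while the usable capacities are decimal gigabytes:

```python
GIB = 2**30  # assumption: raw NAND capacities are binary gibibytes

def op_percent(raw_gib: int, usable_gb: int) -> float:
    """Spare area as a percentage of usable capacity."""
    return round((raw_gib * GIB / (usable_gb * 10**9) - 1) * 100, 1)

print(op_percent(4096, 3840))  # 14.5: DC500R, in the ballpark of consumer drives
print(op_percent(5120, 3840))  # 43.2: DC500M, roughly 3x the -R's spare area
```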

Physically, the memory is laid out differently between the two drives. The 3.84TB DC500R has a total of 16 packages with 256GB each of NAND, and the 3.84TB DC500M uses 10 packages of 512GB each rather than mix packages of different capacities. In both cases this is Intel NAND packaged by Kingston. Since the -M has fewer NAND packages, it also gets away with fewer of the small TI multiplexer chips that sit next to the controller. The -M also has two fewer tantalum caps for power loss protection despite having more total NAND and DRAM.

The Competition

There are plenty of competing enterprise SATA SSDs based on 64-layer 3D TLC, but many of them have been on the market for quite a while; Kingston's a bit late to market for this generation. Samsung's SATA SSDs launched last fall are the only current-generation drives we have to compare against the Kingston DC500s, and all of our older enterprise SATA SSDs are far too outdated to be relevant.

The Samsung 883 DCT falls somewhere in between the DC500R and DC500M, with a write endurance of 0.8 DWPD (compared to 0.5 and 1.3 for the Kingston drives). The Samsung 860 DCT is a bit of an oddball since it lacks one of the defining features of enterprise SSDs: power loss protection capacitors. It also has quite a low endurance rating of just 0.2 DWPD, which is almost in QLC territory. Despite these handicaps, it still uses Samsung's excellent controller and firmware, and is tuned to offer much better performance and QoS on server workloads than can be expected from the client and consumer SSDs it superficially resembles.

To give a sense of scale, we've also included results for Samsung's entry-level datacenter NVMe drive, the 983 DCT, specifically the 960GB M.2 model. Some relevant SATA competitors that we have not tested include the Intel D3-S4510 and Micron 5200 ECO, both using the same 64L TLC as the Kingston drives but with different controllers.

Test System

Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.

Enterprise SSD Test System

| Component    | Details                                     |
|--------------|---------------------------------------------|
| System Model | Intel Server R2208WFTZS                     |
| CPU          | 2x Intel Xeon Gold 6154 (18C, 3.0GHz)       |
| Motherboard  | Intel S2600WFT                              |
| Chipset      | Intel C624                                  |
| Memory       | 192GB total, Micron DDR4-2666 16GB modules  |
| Software     | Linux kernel 4.19.8, fio version 3.12       |

Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.

The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, and some Silverstone FQ141 case fans have been installed to help exhaust hot air from the top of the cabinet.

The test system is running a Linux kernel from the most recent long-term support branch. This brings in about a year's work on Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic benchmarks, with most of the IO workloads generated using FIO. Server workloads are too widely varied for it to be practical to implement a comprehensive suite of application-level benchmarks, so we instead try to analyze performance on a broad variety of IO patterns.

Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of benchmark runs doesn't have much effect on the score, so long as the drive was thoroughly preconditioned. Except where otherwise specified, for our tests that include random writes the drives were prepared with at least two full drive writes of 4kB random writes. For all the other tests, the drives were prepared with at least two full sequential write passes.
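The random-write preconditioning described above could be expressed as a fio job along these lines (a hypothetical sketch: the device path and queue depth are placeholders, not our testbed's actual settings):

```ini
# Hypothetical preconditioning job: two full passes of 4kB random writes
# to the raw device before steady-state measurements begin.
[precondition]
filename=/dev/sdb    # placeholder device path
rw=randwrite
bs=4k
direct=1
ioengine=libaio
iodepth=32
size=100%
loops=2
```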

Our drive power measurements are conducted with a Quarch HD Programmable Power Module. This device supplies power to drives and logs both current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high resolution view into drive power consumption. For most of our automated benchmarks, we are only interested in averages over time spans on the order of at least a minute, so we configure the power module to average together its measurements and only provide about eight samples per second, but internally it is still measuring at 4µs intervals so it doesn't miss out on short-term power spikes.
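Relating the numbers in that paragraph (simple arithmetic on the quoted rates; the ~8 Hz report rate is approximate):

```python
internal_hz = 250_000  # the power module's internal sample rate
reported_hz = 8        # approximate rate of averaged samples delivered to the logger

print(1 / internal_hz)             # 4e-06: the 4 microsecond interval quoted above
print(internal_hz // reported_hz)  # 31250 internal samples behind each reported average
```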

Comments
  • Christopher003 - Sunday, November 24, 2019 - link

    I had an Agility 3 60gb, used for just over 2 years for my system, mom used now an additional over 2.5 years, however it was either starting to have issues, or the way mom was using caused it to "forget" things now and then.

    I fixed with a crucial mx100 or 200 (forget LOL) that still has over 90% life either way, the Agility 3 was "warning" though still showed as over 75% life left (christmas '18-19) .. def massive speed up by swapping to more modern as well as doing some cleaning for it..
  • Samus - Wednesday, June 26, 2019 - link

    I agree, I hated how they changed the internals without leaving any inclination of a change on the label.

    But the thing that doesn't stop me from recommending them: had anyone ever actually seen a Kingston drive fail?

    It seems their firmware and chip binning is excellent. The latter of which is easy for a company that makes so many God damn USB flash drives and can use the shitty NAND elsewhere...
  • jabber - Tuesday, June 25, 2019 - link

    Kingston are my go to budget SSD brand. I bought dozens of those much moaned at V300 SSDs in the day. Did I care? No, because they were light years better than any 5400rpm pile of junk in a laptop or desktop.

    The other reason? Not one of them to date has failed. Including the V400 and onwards.

    They may not be the fastest (what's 30MBps between friends) but they are solid drives.

    Nothing more boring than a top end enthusiast SSD that is bust.

    Recommended.
  • GNUminex_l_cowsay - Tuesday, June 25, 2019 - link

    This whole article raises a question for me. Why is SATA still locked into 6Gbps? I get that there is an alternative higher performance interface but considering how frequently USB 3 has had its bandwidth upgraded lately it seems like a maximum bandwidth increase should be reasonable.
  • thomasg - Tuesday, June 25, 2019 - link

    There's just no point in updating SATA.
    6 Gbps is plenty for low-performance systems, SATA works well, is cheap and simple.

    For all that need more performance, the market has moved to PCIe and NVMe in their various form factors, which is just a lot more expensive (especially due to the numerous and frequently changed form factors).

    USB, as not only an, but THE external port that all users are facing has a lot more pressure behind it to get updated.
    Users touch USB all the time, there's demand for a lot of things over USB; most users never touch internal drives (in fact, most users actively buy hardware without replaceable internal drives), so there's no point in updating the standard.
    The manufacturers can just spin new ports and new connectors, since they ship only complete systems anyway.
  • Dug - Tuesday, June 25, 2019 - link

    "There's just no point in updating SATA."
    That could be said for USB, pci, etc.
    There is a very good reason to go beyond an interface that is already saturated, and it doesn't have to be relegated to low performance systems.
  • Samus - Wednesday, June 26, 2019 - link

    SATA is an ancient way of transferring data. Why have a host controller on the PCI BUS when you can have a native PCIe device like NVMe. Further, SATA even with AHCI simply lacks optimization for flash storage. There doesn't seem to be an elegant way of adding NVMe features to SATA without either losing backwards compatibility with AHCI devices or adding unnecessary complexity.
  • TheUnhandledException - Saturday, June 29, 2019 - link

    SATA the protocol was built around supporting spinning discs. Making it work at all for solid state drives was a hack. A hack with a lot of unnecessary overhead. It was useful because it provided a way to put flash drives on existing systems. Future flash will use NVMe over PCIe directly. The only reason for upgrading SATA would be if hard drives actually needed >600 MB/s and they likely never will. So while we will have faster and faster interfaces for drives it won't be SATA. It would be like saying well because we made HDMI/DP faster and faster why not enhance VGA port to support 8K. I mean in theory we could but VGA to support a digital display is a hack and largely just existed for backwards and forwards compatibility because of analog displays.
  • MDD1963 - Tuesday, June 25, 2019 - link

    Why limit yourself to 550 MB/sec? I think having 6-8 ports of SATA4/SAS spec (12 Gbps) would breathe new life into local storage solutions...(certainly a NAS would be limited by even 10 Gbps networks, however, so equipped, but,..gotta start somewhere with incremental improvements, and, many SATA3 spec drives have now been limited to 500-550 MB/sec for years!)
  • Spunjji - Wednesday, June 26, 2019 - link

    You kinda covered the reason right there - where the performance is really needed, SAS (or PCIe) is where it's at. There really is no call for a higher-performing SATA standard.
