The first version of the Non-Volatile Memory Express (NVMe) standard was ratified almost five years ago, but its development didn't stop there. While SSD controller manufacturers have been hard at work implementing NVMe in more and more products, the protocol itself has acquired new features. Most of these additions are optional and aimed at enterprise scenarios like virtualization and multi-path I/O, but one feature introduced in the NVMe 1.2 revision has been picked up by a controller that is likely to see use in the consumer space.

The Host Memory Buffer (HMB) feature in NVMe 1.2 allows a drive to request exclusive access to a portion of the host system's RAM for the drive's private use. This kind of capability has been around forever in the GPU space under names like HyperMemory and TurboCache, where it served a similar purpose: to reduce or eliminate the dedicated RAM that needs to be included on peripheral devices.
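
For the curious, here is a rough sketch of how that negotiation looks from the host driver's side under NVMe 1.2: the controller advertises preferred and minimum buffer sizes in its Identify Controller data, and the host grants (or declines) the memory through a Set Features command for the Host Memory Buffer feature. The names and helper functions below are illustrative paraphrases rather than code from any real driver.

```c
#include <stdint.h>
#include <stdlib.h>

/* Host Memory Buffer hints from the Identify Controller data (NVMe 1.2):
 * the controller advertises a preferred size (HMPRE) and a minimum size
 * (HMMIN), both in 4 KiB units. HMPRE == 0 means no HMB support. */
struct hmb_hints {
    uint32_t hmpre;
    uint32_t hmmin;
};

/* Stub standing in for the real admin command: Set Features for the Host
 * Memory Buffer feature, pointing the controller at the granted memory.
 * A real driver would build a descriptor list of host pages here. */
static int nvme_set_hmb_feature(void *buffer, uint32_t size_units, int enable)
{
    (void)buffer; (void)size_units; (void)enable;
    return 0; /* pretend the controller accepted the buffer */
}

/* Driver-side policy sketch: honour the controller's request if the host can
 * spare the RAM, otherwise leave the drive running in DRAM-less mode. */
static int try_enable_hmb(const struct hmb_hints *h, uint32_t host_budget_units)
{
    if (h->hmpre == 0)
        return 0;                    /* controller never asked for an HMB */

    uint32_t grant = h->hmpre;
    if (grant > host_budget_units)
        grant = host_budget_units;   /* grant less than preferred if needed */
    if (grant < h->hmmin)
        return 0;                    /* below the minimum, not worth enabling */

    void *buf = aligned_alloc(4096, (size_t)grant * 4096);
    if (buf == NULL)
        return 0;                    /* host can't actually spare the RAM */

    return nvme_set_hmb_feature(buf, grant, 1) == 0;
}
```

If the feature can't be enabled for any reason, the drive simply keeps operating the way a DRAM-less controller does today.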

Modern high-performance SSD controllers use a significant amount of RAM, typically in a ratio of about 1GB of RAM for every 1TB of flash. The controllers are usually conservative about using that RAM as a cache for user data (to limit the damage from a sudden power loss); instead, it is mostly used to store the organizational metadata the controller needs to keep track of what data is stored where on the flash chips. The goal is that when the drive receives a read or write request, it can determine which flash memory location to access with a much quicker lookup in the controller's DRAM, and the drive doesn't need to update the metadata copy stored on the flash after every single write operation completes. For fast, consistent performance, the data structures are chosen to minimize the amount of computation and the number of RAM lookups required, at the expense of needing more RAM.
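
As a rough illustration of where that ratio comes from: a common approach is a flat logical-to-physical table with one 4-byte entry per 4KiB page, so a 1TB drive needs about 268 million entries, or roughly 1GB of table. The sketch below shows that kind of single-lookup translation in generic terms; it is not any vendor's actual firmware.

```c
#include <stdint.h>

#define PAGE_SHIFT 12   /* 4 KiB logical pages */

/* Generic flat logical-to-physical (L2P) map: one 32-bit entry per logical
 * page. The table normally lives in the controller's DRAM. */
struct l2p_map {
    uint32_t *table;
    uint64_t  num_pages;
};

/* DRAM needed for the table: (capacity / 4 KiB) entries * 4 bytes each.
 * For 1 TB of flash that works out to roughly 1 GiB of mapping data. */
static uint64_t l2p_table_bytes(uint64_t capacity_bytes)
{
    return (capacity_bytes >> PAGE_SHIFT) * sizeof(uint32_t);
}

/* Translate a logical byte address to a physical flash page with a single
 * RAM lookup instead of walking mapping structures stored on the flash. */
static uint32_t l2p_lookup(const struct l2p_map *m, uint64_t logical_addr)
{
    uint64_t lpn = logical_addr >> PAGE_SHIFT;   /* logical page number */
    return m->table[lpn];
}
```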

At the low end of the SSD market, recent controllers have instead cut costs by forgoing external DRAM altogether. Leaving out the DRAM saves die size and pin count on the controller, reduces the drive's PCB complexity, and removes the DRAM chip from the bill of materials, savings that add up to a competitive advantage in product segments where performance is a secondary concern and every cent counts. Silicon Motion's DRAM-less SM2246XT controller has taken market share from the company's own already cheap SM2246EN, and in the TLC space almost everybody is moving toward DRAM-less options.

The downside is that without ample RAM, it is much harder for SSDs to offer high performance. With clever firmware, DRAM-less SSDs can cope surprisingly well using just the controller's on-chip buffers, but they are still at a disadvantage. That's where the Host Memory Buffer feature comes in. With only two NAND channels, the 88NV1140 probably can't saturate its PCIe 3.0 x1 link even under the best circumstances, so there will be bandwidth to spare for other transfers to and from the host system. PCIe transactions and host DRAM accesses are measured in tens or hundreds of nanoseconds, compared to tens of microseconds for reading from flash, so a Host Memory Buffer can clearly be fast enough to be useful for a low-end drive.
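
A quick back-of-the-envelope check makes the bandwidth argument concrete. The per-channel NAND figure below is an assumption for illustration, not a Marvell specification; the PCIe number follows from the 8 GT/s signalling rate and 128b/130b encoding of PCIe 3.0.

```c
#include <stdio.h>

int main(void)
{
    /* PCIe 3.0 x1: 8 GT/s * (128/130) / 8 bits per byte ~= 985 MB/s usable. */
    const double pcie_gen3_x1_mbps = 985.0;

    /* Assumed per-channel NAND throughput, purely for illustration. */
    const double nand_channel_mbps = 400.0;
    const int    channels          = 2;       /* the 88NV1140 has two */

    double flash_mbps = nand_channel_mbps * channels;
    double spare_mbps = pcie_gen3_x1_mbps - flash_mbps;

    printf("Flash side: %.0f MB/s, link: %.0f MB/s, spare for HMB traffic: %.0f MB/s\n",
           flash_mbps, pcie_gen3_x1_mbps, spare_mbps);
    return 0;
}
```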

The trick, then, is to figure out how to get the most out of a Host Memory Buffer while remaining prepared to operate in DRAM-less mode if the host's NVMe driver doesn't support HMB or the host decides it can't spare the RAM. SSD vendors are universally tight-lipped about the algorithms in their firmware, and Marvell controllers are usually paired with custom or third-party licensed firmware anyway, so we can only speculate about how an HMB will be used with the new 88NV1140 (one plausible possibility is sketched after this paragraph). Furthermore, the requirement of driver support on the host side means this feature will likely be used in embedded platforms long before it finds its way into retail SSDs, and this particular Marvell controller may never show up in a standalone drive. But in a few years' time it might be standard for low-end SSDs to borrow a bit of your system's RAM, a practice that becomes less of a concern as successive platforms ship standard systems with more DRAM.
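
Purely as speculation, one plausible approach would be to stage part of that logical-to-physical table in the host buffer and fall back to fetching mapping pages from flash whenever the buffer is absent or too small. The sketch below is a hypothetical firmware-side policy along those lines, not anything Marvell has disclosed.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical firmware-side view of a granted Host Memory Buffer used as a
 * cache for logical-to-physical mapping entries. num_entries is zero when
 * the host declined the request and the drive runs in pure DRAM-less mode. */
struct hmb_region {
    uint32_t *entries;
    uint64_t  num_entries;
};

/* Stub for the DRAM-less path: fetch the needed mapping entry from NAND.
 * Flash reads cost tens of microseconds versus well under a microsecond for
 * a PCIe round trip to host DRAM, which is the whole point of the HMB. */
static uint32_t read_mapping_entry_from_flash(uint64_t lpn)
{
    (void)lpn;
    return 0; /* placeholder physical page number */
}

static uint32_t lookup_physical_page(const struct hmb_region *hmb, uint64_t lpn)
{
    if (hmb->entries != NULL && lpn < hmb->num_entries)
        return hmb->entries[lpn];              /* fast path via host memory */
    return read_mapping_entry_from_flash(lpn); /* fallback: DRAM-less mode */
}
```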

Source: Marvell

Comments

  • Pissedoffyouth - Tuesday, January 12, 2016 - link

    >it might be standard for low-end SSDs to borrow a bit of your system's RAM

    You could say that it's like the soft modem of old, but I really don't have an issue with this. My GPU uses system memory as it's integrated and RAM in a desktop or laptop is plentiful.

    Bring on cheap TLC 2TB drives!
  • nathanddrews - Tuesday, January 12, 2016 - link

    Or the network cards of today, the sound cards of today, etc. borrowing CPU cycles...

    Whatever it takes to get these 2TB+ SSDs in my hands, bring it on!
  • ddriver - Tuesday, January 12, 2016 - link

    You mean the lousy network or sound card of today. Good hardware is hardware accelerated.

    I don't see this as a move to bring down prices, but as a move to bring up profit margins by skimping on the hardware. SSD cache is best placed in the SSD; in system RAM it will still have to be transferred via the bus. Also, operating systems already have a RAM cache for the file system.

    All in all, a cheap move.
  • nathanddrews - Tuesday, January 12, 2016 - link

    Yes, but if it's cheap and it works... then who cares? This isn't for bleeding edge hardware in the first place. For most people, onboard networking and audio is just fine and they'll never appreciate the difference.
  • ddriver - Tuesday, January 12, 2016 - link

    As long as the purchase is a product of an informed decision it should be OK in this particular case. But generally it is a bad practice, cutting corners often results in serious damages or even casualties.
  • Justwow - Wednesday, January 13, 2016 - link

    "cutting corners often results in serious damages or even casualties."

    Are you going to kill yourself over your SSD using a bit of system RAM? What are you on about.
  • ddriver - Thursday, January 14, 2016 - link

    Improve your reading and reasoning skills, as I said "it should be OK in this particular case".

    But when a car manufacturer cuts corners and that results in lower yet still in the limits of "acceptable" reliability - that kills people, which is bad, even if the industry and regulators have deemed it acceptable.
  • icrf - Tuesday, January 12, 2016 - link

    The only way they get to bring up profit margins is if no one else does this and it's a competitive advantage. Once more than one supplier does, I think we'll see some price flexibility. If it bumps up performance enough, it may end up being a premium budget drive on that aspect alone, and priced accordingly.
  • close - Tuesday, January 12, 2016 - link

    You don't have L3/4 cache on all CPUs, you don't have dedicated memory for all GPUs, you don't have dedicated cache for all SSDs. There's room for expensive hardware and for cheap hardware. You can't have only the best on the market.

    A 5.25" SSD might help though.
  • extide - Tuesday, January 12, 2016 - link

    No, a 5.25" SSD won't help bring prices down. SSDs just don't take up much space, and they don't need to use alternate, more expensive technologies or production methods to fit into a 2.5" form factor. A 5.25" SSD would just have the same PCB as a 2.5" SSD and a ton of extra space. You could decide to fill all that space with more flash... but then you would end up with a drive costing many thousands of dollars, and there is essentially no market for such a device in the consumer space. Same goes for 3.5" SSDs.

    See, people have this idea in their head that smaller costs more, but the thing is a 2.5" SSD is not the small version, it is the default size!
