Intel's Caching History

Intel's first attempt at using solid-state memory for caching in consumer systems was Intel Turbo Memory, a mini-PCIe card with 1GB of flash to be used by the then-new Windows Vista features ReadyDrive and ReadyBoost. Promoted as part of the Intel Centrino platform, Turbo Memory was more or less a complete failure. The cache it provided was far too small and too slow—sequential writes in particular were much slower than a hard drive. Applications were seldom significantly faster, though in systems short on RAM, Turbo Memory made swapping less painfully slow. Battery life could sometimes be extended by allowing the hard drive to spend more time spun down at idle. Overall, most OEMs were not interested in adding more than $100 to a system for Turbo Memory.

Intel's next attempt at caching came as SSDs were moving into the mainstream consumer market. The Z68 chipset for Sandy Bridge processors added Smart Response Technology (SRT), an SSD caching mode for Intel's Rapid Storage Technology (RST) drivers. SRT could be used with any SATA SSD, but cache sizes were limited to 64GB. Intel produced the SSD 311 and later the SSD 313 as caching-optimized SSDs, pairing low capacity with relatively high-performance SLC NAND flash. These SSDs started at $100 and had to compete against MLC SSDs that offered several times the capacity for the same price—enough that MLC SSDs were starting to become reasonable options for all general-purpose storage, with no hard drive at all.

Smart Response Technology worked as advertised but was very unpopular with OEMs, and it didn't really catch on as an aftermarket upgrade among enthusiasts. The rapidly dropping prices and increasing capacities of SSDs made all-flash configurations more and more affordable, while SSD caching still required extra work to set up and small cache sizes meant heavy users would still frequently experience uncached application launches and file loads.

Intel's caching solution for Optane Memory is not simply a reuse of the existing Smart Response Technology caching feature of their Rapid Storage Technology drivers. It relies on the same NVMe remapping feature added to Skylake chipsets to support NVMe RAID, but the caching algorithms are tuned for Optane. The Optane Memory software can be downloaded and installed separately, without the rest of the RST features.

Optane Memory caching has quite a few restrictions: it is only supported with Kaby Lake processors, and it requires a 200-series chipset or an HM175, QM175 or CM238 mobile chipset. Only Core i3, i5 and i7 processors are supported; Celeron and Pentium parts are excluded. Windows 10 64-bit is the only supported operating system. The Optane Memory module must be installed in an M.2 slot that connects to PCIe lanes provided by the chipset, and some motherboards will also have M.2 slots that do not support Optane Memory caching or RST RAID. The drive being cached must be SATA, not NVMe, and only the boot volume can be cached. Lastly, the motherboard firmware must have Optane Memory support to boot from the cached volume. Motherboards with the necessary firmware will include a UEFI tool to unpair the Optane Memory cache device from the backing device being cached, but this can also be performed with the Windows software.
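The restrictions above amount to a checklist that Intel's software walks through before enabling caching. The sketch below encodes that checklist in Python; the function name, parameters, and platform lists are illustrative assumptions for this article, not any actual Intel API:

```python
# Hypothetical sketch of the software-enforced Optane Memory
# compatibility checks described above; names are illustrative only.

SUPPORTED_CHIPSETS = {
    "Z270", "H270", "B250", "Q270", "Q250",  # 200-series desktop chipsets
    "HM175", "QM175", "CM238",               # supported mobile chipsets
}
SUPPORTED_CPU_BRANDS = ("Core i3", "Core i5", "Core i7")  # Celeron/Pentium excluded

def optane_caching_supported(cpu: str, chipset: str, os: str,
                             m2_uses_chipset_lanes: bool,
                             cached_drive_is_sata: bool,
                             caching_boot_volume: bool) -> bool:
    """Return True only if every software-enforced restriction is met."""
    return (
        any(cpu.startswith(brand) for brand in SUPPORTED_CPU_BRANDS)
        and chipset in SUPPORTED_CHIPSETS
        and os == "Windows 10 64-bit"       # only supported OS
        and m2_uses_chipset_lanes           # M.2 slot must hang off the chipset
        and cached_drive_is_sata            # NVMe drives cannot be cached
        and caching_boot_volume             # only the boot volume is eligible
    )
```

For example, a Kaby Lake Core i5 on a Z270 board passes every check, while the same configuration with a Pentium CPU or a Z170 (100-series) chipset is rejected purely in software.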

Many of these restrictions are arbitrary and enforced purely in software. The only genuine hardware requirement seems to be a Skylake 100-series or later chipset. The release notes for the final production release of the Optane Memory and RST drivers even include, in the list of fixed issues, the removal of the ability to enable Optane caching with a non-Optane NVMe cache device, and of the ability to enable Optane caching with a Skylake processor in a 200-series motherboard. Don't be surprised if these drivers get hacked to provide Optane caching on any Skylake system that can do NVMe RAID with Intel RST.

Intel's latest caching solution is not being pitched as a way of increasing performance in high-end systems; for that, Intel will offer full-size Optane SSDs for the prosumer market later this year. Instead, Optane Memory is intended to provide a boost for systems that still rely on a mechanical hard drive. It can be used to cache access to a SATA SSD or hybrid drive, but don't expect any OEMs to ship such a configuration—it won't be cost-effective. The goal of Optane Memory is to bring hard drive systems up to SSD levels of performance for a modest extra cost and without sacrificing total capacity.


  • evilpaul666 - Thursday, April 27, 2017 - link

    Everyone presumes that technology will improve over time. Talking up 1000x improvements, making people wait for a year or more, and then releasing a stupid expensive small drive for the Enterprise segment, and a not particularly useful tiny drive for whoever is running a Core i3 7000 series or better CPU with a mechanical hard drive, for some reason, is slightly disappointing.

    We wanted better stuff now, after a year of waiting, not at some point in the future, which is where we've always been.
    Reply
  • Lehti - Tuesday, April 25, 2017 - link

    Hmm... And how does this compare to regular SSD caching using Smart Response? So far I can't see why anyone would want an Optane cache as opposed to that or, even better, a boot SSD paired with a storage hard drive. Reply
  • Calin - Tuesday, April 25, 2017 - link

    Did you bring the WD Caviar to steady state by filling it twice with random data in random files? Performance of magnetic media varies greatly based on drive fragmentation. Reply
  • Billy Tallis - Wednesday, April 26, 2017 - link

    I didn't pre-condition any of the drives for SYSmark, just for the synthetic tests (which the hard drive wasn't included in). For the SYSmark test runs, the drives were all secure erased then imaged with Windows. Reply
  • MrSpadge - Tuesday, April 25, 2017 - link

    "Queue Depth > 1

    When testing sequential writes at varying queue depths, the Intel SSD DC P3700's performance was highly erratic. We did not have sufficient time to determine what was going wrong, so its results have been excluded from the graphs and analysis below."

    Yes, the DC P3700 is definitely excluded from these graphs.. and the other ones ;)
    Reply
  • Billy Tallis - Wednesday, April 26, 2017 - link

    Oops. I copied a little too much from the P4800X review... Reply
  • MrSpadge - Tuesday, April 25, 2017 - link

    Billy, why is the 960 Evo performing so badly under Sysmark 2014, when it wins almost all synthetic benchmarks against the MX300? Sure, it's got fewer dies.. but that applies to the low level measurements as well. Reply
  • Billy Tallis - Wednesday, April 26, 2017 - link

    I don't know for sure yet. I'll be re-doing the SYSmark tests with a fresh install of Windows 10 Creators Update, and I'll experiment with NVMe drivers and settings. My suspicion is that the 960 EVO was being held back by Microsoft's horrific NVMe driver default behavior, while the synthetic tests in this review were run on Linux. Reply
  • MrSpadge - Wednesday, April 26, 2017 - link

    That makes sense, thanks for answering! Reply
  • Valantar - Tuesday, April 25, 2017 - link

    Is there any reason why one couldn't stick this in any old NVMe-compatible motherboard regardless of platform and use a software caching system like PrimoCache on it? It identifies to the system as a standard NVMe drive, no? Or does it somehow identify the host system at POST and refuse to communicate if it provides the "wrong" identifier? Reply
