Silverstone MS12 and Yottamaster HC2-C3 USB 3.2 Gen 2x2 20Gbps Enclosures Reviewed
by Ganesh T S on August 12, 2021 10:00 AM EST
The 2021 AnandTech DAS Testbed and Test Suite
The evaluation routines for direct-attached storage devices – portable SSDs, storage bridges (including RAID enclosures), and memory cards – all utilize the same testbed and similar workloads, with slight tweaks based on the end market for each product. Our testbeds have kept pace with the introduction of new external interfaces - Thunderbolt 2, Thunderbolt 3, and USB 3.2 Gen 2 via Type-C. In mid-2014, we prepared a custom desktop based on Haswell, which was then upgraded to Skylake in early 2016. A botched Thunderbolt 3 firmware upgrade on the Skylake machine forced a shift to the Hades Canyon NUC in early 2019. With USB 3.2 Gen 2x2 gaining traction, the inability to use an add-in card in the Hades Canyon NUC meant that we had to go hunting for a new DAS testbed platform.
I/O ports on PCs are enabled by one or more of the following approaches:
- Direct interface from the SoC / CPU
- Direct interface from the Platform Controller Hub (PCH) - either integrated into the processor package or discrete
- External bridge chip interfacing between one of the direct interfaces above or other available high-speed I/O lanes and the relevant I/O port
The first option is the preferred one, followed by an external bridge chip that connects to the high-speed I/O lanes directly from the SoC / processor. Connecting via the PCH is the least preferred, as it can introduce DMI bottlenecks as the data shuttles between the processor and the PCH. In almost all Thunderbolt 3 platforms prior to Ice Lake, the OEMs ended up connecting the Thunderbolt controller to the PCH's PCIe lanes. One of the requirements for the new testbed, therefore, was to ensure that we had PCIe lanes off the processor exposed via multiple slots in a compact package (as similar to the previous testbed - the Hades Canyon NUC - as possible).
The 2021 AnandTech DAS Testbed
After considering various options in the market, we figured out that the Quartz Canyon NUC (essentially, the Xeon / ECC version of the Ghost Canyon NUC) was a good fit for our requirements. Intel provided us with a sample of the Quartz Canyon NUC, and ADATA helpfully sponsored 2x 16GB DDR4-3200 ECC SODIMMs and a PCIe 3.0 x4 NVMe SSD - the IM2P33E8 1TB.
The most attractive aspect of the Quartz Canyon NUC is the presence of two PCIe slots (electrically, x16 and x4) for add-in cards. In the absence of a discrete GPU - for which there is no need in a DAS testbed - both slots are available. In fact, we also added a spare SanDisk Extreme PRO M.2 NVMe SSD to the CPU direct-attached M.2 22110 slot in the baseboard in order to avoid DMI bottlenecks when evaluating Thunderbolt 3 devices. This still allows for two add-in cards operating at x8 (x16 electrical) and x4 (x4 electrical).
One of the issues with the Yottamaster PCIe AIC used in our previous USB 3.2 Gen 2x2 review was the need for a SATA power connector. SilverStone provided their SST-ECU06 add-in card based on the ASMedia ASM3242 host controller for use in the testbed.
The specifications of the testbed are summarized in the table below:
|AnandTech DAS Testbed Configuration|
|System||Intel Quartz Canyon NUC9vXQNX|
|CPU||Intel Xeon E-2286M|
|Memory||ADATA Industrial AD4B3200716G22 32 GB (2x 16GB) DDR4-3200 ECC @ 22-22-22-52|
|OS Drive||ADATA Industrial IM2P33E8 NVMe 1TB|
|Secondary Drive||SanDisk Extreme PRO M.2 NVMe 3D SSD 1TB|
|Add-on Card||SilverStone SST-ECU06 USB 3.2 Gen 2x2 Type-C Host|
|OS||Windows 10 Enterprise x64 (21H1)|
|Thanks to ADATA, Intel, and SilverStone for the build components|
The bus configuration of the testbed is shown in the annotated picture below.
External I/O devices are capped at 40Gbps for the near future, with USB4 products just starting to appear in the market. The USB 3.x products top out at 20Gbps (for the Gen 2x2 variant), and all incoming products belonging to the USB 3.x category are evaluated using the Type-C host port enabled by the SilverStone SST-ECU06 add-in card. For Thunderbolt 3 devices, the Type-C port enabled by the Titan Ridge controller is used.
The plans for the new testbed were set in motion late last year, and our hope was that an add-in card with Thunderbolt 4 / USB4 support would become ready by the time Thunderbolt 4 / USB4 peripherals started rolling in. A system based on Tiger Lake such as the Beast Canyon NUC (with processor-attached Thunderbolt 4 / USB4 ports and spare PCIe slots for a USB 3.2 Gen 2x2 AIC) would have been an ideal single system capable of evaluating all types of direct-attached storage devices. Unfortunately, the Beast Canyon NUC was a long way off when we started planning for this testbed. When the USB4 / Thunderbolt 4 peripherals start to come in, our plan is to evaluate them using the Titan Ridge host as well as the Thunderbolt 4 ports off one of the Tiger Lake mini-PCs to figure out the performance differences. In the worst case, we might end up using a different testbed for Thunderbolt 4 / USB4 devices.
The 2021 AnandTech DAS Suite
The testbed hardware is only one segment of the evaluation. Over the last few years, typical direct-attached storage workloads have also evolved. High bit-rate 4K videos at 60fps have become quite common, and 8K videos are starting to make an appearance. Game install sizes have also grown steadily, thanks to high-resolution textures and artwork. Backups tend to involve a larger number of files, many of which are small in size. Vendors have responded appropriately, with 4TB bus-powered units already available in the market. Keeping these in mind, we have adopted some tweaks to our evaluation methodology.
The evaluation scheme for DAS units involves multiple workloads which are described in detail in the corresponding sections.
- Synthetic workloads using CrystalDiskMark and ATTO
- Real-world access traces using PCMark 10's storage benchmark
- Custom robocopy workloads reflective of typical DAS usage
- Sequential write stress test
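As an illustration of the last item in the list above, a sequential write stress test boils down to streaming data to the drive for an extended period while sampling throughput, so that behavior such as SLC-cache exhaustion or thermal throttling shows up as a drop over time. The sketch below is a minimal, hypothetical Python version of such a test (the function name, sizes, and the use of `os.fsync` to defeat OS-level caching are our own assumptions, not the suite's actual implementation):

```python
import os
import time

def sequential_write_stress(path, total_bytes=1 << 30, chunk_bytes=16 << 20):
    """Stream zero-filled chunks to `path`, forcing each chunk to the device,
    and record per-chunk throughput samples as (seconds, MB/s) tuples.

    A sustained drop in the MB/s column over the run points to SLC-cache
    exhaustion or thermal throttling in the device under test.
    """
    chunk = bytes(chunk_bytes)      # zero-filled payload for each write
    samples = []
    written = 0
    with open(path, "wb", buffering=0) as f:
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(chunk)
            os.fsync(f.fileno())    # push the data past the OS page cache
            dt = time.perf_counter() - t0
            written += chunk_bytes
            samples.append((dt, chunk_bytes / dt / 1e6))
    return samples
```

A real run would use a much larger `total_bytes` (hundreds of GB, per the suite's traffic figures) and plot the samples against elapsed time.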
The new test suite makes the following updates:
- Updated CrystalDiskMark software version to 8.0.2 from 7.0.0
- Increased ATTO file size from 8 GB to 32 GB
- Increased CrystalDiskMark workload span from 8GiB to 32GiB
- Restructured custom robocopy workloads, with multi-threaded copying enabled
- Temperature tracking enabled during all idling intervals
The major update is in our custom robocopy workloads - the earlier version transferred around 42GB of data back and forth from the DAS in each of three iteration sets, for a total of around 250GB of traffic. The new version ups this to around 95GB in each direction per iteration set, for a total of around 570GB of traffic.
The robocopy workloads have typically transferred data between the DAS and a RAM drive to remove any bottlenecks from the testbed's storage subsystem. In the new test suite version, we also include a disk-to-disk iteration set where the whole suite (around 318GB) is transferred from the CPU-attached NVMe SSD to the DAS and back. One of the interesting aspects discovered while developing the new workloads was that the high-speed DAS units were woefully under-utilized when subjected to transfers using default robocopy parameters. Using the /MT:32 parameter helps queue up more traffic to the DAS and make optimal use of the capabilities of NVMe SSDs. A downside is the increased CPU usage compared to non-multi-threaded copies, but that is rarely a concern even for processors operating in the 15 - 28W TDP envelope. Practically speaking, the ability to use the MT option effectively depends on how busy the host system is with other tasks taxing the CPU.
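The effect of robocopy's /MT flag is to keep many file copies in flight at once, so the device's command queue stays occupied instead of draining between single-threaded copies. A rough, platform-neutral Python sketch of the same idea is shown below (the function name and thread count are illustrative; robocopy itself is a Windows tool with its own internals, and this is only an approximation of its multi-threaded behavior):

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def multithreaded_copy(src_dir, dst_dir, threads=32):
    """Copy every file under src_dir to dst_dir using a pool of worker
    threads - roughly what robocopy's /MT:32 does. Multiple copies in
    flight keep a fast NVMe-backed DAS's queue from running dry."""
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    files = [p for p in src_dir.rglob("*") if p.is_file()]

    def copy_one(p):
        target = dst_dir / p.relative_to(src_dir)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)     # copy data and metadata

    with ThreadPoolExecutor(max_workers=threads) as pool:
        # list() forces completion and re-raises any worker exception
        list(pool.map(copy_one, files))
```

As noted above, the benefit comes at the cost of extra CPU time, and a heavily loaded host may see little gain from raising the thread count.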