Introduction and Testbed Setup

QNAP has focused on Intel's Bay Trail platform for this generation of NAS units (compared to Synology's efforts with Intel Rangeley). While the choice made sense for the home user / prosumer-targeted TS-x51 series, we were a bit surprised to see the TS-x53 Pro series (targeting business users) also use the same Bay Trail platform. Having evaluated 8-bay solutions from Synology (the DS1815+) and Asustor (the AS7008T), we requested QNAP to send over their 8-bay solution, the TS-853 Pro-8G. Hardware-wise, the main differences between the three units lie in the host processor and the amount of RAM.

The specifications of our sample of the QNAP TS-853 Pro are provided in the table below.

QNAP TS-853 Pro-8G Specifications
Processor Intel Celeron J1900 (4C/4T Silvermont x86 @ 2.0 GHz)
RAM 8 GB
Drive Bays 8x 3.5"/2.5" SATA II / III HDD / SSD (Hot-Swappable)
Network Links 4x 1 GbE
External I/O Peripherals 3x USB 3.0, 2x USB 2.0
Expansion Slots None
VGA / Display Out HDMI (with HD Audio Bitstreaming)
Full Specifications Link QNAP TS-853 Pro-8G Specifications
Price USD 1195

Note that the $1195 price point is for the 8GB RAM version. The default 2 GB version retails for $986. The extra RAM is important if the end user wishes to take advantage of the unit as a VM host using the Virtualization Station package.

The TS-853 Pro runs Linux (kernel version 3.12.6). Other aspects of the platform can be gleaned by accessing the unit over SSH.

Compared to the TS-451, we find that the host CPU is now a quad-core Celeron (J1900) instead of a dual-core one (J1800). The amount of RAM is doubled. However, the platform and setup impressions are otherwise similar to the TS-451. Hence, we won't go into those details in our review.

One of the main limitations of the TS-x51 units is that they can have only one virtual machine (VM) active at a time. The TS-x53 Pro relaxes that restriction and allows two simultaneous VMs. Between our review of the TS-x51 and this piece, QNAP introduced QvPC, a unique way to use the display output from the TS-x51 and TS-x53 Pro series. We will first take a look at the technology and how it shaped our evaluation strategy.

Beyond QvPC, we follow our standard NAS evaluation routine - benchmark numbers for both single and multi-client scenarios across a number of different client platforms as well as access protocols. We have a separate section devoted to the performance of the NAS with encrypted shared folders, as well as RAID operation parameters (rebuild durations and power consumption). Prior to all that, we will take a look at our testbed setup and testing methodology.

Testbed Setup and Testing Methodology

The QNAP TS-853 Pro can take up to 8 drives. Users can opt for JBOD, RAID 0, RAID 1, RAID 5, RAID 6 or RAID 10 configurations. We expect typical usage to involve multiple volumes in a RAID-5 or RAID-6 disk group. However, to keep things consistent across different NAS units, we benchmarked a single RAID-5 volume across all disks. Eight Western Digital WD4000FYYZ RE drives were used as the test disks. Our testbed configuration is outlined below.
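As a quick illustration (a sketch only, not QNAP's own tooling; real volumes lose a bit more capacity to metadata and filesystem overhead), the usable space each supported RAID level yields with eight 4 TB drives works out as follows:

```python
# Rough usable-capacity figures for the RAID levels the TS-853 Pro supports.
# Drive counts only; metadata/filesystem overhead is ignored.
def usable_tb(n_drives, drive_tb, level):
    if level in ("JBOD", "RAID0"):
        return n_drives * drive_tb
    if level == "RAID1":
        return drive_tb                     # every drive mirrors one
    if level == "RAID5":
        return (n_drives - 1) * drive_tb    # one drive's worth of parity
    if level == "RAID6":
        return (n_drives - 2) * drive_tb    # two drives' worth of parity
    if level == "RAID10":
        return n_drives * drive_tb // 2     # striped mirror pairs
    raise ValueError(f"unknown RAID level: {level}")

# Eight 4 TB drives, as in our test configuration:
for level in ("RAID5", "RAID6", "RAID10"):
    print(f"{level}: {usable_tb(8, 4, level)} TB")
```

With eight 4 TB drives, RAID-5 gives 28 TB of raw space against RAID-6's 24 TB, which is part of why single-parity configurations remain tempting despite the rebuild-risk concerns discussed in the comments.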

AnandTech NAS Testbed Configuration
Motherboard Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU 2 x Intel Xeon E5-2630L
Coolers 2 x Dynatron R17
Memory G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive OCZ Technology Vertex 4 128GB
Secondary Drive OCZ Technology Vertex 4 128GB
Tertiary Drive OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)
Other Drives 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis SilverStoneTek Raven RV03
PSU SilverStoneTek Strider Plus Gold Evolution 850W
OS Windows Server 2008 R2
Network Switch Netgear ProSafe GSM7352S-200

The above testbed runs 25 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 25 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Comments

  • hrrmph - Monday, December 29, 2014 - link

    How about a shrunken down unit with 8 x 2.5 inch bays and some 1TB SSDs?

    When will the bandwidth to get the data in and out quickly be available?
  • Jeff7181 - Monday, December 29, 2014 - link

    It's available today if you can afford to spend a few thousand dollars on 10 GbE or Fibre Channel.
  • fackamato - Monday, December 29, 2014 - link

    "8 x 2.5 inch bays and some 1TB SSDs"

    So say RAID 5: 7 TB of data, ~3.5 GB/s or ~28 Gb/s. Yeah, you need to bond 3x 10Gb interfaces for that, at least.
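The arithmetic above can be sketched out, assuming roughly 450 MB/s of sequential read per SATA SSD (an assumed, typical figure for the era) and near-linear read scaling across the eight-drive RAID-5 set:

```python
import math

# Back-of-the-envelope check of the aggregate-bandwidth claim:
# 8 SATA SSDs at an assumed ~450 MB/s each, reads scaling ~linearly.
n_ssds = 8
per_drive_mb_s = 450
agg_gb_s = n_ssds * per_drive_mb_s / 1000      # aggregate GB/s
agg_gbit_s = agg_gb_s * 8                      # aggregate Gb/s
links_needed = math.ceil(agg_gbit_s / 10)      # bonded 10 GbE links required
print(f"{agg_gb_s:.1f} GB/s = {agg_gbit_s:.1f} Gb/s -> {links_needed}x 10GbE")
```

That lands at ~3.6 GB/s, or ~29 Gb/s, i.e. three bonded 10 GbE links, in line with the comment's estimate.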
  • SirGCal - Monday, December 29, 2014 - link

    Why are all of these 8-drive setups configured as RAID-5? Personally, the entire point of having so many disks is more redundancy. At least RAID 6 (or even RAID-Z3).

    Personally, I have a 24 TB array and a 12 TB array, effectively. Each is an 8-drive server (not one of these pre-built boxes, but an actual server), one with 4 TB and one with 2 TB drives, RAID 6 and Z2. Both easily outperform the networks they are attached to, but they were designed to be as reasonably secure as possible, and they are plenty fast for even small business use. I have to lose 3 drives to lose data.

    When you do lose a drive, the rebuild - the time it takes and the stress it puts on the remaining drives - is when you are most likely to lose another one. Assuming you don't do look-ahead drive replacement, etc., and just run the array into the ground... once one drive fails, the others are all tired and aging, and the stress of rebuilding can cause another one to go. Should that happen in RAID 5, you're done. With RAID 6, you at least have one more layer of protection.

    Knock on wood, I've only ever had a RAID 6 rebuild fail once, whereas I've had multiple RAID 5s fail, and that's over many dozens of servers and many, many years (decades). Hence why moving to RAID 6 was important. IMHO, RAID 5 is fine for systems with <= 5 drives. But beyond that, especially with larger drives pushing rebuild times up, more redundancy is the whole point of having more drives in a unit (assuming one single volume, etc. - there are always other configurations with multiple RAID 5 or other volumes...).

    Just my opinion, but that's what I see when I look at all of the RAID 5 tests on these could-be very large arrays. And I'm not even going into the cost of these units, but I don't see RAID 6 times tested at all on the final page. If I were ever to get something like this, RAID 6 performance would be the foremost important area.
  • Icehawk - Monday, December 29, 2014 - link

    Agreed - I run a RAID 1 (just 2 HDs) at home, and its sole purpose is live backup/redundancy of my critical files; I don't really care about speed, just data security. I don't work in IT anymore, but when I did, that was also the driving force behind our RAID setups. Is this no longer the case?
  • kpb321 - Monday, December 29, 2014 - link

    I am not an expert, but my understanding is that it is more than just that. Drive sizes have increased so much that rebuilding a large array to replace a failed disk reads so much data that the drives' Unrecoverable Error Rate becomes a factor, and a fully functional drive may still throw a read error. At that point, the best case is that the system retries the read and gets the right data, or ignores the error and continues on, corrupting just that piece of data; the worst case is that the RAID now marks that drive as failed and thinks you've lost all your data to a two-drive failure.

    The first random article about this topic =)
    http://subnetmask255x4.wordpress.com/2008/10/28/sa...
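To put a number on the URE argument - a sketch that takes the consumer 10^-14 spec-sheet rate at face value and assumes independent bit errors (both strong assumptions, as the next comment points out):

```python
import math

def p_ure_during_rebuild(data_read_tb, ure_rate_per_bit=1e-14):
    """Probability of hitting at least one URE while reading the surviving
    drives during a rebuild, assuming independent errors at the quoted
    spec-sheet rate (a strong assumption)."""
    bits_read = data_read_tb * 1e12 * 8
    # P(at least one error) = 1 - (1 - p)^bits, computed via log1p for accuracy
    return 1 - math.exp(bits_read * math.log1p(-ure_rate_per_bit))

# Rebuilding one failed 4 TB drive in an 8-drive RAID-5 means reading
# the 7 surviving drives in full:
print(f"{p_ure_during_rebuild(7 * 4):.0%}")
```

At face value that works out to roughly a 90% chance of at least one URE per rebuild, which is exactly the kind of headline number the comment below argues should be taken with a big grain of salt, since the published rates are worst-case spec figures rather than measured field behavior.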
  • shodanshok - Wednesday, December 31, 2014 - link

    Please read all the articles speaking about URE rates with a (big) grain of salt.

    Articles like the one linked above suggest that a multi-TB disk is doomed to regularly throw UREs. I even read one article stating that with consumer URE rates (10^-14) it would be almost impossible to read a 12+ TB array without error.

    These statements are (generally) wrong, as we don't know _how_ the manufacturers arrive at the published URE numbers. For example, we don't know if they refer to a very large disk population or to a smaller set of aging (end-of-warranty) disks.

    Here you can find an interesting discussion with other (very knowledgeable) people on the linux-raid mailing list: http://marc.info/?l=linux-raid&m=1406709331293...

    For the record: I read over 50 TB from an aging 500 GB Seagate 7200.12 disk without a single URE. At the same time, much younger disks (a 500 GB WD Green and a Seagate 7200.11) gave me UREs much sooner than I expected.

    Bottom line: while UREs are a _real_ problem (and the main reason to ditch single-parity RAID schemes, especially on hardware RAID cards, where a single unrecoverable read error in an already-degraded scenario can kill the entire array), many articles on the subject are absolutely wrong in their statements.

    Regards.
  • PaulJeff - Monday, December 29, 2014 - link

    Having been in the storage arena for a long time: you have to look at performance and storage requirements. If you need high IOPS with low overhead from RAID-based reads and writes, RAID5 carries less of a penalty than RAID6. In terms of data protection, mathematically, RAID6 is more "secure" against an unrecoverable read error (URE) during a rebuild with high-capacity (>2TB) drives and 4 or more drives in the array.

    I never rebuild RAID arrays whether they be hardware or software-based (ZFS) due to the issue of URE and critically long rebuild times. I make sure I have perfect backups because of this. Blow out the array, recreate the array or zpool and restore data. MUCH faster and less likely to have a problem. Risk management at work here.

    To get over the IOPS issue with a large number of disks in an array, I use ZFS and max out the onboard RAM, with a large L2ARC when running VMs. For database and file storage, lots of RAM and decently sized L2ARC and ZIL are key.
  • SirGCal - Tuesday, December 30, 2014 - link

    My smaller array mirrors the critical folders on the bigger one - a simple rsync every night. And I have built similar arrays in pairs that mirror each other all the time for just that reason. However, I haven't had an issue with rebuild times... even on my larger 24TB array, the rebuild takes ~14 hours, whereas a full copy of even just the essential parts of the 12TB array would take longer over a standard 1G network. The 'cannot live without' bits are stored off-site, sure, but pulling them back down over our wonderfully fast (sarcastic) USA internet would be painful too. I think it comes down to how long your array actually takes to rebuild vs. repopulate; my very large arrays can rebuild much faster than they can repopulate, for example.
  • ganeshts - Tuesday, December 30, 2014 - link

    The reason we do RAID-5 is just to have a standard comparison metric across different NAS units that we evaluate. RAID-5 stresses the parity acceleration available, while also evaluating the storage controllers (direct SATA, SATA - PCIe bridges etc.)

    I do mention in the review that we expect users to have multiple volumes (possibly with different RAID levels) for units with 6 or more bays when using in real life.

    We could do RAID-6 comparison if we had more time for evaluation at our disposal :) Already, testing our RAID-5 rebuild / migration / expansion takes 5 - 6 days as the table on the last page shows.
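The parity computation that RAID-5 stresses is, at its core, a byte-wise XOR across the data strips. A minimal sketch of both the parity write and a single-strip rebuild:

```python
from functools import reduce

# Three data strips of a toy RAID-5 stripe (4 bytes each).
strips = [bytes([i] * 4) for i in (1, 2, 3)]

# Parity is the XOR of every data strip, column by column.
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

# If any one strip is lost, XORing the survivors with parity recovers it.
# Here we "lose" strips[2] and rebuild it:
rebuilt = bytes(a ^ b ^ p for a, b, p in zip(strips[0], strips[1], parity))
assert rebuilt == strips[2]
```

This is why rebuilds are both CPU- and I/O-intensive: reconstructing one strip requires reading every surviving strip in the stripe, across the whole array.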
