
  • damianrobertjones - Monday, August 10, 2015 - link

    Any chance of adding the price to the top of the page? I can then evaluate suitability etc.
  • SirGCal - Monday, August 10, 2015 - link

    I saw it a few times throughout the article including the top page. $845
  • colinstu - Monday, August 10, 2015 - link

    What's with the benchmarks that aren't compared to anything else?
  • ganeshts - Monday, August 10, 2015 - link

    This is the first of three file servers that we are reviewing with a completely new evaluation methodology. Since we are just starting out, there are no comparison numbers, but I do link to OpenBenchmarking.org pages for each of the tests - so one can see what other systems are capable of with respect to that benchmark.
  • DanNeely - Monday, August 10, 2015 - link

    I think U-NAS deserves more credit for the compactness of their case designs. DIY builds being much bulkier than units from Synology et al. has always been an issue for anyone in a space-constrained situation (or just with a spouse who grumbles about how much space all of your toys take up). This case is only 15% larger in volume than Synology's DS1815+ (12.4 vs 14.4 liters), and their 4-bay model comes even closer to the DS414 (7.8 vs 8.7 liters).

    Assuming I decide on a DIY box to replace my WHS2011 box in a few months, the U-NAS NSC-400 is exactly what I'm looking for in a chassis (and half the size of the box it'd be replacing).
  • xicaque - Monday, November 23, 2015 - link

    I have built two of these units and I am very happy with them. For $200 (case only, with some wires) it's not a bad deal. I am running FreeNAS.
  • DanNeely - Monday, August 10, 2015 - link

    Do you have any power consumption numbers available?
  • Johnmcl7 - Monday, August 10, 2015 - link

    There are some power figures on the last page, 70W under load and 38W idle.
  • DanNeely - Monday, August 10, 2015 - link

    Two numbers in the text are a far cry from the usual table of rebuild times and power levels shown in a normal NAS review.
  • ganeshts - Monday, August 10, 2015 - link

    Dan, yes, it felt a bit odd for me to leave out that table in a NAS review. The issue for me is that it is not worth spending more than 'N' hours evaluating a particular system / reviewing a particular product, and setting up the SPEC SFS 2014 benchmarks and processing the results took a lot of time.

    I hope to address this issue in future file server reviews (now that SPEC SFS 2014 seems to be stable), but not for the next two (which have already been evaluated and are just pending write-up).
  • nevenstien - Monday, August 10, 2015 - link

    An excellent article on a cost-effective file server/NAS DIY build with a good choice of hardware. After struggling with the dedicated NAS vs. file server question for over a year, I decided on FreeNAS, using jails for whatever services I want to run. I was not a FreeNAS fan before the latest versions; I found the older ones very opaque and confusing. My past experience with how painful hardware failures can be on storage systems, even at a PC level, convinced me that ZFS is the file system of choice for storage. The portability of the file system trumps everything else in my opinion. Whether you install FreeNAS or a ZFS-based Linux, ZFS should be the file system used. When a disk fails it's easy, and when the hardware fails it's just a matter of moving the disks to hardware that is not vendor dependent, which means basically any hardware with enough storage ports. The software packages of the commercial NAS vendors are great, but the main priorities for me are data integrity, reliability and portability rather than other services like serving video, web hosting or personal cloud.
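
    To give a concrete idea of that portability, moving a pool between boxes is roughly just the following (a minimal sketch; 'tank' is an example pool name and everything here should be adjusted for your own setup):

    # on the old box: cleanly release the pool
    zpool export tank

    # on the new box (anything with ZFS support): discover and import it
    zpool import          # lists pools found on the attached disks
    zpool import tank     # use 'zpool import -f tank' if the old box died uncleanly
    zpool status tank     # verify all vdevs came back online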
  • tchief - Monday, August 10, 2015 - link

    Synology uses mdadm for their arrays along with ext4 for the filesystem. It's quite simple to move the drives to any hardware that runs Linux, reassemble the array and remount it to recover the data.
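
    For anyone who hasn't done it, the recovery looks roughly like this (a minimal sketch assuming a plain md array with ext4 on top rather than an SHR/LVM layout; device and mount-point names are illustrative):

    # reassemble the md array from its component disks
    mdadm --assemble --scan             # or list members explicitly:
    # mdadm --assemble /dev/md0 /dev/sdb3 /dev/sdc3 ...
    cat /proc/mdstat                    # confirm the array (e.g. md0) came up

    # mount the ext4 filesystem and copy the data off
    mkdir -p /mnt/recovered
    mount /dev/md0 /mnt/recovered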
  • ZeDestructor - Monday, August 10, 2015 - link

    If you virtualize, even the "hardware" becomes portable :)
  • xicaque - Monday, November 23, 2015 - link

    Are you pretty good with FreeNAS? I am not a programmer, and there are things the FreeNAS manual does not address clearly enough for me. I have a few questions that I would like to ask offline. Thanks.
  • thewishy - Tuesday, December 1, 2015 - link

    Agreed. After data corruption following a disk failure on my Synology, it's either a filesystem with checksums or go home.

    Based on those requirements, it's ZFS or Btrfs. ZFS disk expansion isn't ideal, but I can live with it. Btrfs is "getting there" for RAID5/6, but it's not there yet.

    The board chosen for the cost comparison is about 2.5x the price of the CPU (Skylake Pentium) and motherboard (B150) I decided on. Add a PCIe SATA card and life is good.
    Granted, it doesn't support ECC, but nor do a lot of mid-range COTS NAS units.
  • Navvie - Monday, August 10, 2015 - link

    Any NAS or file server which isn't using ZFS is a non-starter for me. Likewise, a review of such a system which doesn't include some ZFS numbers is of little value.

    I appreciate that ZFS is 'new', but people not using it are missing a trick, and AnandTech not covering it is doing a disservice to their readers.

    All IMO of course.
  • tchief - Monday, August 10, 2015 - link

    Until you can expand a vdev without having to double the drive count, ZFS is a non-starter for many NAS appliance users.
  • extide - Monday, August 10, 2015 - link

    You can ... you can add drives one at a time if you really want (although I wouldn't suggest doing that...)
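
    For example (pool and device names below are just placeholders): adding a single disk tacks a non-redundant vdev onto the pool, which is exactly why it isn't recommended; the usual advice is to add a whole mirror or raidz vdev at a time.

    # option 1: grow the pool by one bare disk - the pool gets bigger, but the new
    # vdev has no redundancy (-f overrides the replication-mismatch warning)
    zpool add -f tank ada4

    # option 2 (the usual advice): add a complete mirror (or raidz) vdev instead
    zpool add tank mirror ada4 ada5

    zpool status tank    # inspect the resulting vdev layout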
  • jb510 - Monday, August 10, 2015 - link

    Or one could use Btrfs, which could stand for "better pool resizing" (it doesn't; that's just a joke, people).

    Check out RockStor; it's nowhere near as mature as FreeNAS but it's catching up fast. Personally I'd much rather deal with Linux and Docker containers than BSD and jails.
  • DanNeely - Monday, August 10, 2015 - link

    If there are major gotchas involved, it's a major regression compared to the other alternatives out there.

    I'm currently running WHS2011 + StableBit DrivePool. I initially set up with 2x 3TB drives in mirrored storage (a RAID 1-ish equivalent). About a month ago, my array was almost completely full. Not wanting to spend more than I had to at this point (I intend to have a replacement running by December so I can run in parallel for a few months before WHS is EOL), I slapped an old 1.5TB drive into the server. After adding it to the array and rebalancing, I had an extra 750GB of mirrored storage available; it's not a ton, but it should be plenty to keep the server going until I stand it down. I don't want to lose that level of flexibility, being able to add unmatched drives to my array as needed, with whatever I replace my current setup with.

    If the gotcha is that by adding a single drive I end up with an array that's effectively a two-drive not-RAID1 not-RAID0ed with a single drive, it'd be a larger regression in a feature I know I've needed than I'm comfortable accepting just to gain a bunch of improvements for what amount to what-if scenarios I've never yet encountered.
  • rrinker - Monday, August 10, 2015 - link

    This chassis looks like just the thing to replace my WHS box. I was probably just going to run Server 2012 R2 Essentials and change my StableBit DrivePool over to the standard Server 2012 version. All these NAS boxes and storage systems that everyone seems to go nuts over - none of them I've seen have the flexibility of the pooled storage that the original WHS, and WHS 2011 with DrivePool, have had all along. Of course there are the Windows haters - but my WHS has been chugging along, backing up my other computers, storing my music and movies, playing movies through my media player, and the only time it's been rebooted since I moved to my new house a year and a half ago was when the power went out. It just sits there and runs. One of the best products Microsoft came up with, so of course they killed it. Essentials is the closest thing to what WHS was.

    Replacing a standard mid-tower case with something like this would save a bunch of space. 8 drives, plus a couple of SSDs for the OS drive... just about perfect. I currently have 6 drives plus an OS drive in my WHS, so 8 would give me even more room to grow. I have a mix of 1TB, 2TB, and 3TB drives in there now; with this, up to 8x 4TB, which is a huge leap over what I have now.
  • DanNeely - Monday, August 10, 2015 - link

    At $400 for a (non-education) license, S2012 R2 Essentials is a lot more expensive than I want to go. If I build a new storage server on Windows I'm 99% sure I'll be starting with a standard copy of Win10 for the foundation.
  • kmmatney - Monday, August 10, 2015 - link

    The only thing missing from Windows 10 is the automated backup, which works great on WHS. That's the main thing holding me back from switching from WHS. I had to do a few unexpected bare-metal restores after installing Windows 10 on a few machines, and WHS really came through there. I had several issues restoring, but at the end of the day, it was successful in every instance.
  • kmmatney - Monday, August 10, 2015 - link

    I'm also a WHS 2011 + StableBit DrivePool user. Best of everything - you can add or remove single drives easily, the data is portable and easy to extract if needed, and you can choose what gets mirrored and what doesn't. The initial balancing takes a while, but after that the speed is fine. I'm up to 8 drives now (7 in the drive pool), and can expand to 12 drives with my Corsair Carbide case and a $20 SATA card. I keep an 80GB SSD out of the pool for running a few Minecraft servers. This DIY NAS is interesting, but it would be far cheaper for me to just replace some of my smaller drives with 4TB models if I need more storage.

    Since WHS 2011 is Windows 7 based, it should still last a while - I don't see a need to replace it anytime soon. But my upgrade path will probably be Windows 10 + StableBit DrivePool. Cheap and flexible.
  • DanNeely - Monday, August 10, 2015 - link

    WHS 2011 is a pure consumer product (and based on a server version of Windows, not Win7), meaning it only has a 5-year supported life cycle. After April 2016 it's over, and no more patches will be issued.
  • Navvie - Tuesday, August 11, 2015 - link

    I agree. Not being able to expand vdevs easily is a limitation. But weighing the pros and cons, it's a small price to pay.

    The last time I filled a vdev, I bought more drives and created an additional vdev.
  • BillyONeal - Monday, August 10, 2015 - link

    If you want to pay the premium for hardware that can run Solaris nobody's stopping you.
  • ZeDestructor - Monday, August 10, 2015 - link

    ZFS is available on both FreeBSD and Linux, so it's no more expensive than boring old softraid on Linux.
  • bsd228 - Friday, August 14, 2015 - link

    What premium? I've run Solaris on many Intel and AMD motherboards, but most recently with the HP MicroServer line (34L, 50L, 54L).
  • digitalgriffin - Monday, August 10, 2015 - link

    These are good articles, and for someone with a serious NAS requirement they are useful.
    But 99% of home users don't need a NAS.
    Of the 1% of us that do, only 1% need 8 bays with a $200 case and a slow $400 Intel board. That's the starting price of a serious gaming system, with a motherboard that has at least 6 SATA connections.

    For example, a Cooler Master HAF 912 will hold over 8 drives and is $50.
    6-SATA-port LGA1150 motherboard: $120.
    3.2GHz Core i3 (low-power Y or T version for $130).
    PCIe SATA card: $50.

    Let's see you build a budget system that can:
    Handle 5 drives (boot/cache, plus RAID 6 (two data drives + 2 parity))
    Handle transcoding with a Plex server.
  • ethebubbeth - Monday, August 10, 2015 - link

    Your proposed setup does not support ECC memory, which is essential for any sort of software RAID style configuration. The system in the article does. I would not want to run a NAS without ECC memory unless I were using a hardware RAID card with cache battery backup.
  • brbubba - Monday, August 10, 2015 - link

    This system is quite capable of Plex transcoding; check the CPU benchmark scores. If you want even more power, grab an E3C226D2I and throw in an i7.
  • HideOut - Monday, August 10, 2015 - link

    All this power and still USB 2.0?
  • DanNeely - Monday, August 10, 2015 - link

    It's a 2013 SoC, so no native support on Intel's part. I'm not sure if ASRock deliberately decided not to support it, or just ran out of PCIe lanes. It looks like they should have a few still available, but I might be missing something. The SoC has 16 total; 8 go to the PCIe slot, 2 go to SATA controllers, 3 to LAN controllers, and the GPU is a single-lane PCIe model. That leaves 2 lanes unaccounted for...
  • DanNeely - Monday, August 10, 2015 - link

    Also, it was never intended for use in consumer systems. USB3 primarily matters for backing up a NAS to an external HD (or pulling files off of one); Avoton was intended for higher-end business-class NASes that, whether rackmount or standalone, would primarily be accessed over the network.
  • brbubba - Monday, August 10, 2015 - link

    Glad to see more mainstream sites posting these types of reviews. I was seriously considering the U-NAS boxes, but they aren't exactly what I call mainstream and I have yet to see any US retailers stocking their products.
  • DanNeely - Monday, August 10, 2015 - link

    It appears you can order their cases direct from the manufacturer and pay in USD, so the lack of 3rd party resellers is not a major problem. For my location in the US northeast, they wanted $16.66 to ship the 4bay case. No indication of shipping time was given; so if they don't have a US distribution point they're either shipping slowboat or eating the cost of airmail.
  • Paul357 - Monday, August 10, 2015 - link

    A great system for a NAS/Plex media server. Still, though, I'd wait to see what Denverton brings to the table. If it's even announced this year...
  • bobbozzo - Monday, August 10, 2015 - link

    Hi,

    1. I would like to have seen more discussion about the power supply quality and other possible choices; will most 1U PSUs work, or is cabling going to be a problem? Would an SFX PSU fit?

    2. I didn't notice any mention of noise levels.

    3. Any idea why the VDI performance was poor?

    thanks!
  • mdw9604 - Tuesday, August 11, 2015 - link

    I would like to see an option for redundant power supplies, even if it means a bigger chassis.

    I have a couple of Synology DS1813+ units and like them, but my next NAS will need to be beefier and I will want some enterprise features, so I'm looking at ZFS, redundant power supplies and possibly an iLO/DRAC/remote console card, as it will be located in a data center.

    This one doesn't quite make the cut.
  • xicaque - Monday, November 23, 2015 - link

    Can you elaborate on redundant power supplies? Please? What is their purpose?
  • nxsfan - Tuesday, August 11, 2015 - link

    I have the ASRock C2750D4I + Silverstone DS380, with 8x 3.5" HDDs and one SSD (and 16GB ECC). Your CPU and MB temps seem high, particularly given that (if I understand correctly) you populated the U-NAS with SSDs.

    If lm-sensors is correct, my CPU cores idle around 25 C and get to 50 C under peak load. My MB sits around 41 C. My HDDs range from ~50 C (TOSHIBA MD04ACA500) to ~37 C (WDC WD40EFRX). "Peak" power consumption logged in the last month (obtained from the UPS, so it includes a 24-port switch) was 60 W. Idle is 41 W.

    The hardware itself is great. I virtualize with KVM and the hardware handles multiple VMs plus multiple realtime 1080p H.264 transcodes with aplomb (VC-1 not so much). File transfers saturate my gigabit network, but I am not a power user (i.e. typically only 2-3 active clients).
  • bill.rookard - Tuesday, August 11, 2015 - link

    I really like this unit. Compact. Flexible. Well thought out. Best of all, -affordable-. Putting together a budget media server just became much easier. Now to find a good ITX-based mobo with enough SATA ports to handle the 8 bays...
  • KateH - Tuesday, August 11, 2015 - link

    Another good turnkey solution from ASRock, but I still think they missed a golden opportunity by not making an "ASRack" brand for their NAS units ;)
  • e1jones - Wednesday, August 12, 2015 - link

    It would be great for a Xeon D-15*0 board, but most of the ones I've seen so far only have 6 SATA ports. A little more horsepower to virtualize and run CPU-intensive programs.
  • akula2 - Monday, August 17, 2015 - link

    >A file server can be used for multiple purposes, unlike a dedicated NAS.

    Well, I paused reading right there! What does that mean? You should improve that sentence; it could be quite confusing to novice readers who aspire to buy or build storage systems.

    Next, I don't use Windows on any servers. I never recommend that OS to anyone either, especially when the data is sensitive, be it from a business or personal perspective.

    I use a couple of NAS servers based on OpenIndiana (Solaris-based) and BSD OSes. ZFS can be great if one understands its design goals and philosophy.

    I don't use FreeNAS or NAS boxes such as those from Synology et al. I build the hardware from scratch for greater choice and cost savings. Currently, I'm in the alpha stage of building a large NAS server (200+ TB) based on ZoL (ZFS on Linux). It will take at least two more months of effort to integrate it into my company networks; a few hundred associates based in three nations will work more closely together to improve efficiency and productivity.

    A few more things to share:

    1) Whatever I plan, I look at the power consumption factor (green), especially for power-hungry systems such as servers, workstations, hybrid clusters, NAS servers, etc. Hence, I allocate more funds to address the power demand by deploying solar solutions wherever viable, in order to save good money in the long run.
    2) I mostly go for Hitachi SAS drives, with about 20% SATA III (enterprise segment).
    3) ECC memory is mandatory. No compromise on this one to save some dough.
    4) I moved away from cloud service providers by building my own private cloud (NAS based) to protect my employees' privacy. All employee data should remain in the respective nations. Period.
  • GuizmoPhil - Friday, August 21, 2015 - link

    I built a new server using their 4-bay model (NSC-400) last year. Extremely satisfied.

    Here's the pictures: https://picasaweb.google.com/117887570503925809876...

    Below are the specs:

    CPU: Intel Core i3-4130T
    CPU cooler: Thermolab ITX30 (not shown on the pictures, was upgraded after)
    MOBO: ASUS H87i-PLUS
    RAM: Crucial Ballistix Tactical Low Profile 1.35V XMP 8-8-8-24 (1x4GB)
    SSD: Intel 320 series 80GB SATA 2.5"
    HDD: 4x HGST 4TB CoolSpin 3.5"
    FAN: Gelid 120mm sleeve silent fan (came with the unit)
    PSU: Seasonic SS-350M1U
    CASE: U-NAS NSC-400
    OS: Linux Mint 17.1 x64 (basically Ubuntu 14.04 LTS, but hassle-free)
  • Iozone_guy - Wednesday, September 2, 2015 - link

    I'm struggling to understand the test configuration; there seems to be a disconnect in the results. Almost all of the results show an average latency that looks like a physical spindle, yet the storage is all SSDs. How can the latency be so high? Was there some problem with the setup, such that it wasn't measuring the SSD storage but something else? Could the tester post the sfs_rc file and the sfslog.* and sfsc*.log files, so we can try to sort out what happened?
