If you want a small NAS in a similar form factor, I'd recommend the Helios64 5-bay NAS: https://kobol.io/. It's an arm64 board that runs mainline Armbian, and it comes with 2.5 Gbit networking and a built-in UPS battery.
I don't understand why people who care about security and have linux knowledge would use Synology/QNAP. They are both proprietary, often exposed to the internet, and packed full of so many features that they are consistently full of vulnerabilities (SynoLocker/QLocker etc).
I use synology because I tried many alternatives, and none worked out of the box.
I finally got one (SmartOS; I also tried FreeNAS) working, but I'd used an Intel chip with a time-bomb clock line for the build.
Then, I gave up. 4 hours after the synology was home, I was much farther along than I’d gotten in a month on the other machine.
I'd definitely pay a premium for a supported open-source software + hardware NAS combo that supports Docker, VMs, and offsite client-side encrypted backup (with dedupe/compression) out of the box. Also, I want it to draw < 10 W, excluding disks.
Until then, synology wins, and isn’t a hobby project.
iXsystems, the company that develops FreeNAS (now called TrueNAS), makes their own hardware for it with full support and decent prices. TrueNAS has also come a very long way in the past couple of years; it used to be a bit rough around the edges, but it's now a very solid competitor, especially running on their hardware.
The one potential downside is it's not as beginner friendly as Synology or QNAP UI-wise, but I actually like that about it as I'm not a fan of the UI on either.
The major downside to ZFS-based systems like TrueNAS is that for a home or small business user you can't expand the storage with a new drive when you're running low on space. It's designed for data centers where you can afford to build a whole new array when you need more storage.
With Synology you go "oh, I'm down to 1 TB free, well there's this deal on a 10 TB drive, pop it in, now I have 11 TB free"
Right, 2 drives. Not "a new drive". Now you're buying twice as many disks as you would with a Synology and wasting 50% of your capacity on parity. And you better have set up your initial array with 2 drive vdevs as well or you're going to have a sub-optimal experience.
This is the attitude I see a lot in ZFS support forums. "I don't see the problem, just buy twice as many drives!"
> This is the attitude I see a lot in ZFS support forums. "I don't see the problem, just buy twice as many drives!"
This is incorrect on several levels.
You most certainly can create a vdev with a single drive in it and add it to the zfs pool. So go ahead, buy that single 10TB drive and add it to your pool.
That's not a wise thing to do though, so I don't understand why you'd want to. You'll have no redundancy at all, as soon as the drive dies everything is lost. Which pretty much completely defeats the point of having a NAS. So don't do that. But if you really want to, you can.
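For reference, a minimal sketch of what that looks like (pool and device names are hypothetical):

```shell
# Add a single disk as a new top-level vdev to an existing pool.
# WARNING: this vdev has no redundancy; if this one disk dies,
# the entire pool is lost, not just the data on it.
zpool add tank /dev/sdx

# If the pool's other vdevs are raidz/mirrors, zpool will refuse the
# mismatched replication level unless you force it:
# zpool add -f tank /dev/sdx
```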
I want to add a single drive since I can't afford more than a single drive. But I still want to keep the data security of one or more parity drives. Synology lets me do that. ZFS doesn't.
On a Synology NAS (which just uses Linux mdraid under the hood, so this part isn't exactly proprietary magic), if you have an array with parity (the equivalent of raid-z/z2), you can add a drive, and it expands the array with that one drive, keeping the parity and recalculating it for the new configuration of drives.
So I can go from an array of 3 x 10 TB disks where one is parity (20 TB usable storage), and then just pop in one more disk and now I have an array with 4 x 10 TB disks (30 TB usable storage) with the same one-disk parity. I can lose any one disk, and lose no data.
ZFS can't do that, since it doesn't support modifying vdevs. So if I want to be able to add a single drive and expand my storage at any time while keeping the same level of redundancy, ZFS makes no sense.
Synology's configuration of mdraid+BTRFS makes way more sense than ZFS. Unfortunately they haven't contributed it to free software so nobody else can have it (specifically the part of passing through the parity data so that checksum errors in BTRFS can be fixed with mdraid knowledge). I would prefer to not have to rely on Synology's cost-cutting hardware and raft of probably not very secure software. But for the use case of me and the small businesses I support, ZFS has been a non-starter due to the costs.
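On plain Linux, the equivalent mdraid grow operation looks roughly like this (device and array names hypothetical; a sketch, not a recipe):

```shell
# Add the new disk to the array as a spare, then reshape the
# RAID-5 from 3 members to 4; parity is recalculated online.
mdadm --add /dev/md2 /dev/sde
mdadm --grow /dev/md2 --raid-devices=4

# Watch the reshape progress.
cat /proc/mdstat

# Once the reshape finishes, grow the filesystem on top of the array.
btrfs filesystem resize max /mnt/volume1
```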
> So I can go from an array of 3 x 10 TB disks where one is parity (20 TB usable storage), and then just pop in one more disk and now I have an array with 4 x 10 TB disks (30 TB usable storage) with the same one-disk parity. I can lose any one disk, and lose no data
RAID-5 is fragile. You can lose only one disk as you say, but the odds of a successful rebuild are not so great (assuming you have a NAS for data reliability in the first place).
> expand my storage at any time while keeping the same level of redundancy
But you don't keep the same level of redundancy when adding a drive. The more drives you add in RAID-5, the lower your probability of a successful rebuild after the loss of one drive.
I've seen a lot of articles and blog posts like this, but their numbers never seem to make sense. One claims that reading through a 4-disk 8 TB array gives you only a 15% chance of success. I have full-array BTRFS scrubbing scheduled monthly; according to that math, my array should have reported errors many times a year...
And of course, no matter what, no form of RAID/ZFS is a backup.
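The figures in those articles come from the drives' quoted unrecoverable-read-error (URE) rate, usually 1 per 10^14 bits. A quick back-of-the-envelope check (treating every bit read during rebuild as an independent trial, which is the pessimistic assumption those articles make):

```python
import math

def rebuild_success_probability(data_read_tb, ure_per_bit=1e-14):
    """Probability of reading `data_read_tb` terabytes with zero UREs,
    assuming independent errors at the quoted per-bit rate."""
    bits = data_read_tb * 1e12 * 8
    # log1p keeps precision for the tiny per-bit error rate
    return math.exp(bits * math.log1p(-ure_per_bit))

# Rebuilding a 4-disk RAID-5 of 8 TB drives means reading the
# 3 surviving drives in full: 24 TB.
print(rebuild_success_probability(24))  # ~0.15, the "15%" figure
```

That model reproduces the scary headline numbers, but in practice drives do far better than the worst-case spec-sheet URE rate, which is why monthly scrubs on real arrays don't surface errors anywhere near that often.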
But doesn’t that come back to their point? With syno you pop in a new disk and it rebuilds the array with the new disk and you have more space and the same redundancy? Raid 5/6 whichever
With btrfs, you can add one or however many new devices you want to a filesystem, then rebalance to ensure redundancy across the whole pool. (Note the new device is reformatted as it joins the pool; `btrfs device add` will ask for -f if the device already carries a filesystem.)
It really surprises me that zfs apparently cannot do this.
The main reason I use btrfs is the flexibility. Subvolumes instead of partitions, and easy expandability. Storage should be dynamic, not static.
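Assuming a btrfs filesystem mounted at /mnt/pool (paths and device names hypothetical), the whole procedure is just:

```shell
# Add a new device to the existing filesystem.
btrfs device add /dev/sdx /mnt/pool

# Rebalance so data (and RAID-1 copies) spread across all devices.
btrfs balance start /mnt/pool

# Check how much space each device now holds.
btrfs filesystem usage /mnt/pool
```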
> It really surprises me that zfs apparently cannot do this.
Likewise. I really want to like ZFS, but the 'buy twice the drives or risk your data' approach described above really deters me as a home user.
ZFS has been working on developing raidz expansion for a while now at https://github.com/openzfs/zfs/pull/8853 but I feel that it's a one-man task with no support from the overall project due to that prevailing attitude.
BTRFS is becoming more appealing, even though it has rough edges around the RAID 5/6 write hole (which for my use really isn't a big deal) and around free-space reporting. I can see my home storage array going to BTRFS in the near future.
> The main reason I use btrfs is the flexibility.
I agree, and as a small home user I really like the RAID using different-sized disks, e.g. running RAID 1 on three disks: 2 TB + 4 TB + 6 TB.
It also lets you increase the storage size over time: when a drive fails, you replace it with a larger one.
> last I checked RAID 5/6 is basically asking for data loss with modern drive sizes
This is a debate I would love to see with people who have experience. Since I've seen individuals speak with authority on both sides.
I get that if you have a basic array of disks humming along with a big-ass ext4 partition, once one drive dies, the risk of the other drives being riddled with errors is huge.
But what if your array is both (1) using ZFS or BTRFS (with data checksumming) and (2) has scheduled full-disk data scrubs once a month or so? Wouldn't you catch the initial recoverable errors quick enough?
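Scheduled scrubs are trivial to set up; e.g., a monthly cron entry along these lines (pool names and binary paths are hypothetical and vary by distro):

```shell
# /etc/cron.d/scrub -- read and verify every checksum on the 1st at 03:00,
# repairing from redundancy where possible.
0 3 1 * * root /usr/sbin/zpool scrub tank
0 3 1 * * root /usr/bin/btrfs scrub start /mnt/pool
```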
User of both QNAP devices and one of the iX systems devices here.
ZFS is sexy, but it requires planning and understanding and (as stated by another poster) adding storage in pairs of drives if you want to increase storage incrementally and maintain drive redundancy.
One of the perks of something like a QNAP or a Synology is the support for simply adding a single new drive to an existing RAID5 or RAID6 array, and having the storage box add it transparently while data is migrated to the new, larger RAID array. You pop in another 10TB drive in your RAID6 array and you increase the size of the array 10TB as you'd expect.
Or, if you've finally outgrown your 6-bay device which is full of 3TB drives, you can replace the existing drives with 12TB drives, then once they've all been replaced increase the size of the array to match the new drive sizes. This is done while the device is running and serving data - no downtime, though things may slow down as you would expect during migration operations.
From an end-user perspective this is a very different experience. Yes, FreeNAS/TrueNAS is cool, but I put a Synology at my dad's house.
The scenario you mention can easily be done with ZFS as well. I run a raidz1 and recently migrated from 3x4TB drives to 3x10TB. I bought one drive at a time and gradually expanded the pool. For each new drive I added I simply had to resilver the pool and I was done.
You most certainly can add drives to a zfs pool to expand it.
I've been running zfs on my file servers for ~17 years, have expanded the pool many times. In all that time I've only built a new machine once. Currently still running on my 2009 file server build. I've swapped and added drives to it over the years though.
You can add drives to a ZFS pool, but you need to either replace or add them in massive chunks (or smaller chunks if you're happy buying 2x as many disks as you actually need).
If I want one disk redundancy.
Today I can afford 2 10 TB disks.
Next year I need more than 10 TB capacity and I can afford one more disk.
Two years from now I need another 10 TB capacity and I can afford one more disk.
How can I perform this migration with ZFS? Going from 10 TB - 20 TB - 30 TB of capacity, adding one disk at a time, without losing redundancy.
Or say next year and two years from now 12 TB drives are cheaper. So with (10TB+10TB) + (12TB) + (12TB), Synology will give me 32 TB of usable space and I will have one drive redundancy throughout the whole time.
Honestly curious, this is a real-life situation that me and several of my friends have done with Synology NAS. For this use case, I would love to use cheaper and more performant used hardware, and not have to rely on proprietary software that phones home. ZFS requires upgrading your disks all at once, unRAID has single-disk performance, straight-up Linux BTRFS is "unstable".
> Honestly curious, this is a real-life situation that me and several of my friends have done with Synology NAS.
I guess I don't understand why optimize for the cost of a single drive, above all criteria?
Between this and the other comments, you've mentioned that Synology is over-priced, lower quality, lower performance, proprietary and phones home. Are you really better off vs. building a higher-quality more performant lower-cost ZFS server that's fully open source and has better reliability?
If Synology is higher cost, maybe take that difference in price to buy an extra drive or two?
To me a NAS is all about reliability.
> and I will have one drive redundancy throughout the whole time
Mentioned in the other comment, but that's not a good way of looking at it. What matters is the probability of loss of data while rebuilding the data after one drive has died. The more drives you have in that set, the larger the probability of loss. Your risk is increasing with every drive you add.
If you're comfortable with the large and ever-increasing risk of loss (by adding drives without adding redundancy), then Synology is probably indeed a better match for your use case than ZFS.
I don't really understand why pay for dedicated NAS hardware if reliability isn't priority #1, but that's me.
Personally, for stuff that I care about but not quite that much, I just keep it on the SSD in my laptop. It'll very probably be fine, but there is a risk of loss (same as Synology).
For the things I care deeply about, they go on the ZFS server with tons of redundancy, snapshots and backups. I'd never trust the truly precious data to anything other than ZFS.
> You most certainly can add drives to a zfs pool to expand it.
You can't replace drives with bigger ones and expand the pool. This is important if you have a 4/5/6/8-bay chassis and exactly the same number of drives in the pool.
You can, although you need to replace all the drives in the pool. You can swap one, wait for the resilver (or wait a month, if you're on a budget), then do the next one... And once you've replaced every drive, you turn on autoexpand (or `zpool online -e` each device) and the pool grows into the new capacity.
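A sketch of that replace-and-grow cycle (pool and device names hypothetical):

```shell
# Let the pool grow automatically once all members are bigger.
zpool set autoexpand=on tank

# For each drive, one at a time: physically swap the disk, then
zpool replace tank old-disk new-disk
zpool status tank        # wait for the resilver to finish

# If autoexpand was off during the swaps, expand each device manually:
# zpool online -e tank new-disk
```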
All of their hardware is off the shelf parts, including the case, the motherboard and the drives. I built my own FreeNAS setup using the same components that FreeNAS was selling bundled together at the time. It ended up being about 2/3rds the price.
This is true, but it's more about support. iXSystems provides full testing and support on their hardware, which OP mentioned as a want. DIY is cheaper, especially since you have the option to buy used parts, but the premade systems are actually reasonably priced especially compared to some of the markups on Synology hardware.
Well, I did buy a QNAP TS-419P many years ago. It's still running mainline Debian, that was why I bought it. I would have replaced it with a newer model if the new ones were similarly open, but they're not.
Seriously considering a Helios64, once they get their supply issues resolved.
SHR is just a friendly gui to automatically juggle mdraid arrays to fit when you have different-sized disks (e.g. if you have 2x8 TB disks and 2x10 TB disks, SHR will create one 4-disk 8 TB mdraid array and one 2-disk 2 TB mdraid array and append them to a single volume).
The one proprietary bit Synology has is a way to use mdraid parity to fix checksum errors detected in BTRFS.
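That layering is easy to model: sort the disks, carve them into slices at each distinct size, and put each slice in RAID-5 (or a RAID-1 mirror when only two disks reach that height). A toy calculator, assuming one-disk redundancy (my reconstruction of the scheme, not Synology's actual code):

```python
def shr_usable(sizes_tb):
    """Usable capacity (TB) of a one-disk-redundancy SHR-style layout.
    Models SHR as stacked mdraid slices: each layer spans every disk
    tall enough to reach it, as RAID-5 (n >= 3) or RAID-1 (n == 2)."""
    sizes = sorted(sizes_tb)
    usable, prev = 0, 0
    for i, size in enumerate(sizes):
        layer = size - prev
        if layer == 0:
            continue
        n = len(sizes) - i              # disks contributing to this layer
        if n >= 3:
            usable += layer * (n - 1)   # RAID-5 slice: one disk of parity
        elif n == 2:
            usable += layer             # RAID-1 slice: mirrored pair
        # n == 1: the tallest disk's excess is unused
        prev = size
    return usable

print(shr_usable([8, 8, 10, 10]))    # 26: the 4-disk 8 TB slice + 2 TB mirror
print(shr_usable([10, 10, 12, 12]))  # 32: the case from the comment above
```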
That seems perfect spec-wise. Would you mind giving a quick review of the acoustic characteristics of the case?
I'm looking to move away from a QNAP box, and one of the driving reasons is the horrible "hard-plastic hard-mount everything" design that couldn't amplify hard drive noise any more if they'd done it on purpose.
(The other reasons are that I'd rather manage ZFS myself, and the need for more than gigabit ethernet)
Another suggestion for QNAP owners is to simply replace the firmware with a regular Linux distribution. This is what I’ve done and haven’t looked back.
Is this commonly possible? I know some of their devices can run a normal distribution, but when I looked into this recently I didn't see confirmation for current models.
I have a 4-bay TS-453 Pro (Celeron J1900). I swapped out one of the HDDs for an SSD with the distro install and created a ZFS array with the other three drives.
In my experience, BIOS/EFI comes up if you mash F2 with a HDMI monitor and a USB keyboard and mouse attached. Your mileage may vary.
A few niggly bits: the LCD says “System Starting” until LCDd/lcdmon starts and there is no control over the HDD activity lights. Fan Control is sufficient to quiet the fans to a tolerable level once Smart Fan is disabled in the EFI.
I _desperately_ want something like this, but in a 1U 4-drive form factor. If someone is working on something like this, _please_ let me know. It doesn't even have to be an RK3399 based system, just something that works with a mainline (or near-mainline) linux distro and will host an SMB server & DLNA server.
Why not a 1U Supermicro server? They have options that are short depth or with Atom processors. If you just want a 4 disk 1U server, those would be a good option.
Or is it a question of budget? If that’s the case, what about a used server (like those from UNIXSurplus)?
Or is it a question of power? If that’s the case, then... I don’t quite know in that case.
For other popular, affordable used short-depth 1U servers that you can get with 4 external 3.5" drive bays, there's the Dell R210 II, R220, and maybe R230.
Without getting into questions of possible security implications/perceptions of where servers are designed and manufactured... I do like the simplicity of some of the Supermicro options. I currently have a short-depth 1U Atom-based one, which runs passively-cooled except for the PSU fan, which I've replaced with a soldered-in practically silent Noctua. I intentionally got a mobo without a crazy BMC with IPMI, but I still don't assume the hardware is very trustworthy. It might still be more trustworthy than a popular consumer board.
(BTW, if you're looking at any quiet/cool-running server that uses an Intel Atom C2xxx or some other Atom models, make sure that either it isn't a lemon one, or it has a mitigation. [1][2])
Thanks, that's good to keep in mind. ECC is great, though I personally don't want IPMI. (The only IPMI implementation I looked at so far appeared likely chock full of vulnerabilities, and it was sharing a NIC. There was a jumper that ostensibly disabled the BMC, but didn't appear to disable it fully.)
I mean, power and price are factors, but I haven't seen a hot-swap-drive 1U system that's not big and noisy. I guess part of the problem is that my rack is right next to my desk (noise) and can only support 24" of depth (I'm real drunk and somehow forgot deeper 1U hot-swap servers exist).
Not ARM-based though, but they do have a variant that can host 4 pico-itx boards: http://www.casetronic.com/corporates/42-t1040.html . I gather you may be able to convert that one more easily to fit an ARM board, or RISC-V for that matter.
iStarUSA make some cases that might suit you. I can't directly link to its search results, but they have a requirements selector here: http://www.istarusa.com/en/istarusa/index.php
Seems to be so common with these niche SBCs and accessories. They look so cool, but are unobtainable. People sometimes complain about the over-use of RasPis, but one thing they have going for them is that you can always find them available from many different sources.
I personally have a QNAP NAS because I wanted something cheap. I did not enable all the functions, and I will definitely not enable all the "internet functions".
Many 2-bay NASes cost close to that figure, so it's still a good deal, and you can never have too many bays.
The only two problems with the Helios, which I've been monitoring since last year, are that it doesn't appear to be 100% stable yet (although most problems reported in the forums seem related to excessive clock speed, which can be throttled down), and that it's always out of stock. It's a revolutionary product given the price and features (how many noticed it also has a small UPS on board?), so as soon as a new production batch is ready it sells out almost immediately.
I think the 4-bay size (the 4S) fits me better. I just prefer something taller and wider to something taking up lots of floor space; I don't have the luxury of living in a large flat. Although the 4S is now discontinued?
I don't quite understand "excessive clock speed". Assuming I am only using it for file transfer and nothing more, would it still be a problem? Or is it something to do with the filesystem? I haven't checked whether the default firmware supports something like ZFS or BTRFS.
A small DC li-ion UPS seems much more efficient than an external UPS, which needs an AC inverter (and now you have to decide: spring for pure sine wave?) and heavy lead-acid batteries.
It seems like a gimmick to me. It depends on the use case, I guess. So you have your NAS on a built-in UPS. Now what about the rest of your network? Switch? Router? Modem? You still need an external UPS.
It's mostly a gimmick, but for home/small business users I think it is actually pretty useful.
A mobile power bank-sized battery like this can probably power a NAS like this for at least 10-15 minutes (personal experience messing around with USB-C).
Most home/small business NAS usage is SMB file sharing, and SMB writes are async. Just a minute to sync writes and close the file system safely is huge for most users.
As someone who has supported a small business, just being able to handle the 5 minutes between someone running the coffee maker at the same time as the fridge and then resetting the breaker is huge.