Well, that's appropriate: I'm eyeball-deep in a storage machine build and an article extolling a NAS pops up in my RSS feed. Per usual I read it, and per usual it's full of hardware RAID and simple filesystems with a fancy UI.
There is a time/place for such things but there are important points to keep in mind when building storage, especially if you care about your data being safe and recoverable over time.
Some bullet points:
✅ Hardware RAID only protects you from disk failures
✅ Hardware RAID is NOT portable
✅ Software RAID is portable but ONLY protects you from disk failures
✅ All but a few filesystems suffer from bitrot and this can NOT be avoided
✅ CPU/RAM/Cable/Controller Card (even the built in ones) CAN and WILL fail taking your data with them
✅ btrfs and zfs can keep bitrot at bay, IF properly configured
✅ btrfs failure modes are catastrophic and horrifying
✅ zfs failure modes are catastrophic but are less horrific than btrfs's
✅ ext4 and exFAT are the two most reliable "dumb" filesystems you can pick
✅ ntfs is OK but isn't reliably compatible with anything outside Windows land
✅ hfs[+] (Apple's filesystems) are hot garbage and may the deities come to your aid if you suffer any form of failure
✅ vfat/fat32 are great at rotting from the inside out, slightly less painful than hfs[+] for rot and failures
✅ lvm and lvm2 are handy but wholly useless for dynamic provisioning
✅ software raid + lvm2 + ext4 is NOT a durable solution; lots of moving parts, many failure modes, and not-so-subtle problems lie down this path. bitrot is very real here
✅ zfs and btrfs CAN be used on machines with 1 GB of RAM
✅ zfs and btrfs do NOT suffer bitrot IF configured properly
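To make those zfs bullets concrete, here's a minimal sketch of a checksummed, self-healing two-disk pool. The pool name and device paths are placeholders; substitute your own hardware.

```shell
# Mirror two disks; zfs checksums every block on write
# and verifies on read. Names below are placeholders.
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK_A \
    /dev/disk/by-id/ata-DISK_B

# Periodically walk every block, verifying (and repairing) checksums.
zpool scrub tank

# The READ/WRITE/CKSUM columns show exactly which device is misbehaving.
zpool status -v tank
```

Run the scrub from cron or a systemd timer; it's the "tell me before I need the data" step most setups skip.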
There is a pattern here: if you try to use a tech stack that's not entirely unified (RAID + lvm2 + filesystem folks, I'm looking at you) you're going to have a LOT of moving parts, no data durability (checksums exist in zfs and btrfs at the filesystem level for a reason!), and your failure modes will be entertaining at best. If any one of those three pieces of the stack fails you're into a world of pain recovering that layer and hoping the others remain unaffected.
That, and the bitrot problem: computers are composed of a LOT of hardware components working together to do things. If ANY hardware related to your disk stack (CPU, RAM, disk controller, power cable, data cable, south bridge, north bridge, usb controller, etc.) goes awry, it'll surface higher up the "stack" in the software as some form of failure.
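You can simulate that failure mode yourself: write a file, flip one byte behind the filesystem's back, and notice that only an explicit checksum catches it. This is the check zfs and btrfs run on every block automatically; here's a sketch using plain coreutils:

```shell
set -eu
tmp=$(mktemp -d)

# Write some "precious" data and record its checksum.
printf 'family photos, tax records, etc.\n' > "$tmp/data.bin"
sha256sum "$tmp/data.bin" > "$tmp/data.sha256"

# Flip one byte in place to simulate bitrot. A dumb filesystem
# will happily serve the corrupted file back with no complaint.
printf 'X' | dd of="$tmp/data.bin" bs=1 seek=3 conv=notrunc status=none

# Only an explicit checksum verification notices the damage.
if sha256sum -c "$tmp/data.sha256" >/dev/null 2>&1; then
    result="clean"
else
    result="bitrot detected"
fi
echo "$result"   # prints "bitrot detected"

rm -rf "$tmp"
```

ext4, ntfs, hfs+, and friends keep checksums (if any) on their own metadata only; your file contents get no such verification.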
The traditional RAID + lvm2 + filesystem stack assumes those moving parts generally work and that really only the disks fail. That's a recipe for problems if something other than a disk goes bad along the way. Never mind cascading disk failures... the software/hardware tools will tell you the disks are problematic, but any replacements will immediately go bad too. That's not a fun, fast, or easy process to sort out. Never mind that you're likely to incur subtle bitrot along the way that only a filesystem check will be able to see. It won't be able to correct rot coming from subtly broken (or confused) RAID or lvm2 either. The traditional filesystems aren't structured to recover data that's subtly corrupted at lower layers.
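Spelled out as commands, the traditional stack looks something like this (device names and sizes are placeholders). Each layer carries its own metadata and tooling, and none of them checksums your file contents:

```shell
# Layer 1: software RAID mirror.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Layer 2: LVM on top of the RAID device.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -n data -L 500G vg0

# Layer 3: the filesystem on top of LVM.
mkfs.ext4 /dev/vg0/data

# fsck can verify ext4's own metadata (-n = read-only check),
# but silent corruption in file contents sails through all
# three layers undetected.
fsck.ext4 -n /dev/vg0/data
```

Three layers, three sets of metadata, three independent ways to fail, and zero end-to-end data verification.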
Enter zfs. [Editor's note: btrfs has some of zfs' features but falls short in many ways] ZFS assumes ALL THE THINGS will break. Yeah, it has an "it'll break and it's my job to handle that mess" attitude towards data durability. You get choices: from no durability through wildly durable (think 99.999% reliable). The default is to checksum all the data being read/written to the disk and to tell you if reads/writes or checksums fail. Basically it assumes something WILL go wrong (I promise it will at some point) and tells you about it. If you set up your disks with zfs in a way where >1 copy of the data exists (you can have >1 copy even with a single disk, BTW) it'll fix up the filesystem on the fly and report a checksum error. If there are enough errors on a disk, it'll drop the disk offline so you can investigate. If you have >1 disk you can do some really neat things to avoid disk controller failures, cabling issues and more. ECC RAM also protects you from RAM problems. You've got choices to reduce the pain.
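That single-disk trick is worth a sketch of its own: tell zfs to keep more than one copy and even a one-disk pool can self-heal. Pool, dataset, and device names below are placeholders; note that copies=2 only applies to data written after the property is set.

```shell
# Single-disk pool; still checksums everything by default.
zpool create tank /dev/disk/by-id/ata-ONLY_DISK

# Keep two physical copies of every block in this dataset,
# trading capacity for durability on a single spindle.
zfs create tank/important
zfs set copies=2 tank/important

# On a checksum mismatch, zfs reads the surviving copy, rewrites
# the bad block, and logs the event in the CKSUM column here:
zpool status tank
```

It halves your usable space for that dataset, but it turns "silent corruption" into "logged, repaired, and reported."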
To say zfs is durable and resilient understates its capabilities greatly. Never mind that you can deploy FreeNAS and similar systems pretty cheaply and easily these days. You can even re-purpose an old computer OR a Raspberry Pi 3B+ for zfs storage (notes on the latter coming soon to a post near you).
When you see the whiz-bang, pretty appliance, think about the above. Even a Synology is a shiny layer on top of the traditional software raid + lvm2 + ext4 model. It's not as durable or resilient as they want you to believe. A computer running FreeNAS has better durability and resiliency and will cost you about the same.
Thank you for reading.