But their hardware is also terrible. Their consumer disk stations had 1G NICs until recently, and still have underpowered CPUs. Sales had to decline before they were convinced to upgrade to 2.5G in 2025. But then they removed the optional 10G slot from the 923+ model (they still would have made money on it, as it costs an extra $150), so when the industry moves to 10G you can't upgrade that component and have to buy a whole new unit. The construction is plastic.
I have a 920+, and it's too slow; it frequently becomes unresponsive when multiple tasks are running.
They lag, and need to be constantly forced to improve?
Selling 10 units at $10 profit each can be far, far better than 100 units at $1.50 profit each. Maybe even at $2 per unit.
Why?
Because the more you sell, the more support, sales, and marketing staff you need. More warehouses, shipping logistics, and office space, with everything from cleaners to workstations.
Min/Max theory is exceptionally old, but still valid.
So making a crappier product, with more profit per unit, yet having sales drop somewhat, can mean better profit overall.
There are endless ways to work out optimal pricing vs all of the above.
But... in the end, it was likely just pure, unbridled stupid running the show.
The economic notion is called marginal profitability. More sales are a good thing as long as marginal profit is positive, i.e., each extra unit sold still increases overall profit. So in your example it's still profitable if the new model brings $1.50 profit per unit, and you stop only when the marginal profit per unit turns negative.
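A toy calculation of the same trade-off (the per-unit overhead figures below are invented purely for illustration, not anyone's actual numbers):

    # Toy marginal-profit check: an extra unit is worth selling only while
    # the margin minus the variable cost of supporting that unit stays positive.
    def total_profit(units, margin, variable_cost_per_unit):
        return round(units * (margin - variable_cost_per_unit), 2)

    # If each extra sale only adds $0.50 of support/logistics cost, volume still wins:
    print(total_profit(10, 10.00, 0.50))    #  95.0
    print(total_profit(100, 1.50, 0.50))    # 100.0

    # If overhead per extra unit exceeds the $1.50 margin, marginal profit
    # turns negative and the high-volume plan loses:
    print(total_profit(100, 1.50, 1.60))    # -10.0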
In tech this model is often misleading, since the large investments needed to improve the product are not just a question of current profitability but an existential need. Your existing product line is rapidly becoming obsolete, and even if it's profitable today, it won't be for long. History is full of cautionary tales of companies that hamstrung innovation to avoid competing with their own cash cows, only to be slaughtered by the competition the next sales season. One more for the pile.
> So making a crappier product, with more profit per unit, yet having sales drop somewhat, can mean better profit overall.
This will never work in a competitive market like the NAS market. The only thing that will get you higher profit margins is a good reputation. If you're coasting by on your reputation, sales and customer experience matter. Fewer sales one quarter means fewer people to recommend your product in the next one, which is a downward spiral. A worse customer experience is obviously also a huge problem, as it makes people less likely to recommend your product even if they bought it.
They went for a triple whammy here from which they likely won't recover for years. They now have fewer customers, fewer people who are likely to recommend their product, and their reputation/trustworthiness is also stained long-term.
Crappier products at higher margins only work if you're a no-name brand anyway, have no competition, or have a fanatical customer base.
The appeal for me was the "it just works" factor. It's a compact unit and setup was easy. Every self-built solution would either be rather large (a factor for me) or more difficult to set up. And I think that's what has kept Synology alive for so long: it lets entry-level users get into the self-hosting game with the bare minimum they need, especially when transcoding (Plex/Jellyfin) is mentioned.
As an anecdote, I had exactly this problem when buying my last NAS some time ago. It was the DS920+ and DS923+ vs. the QNAP TS-464. The arguments for QNAP were exactly what you describe: newer chip, 2.5G NICs, PCIe slot, no NVMe vendor lock-in. So I bought the QNAP unit. And returned it 5 days later, because the UI was such hot garbage that I did not want to keep using it.
Lately, the UGreen NAS series looks very promising. I'm hearing only good things about their own system, AND (except for the smallest 2-bay model) you can install TrueNAS. It almost sounds too good to be true: compact, (rather) powerful, and flexible, while still supporting their own OS.
As the next player, though with mixed feelings about their support, the Minisforum N5 units also look promising / near perfect: 3x M.2 for boot + OS, 5 HDD slots, and a low-profile PCIe expansion slot.
I now have a mini PC next to my NAS and leave the NAS to file-storage chores only. That said, I also run Nvidia Shield TV Pro boxes with Kodi for local media, so I largely don't have to worry about encoding.
I bought an inexpensive used Mac Mini and attached a standard HDD USB3 enclosure to it with multiple drives. Works great for streaming to any network appliance I want to use.
I sold my Synology for an AOOStar WTR Max. It arrived with an issue (the USB4 port didn't work), but the replacement was quick and easy. So far, I'm rather happy. I really hesitated between it and the Minisforum.
Yep, I had two different models that had been running for about seven years each and had an excellent experience overall until Synology tried to change their drive policy.
I get all the points about EOL software and ancient hardware, but the fact of the matter is I treat it like an appliance and it works that way. I agree that having better transcoding would be nice. But my needs are not too sophisticated. I mostly just need the storage. In a world with 100+ gig LLM models, my Synology has suddenly become pretty critical.
Hi there, I was looking to get a NAS that I can just install and not have to worry about maintenance too much, and Synology was at the top of the list. If not Synology, what would you suggest?
In my case, Synology has worked fine. Reliability is a big deal for RAID without a separate backup (RAID is not the same as "backup," but it does the trick 90% of the time).
It's entirely possible that their newer units are crappier than the old workhorses I have.
I don't use any of the fancier features that might require a beefier CPU. One of the units runs Surveillance Station, and your choices for generic surveillance DVRs are fairly limited. Synology isn't perfect, but it works quite well and isn't expensive. I have half a dozen types of cameras (I used to write ONVIF stuff), and Surveillance Station runs them all.
Synology's fine - even ideal - for that use case. If you want to run Docker containers, run apps for video conversion like Plex, etc, then you'd likely want to consider something with a beefier CPU. For an appliance NAS, Synology's really pretty great.
I was just mentioning personal experience. It wasn't even an opinion.
I would love to know what a "good deal" is. Seriously. It's about time for me to consider replacing them. Suggestions for a generic surveillance DVR would also be appreciated.
I am not necessarily disagreeing with you, but context is important. I've had a 918+ and a 923+, and the CPU has idled through all my years of NAS-oriented usage.
Originally I planned to also run light containers and servers on it, and for that I can see how one could run out of juice quickly. For that reason I changed my plan and offloaded compute to something better suited. But for NAS usage itself they seem plenty capable and stable (caveat: some people need transcoding at the source, and then some unfortunately tricky research is required, since a more expensive/newer unit isn't automatically better if it lacks the hardware capability).
A significant part of the prosumer NAS market isn't running these exclusively for storage. They usually want a media server like Plex, Emby, or Jellyfin at minimum, and maybe a handful of other apps. It would be better to articulate this market demand as demand for low-power application servers, not strictly storage appliances.
Simplification is the key. My setup went from: Custom NAS hardware running vendor-provided OS and heavyweight media serving software -> Custom NAS hardware running TrueNAS + heavyweight media server -> Custom NAS hardware running Linux + NFS -> Old Junker Dell running Linux + NFS. You keep finding bells and whistles you just don't need and all they do is add complexity to your life.
Not OP, but I went back and forth about having containers etc. on my NAS. I can of course have a separate server do it (and did that), but:
a) it increases energy cost
b) accessing storage over SMB/NFS is not as fast and can lead to performance issues.
c) in terms of workflow, I find having all containers (I use rootless containers with Podman as much as possible) run on the NAS that actually stores and manages the data to be simpler. That means running Plex/Jellyfin, Kometa, paperless-ngx, the *arrs, and Immich on the NAS, and for that Synology's CPUs are not great.
In general, the most common requirements of prosumers for a NAS are 2.5Gbps networking and transcoding. Right now, none of Synology's offerings provide that.
But really the main reason I dislike Synology is that SHR1 is vendor-locked behind their proprietary btrfs modifications and so can only be accessed from a very old Ubuntu...
This kept me from buying one too. One of the models I considered would make me choose between an M.2 cache OR a 10GbE NIC. I didn't know they are plastic now either. It's a shame, I really want to like them. I also heard there is some "bootleg" OS you can install instead of DSM, but I'm not sure what it's called. Synology was trying to silence it, IIRC.
Are there any other NASes out there that a) support ZFS/BTRFS, b) support different-sized drives in a single pool, and c) allow arbitrary in-place drive upgrades?
Last I checked, I couldn't find anything that satisfied all three. So DSM sits in a sweet spot, I think. Plus, plastic or not, Synology hardware just looks great.
There must be more to it, some other explanation, if they are slow. Ten-year-old CPUs were already plenty fast, far more than enough even, to power a NAS device.
My Windows 11 machine often takes many seconds to start some application (Sigil, Excel, whatever), and it sure isn't the fault of the CPU, even if it's "only" a laptop model (albeit a newish one, released December 2023: an Intel Core Ultra 7 155H, 3800 (max 4800) MHz, 16 cores, 22 logical processors).
Whenever software has felt slow over the last decade or more, look at the software first, not the CPU, as the culprit, unless you are really sure it's the workload and the calculations.
You are correct that the software should perform better, but I don't think the average buyer understands this: they buy a new (and sometimes quite expensive) device, yet it feels sluggish to them, so they feel like they bought a bad product.
Another factor related to speed is that they didn't allow using the NVMe slots for a storage pool until recently, and only on new models (on the 920+ you still can't do that; even if they allowed it, the limited PCIe lanes of that CPU would cap the throughput). So a container's database has to be stored on mechanical HDDs.
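For a rough sense of scale, here is a back-of-the-envelope sketch (the assumption that each M.2 slot gets a single PCIe 2.0 lane is mine, not a verified spec; the drive numbers are typical values):

    # Rough bandwidth ceilings in MB/s, assuming one PCIe 2.0 lane per M.2 slot.
    PCIE2_LANE = 500    # PCIe 2.0: 5 GT/s with 8b/10b encoding ~ 500 MB/s per lane
    NVME_DRIVE = 3500   # what a typical PCIe 3.0 x4 NVMe drive can sustain elsewhere
    NAS_HDD = 200       # sequential throughput of a typical NAS HDD

    print(f"NVMe potential left unused: ~{NVME_DRIVE - PCIE2_LANE} MB/s")
    print(f"Still ahead of a single HDD by ~{PCIE2_LANE / NAS_HDD:.1f}x")

Even a lane-starved NVMe pool would still have helped a container's database a lot, mostly because of random-I/O latency rather than raw bandwidth.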
Again, other companies moved on, and I remember there was a lot of community dissatisfaction, and hacks, until they improved the situation.
Their hardware is limited already, and they also artificially limit it further by software.
They have changed course now and allow using any HDD. Will DSM display all the relevant SMART attributes? We will see!
On a DS920+ users will run various containers, Plex/Jellyfin, PiHole, etc. The Celeron J4125 CPU (still used in 2025 in the 2-bay DS225+) is slow for the stuff most users would like to run on a NAS today, and the software runs from the HDDs only. Every other equivalent modern NAS is on an N100 and can use M.2 drives for storage just like HDDs, which makes them significantly more capable.
That depends on the CPU… Some are optimised for power consumption, not performance, and on top of that will end up thermally throttled, as they are often in small boxes with only passive cooling.
A cheap or intentionally low-power Arm SoC from back then is not going to perform nearly as well as a good or more performance-oriented Arm SoC (or equivalent x86/a64 chip) from back then. They might not cope well with 2.5Gb networking unless the NICs support offloading, and if the vendor is cheaping out on CPUs they probably don't have high-spec network controller chips either. And that is before considering that some people talk to the NAS via a VPN endpoint running on the NAS, so there is the CPU load of that on top.
For sort-of-relevant anecdata: my home router ran on a Pi400 for a couple of years (the old device developed issues, and the Pi400 was sitting around waiting for a task, so it got a USB NIC and that job), but it got replaced when I upgraded to a full-fibre connection, because its CPU was a bottleneck at those speeds just for basic routing tasks (IIRC the limit was somewhere around 250Mbit/s). Some of the bottleneck I experienced would have been the CPU load of servicing the USB NIC, not just the routing, of course.
> far more than enough even, to power an NAS device.
People are using these for much more than just network attached storage, and they are sold as being capable of that extra, so it isn't like people are being entirely unreasonable in their expectations. PiHole, VPN servers, full media servers (doing much more work than just serving the stored data), etc.
> There must be more than that, another explanation
Most likely this too. Small memory. Slow memory. Old SoC (or individual controllers) with slow interconnect between processing cores and IO controllers. There could be a collection of bottlenecks to run into as soon as you try to do more than just serve plain files at ~1Gbit speeds.
The Synology DS925+, for example, does not have GPU encoding. For an expensive, prosumer-positioned NAS this is crazy. They can't let us have both 2.5Gb NICs and a GPU.