
I have an unRAID homelab. It's really awesome in the sense that it lets the home user incrementally add compute and capacity over time without having to do it all in one go, and it doesn't require nearly the fussing over that my prior Linux server did.

I started mine with a spare NUC and some portable USB drives, and it's grown into a beast: over 100TB spread across a high-performance, SSD-backed ZFS pool and an unRAID array, 24 cores, running about 20 containers and a few VMs without breaking a sweat, and so far (knock on wood) zero data loss.

All for a couple hundred dollars every so often over the years.

One performance trick it supports is overlaying fast SSD storage over the array: writes land on the SSD cache first and are periodically moved onto the slower underlying disks. It's transparent, so when you write to the array you can easily get several hundred MB/sec, and the data automatically migrates to the warm storage later. I have two fast SSDs RAIDed there and easily saturate the network link when writing.
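
For anyone curious, the mechanism is roughly this: writes hit a fast cache share, and a scheduled "mover" job later relocates everything onto the array while preserving the directory layout, so the merged view never changes. Here's a minimal Python sketch of that idea; the paths and function name are hypothetical, and unRAID's real mover handles a lot more (open files, hard links, per-share cache policies).

    import os
    import shutil

    # Hypothetical mount points; unRAID exposes the cache pool and the
    # array disks under paths like these, merged into one user share view.
    CACHE_ROOT = "/mnt/cache/share"
    ARRAY_ROOT = "/mnt/disk1/share"

    def move_cache_to_array(cache_root: str = CACHE_ROOT,
                            array_root: str = ARRAY_ROOT) -> None:
        """Relocate everything from the SSD cache to the array,
        preserving relative paths so the merged view is unchanged."""
        for dirpath, _dirnames, filenames in os.walk(cache_root):
            rel = os.path.relpath(dirpath, cache_root)
            dest_dir = os.path.join(array_root, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dest_dir, name)
                # shutil.move copies across filesystems and then unlinks
                # the source, freeing the SSD for the next burst of writes.
                shutil.move(src, dst)

    if __name__ == "__main__":
        move_cache_to_array()

Run on a schedule (the real mover defaults to a daily run), this is why bursty writes see SSD speeds while the bulk of the data still lives on cheap spinning disks.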

The server basically maintains itself; I only go in every so often and bump the Docker containers at this point. But I also know that I can add another disk to it in about 10 minutes and for a couple hundred bucks.



> The server basically maintains itself; I only go in every so often and bump the Docker containers at this point. But I also know that I can add another disk to it in about 10 minutes and for a couple hundred bucks.

Yes. UnRAID rightfully gets a lot of attention for its flexibility in upgrading with disks of any size (which feels like magic), but for me its current >100-day uptime while maintaining an UnRAID array, three VMs, and a few other services is just as important. The only maintenance I do is occasionally looking at notifications and, every month (if that often), upgrading plugins/Docker containers to new versions.



