
HDDs are the backbone of my homelab since storage capacity is my top priority. With performance already constrained by gigabit Ethernet and WiFi, high-speed drives aren’t essential. HDDs can easily stream 8K video with bandwidth to spare while also handling tasks like running Elasticsearch without issue. In my opinion, HDDs are vastly underrated.
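
Back-of-envelope sketch of why the disk isn't the bottleneck here (the bitrate and throughput figures below are assumptions, not measurements):

    # Rough capacity check: how many 8K streams fit on one HDD vs one gigabit link?
    # All figures are ballpark assumptions.
    STREAM_MBPS = 100        # assumed bitrate of a compressed 8K stream, megabits/s
    HDD_SEQ_MBPS = 180 * 8   # assumed HDD sequential throughput: ~180 MB/s, in megabits/s
    GBE_MBPS = 1000          # gigabit Ethernet line rate, megabits/s

    streams_hdd = HDD_SEQ_MBPS // STREAM_MBPS
    streams_net = GBE_MBPS // STREAM_MBPS
    print(f"disk feeds ~{streams_hdd} streams, network caps it at ~{streams_net}")
    # The gigabit link, not the HDD, runs out of headroom first.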


I run a hybrid setup which has worked well for me: HDDs in the NAS for high-capacity, decent-speed persistent storage with ZFS for redundancy, and low-capacity SSDs in the VM/container hosts for speed and reliability.


Same, I run my containers and VMs off of 1TB of internal SSD storage within a Proxmox mini PC (with an additional 512GB internal SSD for booting Proxmox). Booting VMs off of SSD is super quick, so it's the best of both worlds really.


Yes, those workloads are mostly sequential I/O, which HDDs can still handle well. Most of my usage is heavily parallel random I/O, like software development and compiles.
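
Rough numbers on why that hurts on spinning disks (the file count and IOPS figures are assumptions for illustration):

    # Toy model: cold-reading a source tree is random small-file I/O,
    # so it is bound by IOPS, not sequential bandwidth.
    FILES = 50_000      # small files touched by a large build (assumed)
    HDD_IOPS = 150      # random reads/s for a 7200rpm disk (assumed)
    SSD_IOPS = 50_000   # random reads/s for a modest SATA SSD (assumed)

    for name, iops in [("HDD", HDD_IOPS), ("SSD", SSD_IOPS)]:
        print(f"{name}: ~{FILES / iops:.0f}s just to read the files")
    # HDD: ~333s vs SSD: ~1s, before the compiler does any work.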

You also have the option of using ZFS with SSDs as an L2ARC read cache and as a SLOG device for the ZIL (which speeds up synchronous writes), to potentially get the best of both worlds, as long as your disk access patterns yield a decent cache hit rate.
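
The hit-rate caveat is the whole game; a quick sketch of effective read latency with an SSD cache in front of an HDD pool (the latency figures are assumptions):

    # effective latency = hit_rate * ssd_latency + (1 - hit_rate) * hdd_latency
    SSD_MS = 0.1    # assumed SSD read latency, ms
    HDD_MS = 10.0   # assumed HDD seek + read latency, ms

    for hit_rate in (0.5, 0.9, 0.99):
        eff = hit_rate * SSD_MS + (1 - hit_rate) * HDD_MS
        print(f"hit rate {hit_rate:.0%}: ~{eff:.2f} ms per read")
    # Even at 90% hits the misses dominate (~1 ms average), so L2ARC only
    # pays off once the hot working set actually fits in it.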


I do something similar for my primary storage appliance, which has 28TB usable. It has 32GB of system RAM, so I push as much into the ARC cache as possible without the whole thing toppling over; roughly 85%. I only need it for an NFS endpoint. It's pretty zippy for frequently accessed files.
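
For reference, the arithmetic behind that ARC ceiling (on ZFS-on-Linux it would go into the zfs_arc_max module parameter, in bytes; the 85% is just my choice of headroom):

    # ~85% of 32 GiB of RAM, as a byte value for OpenZFS's zfs_arc_max (Linux).
    RAM_GIB = 32
    ARC_FRACTION = 0.85

    arc_max_bytes = int(RAM_GIB * (1 << 30) * ARC_FRACTION)
    print(arc_max_bytes)                               # ~29205777612 bytes (~27.2 GiB)
    print(f"options zfs zfs_arc_max={arc_max_bytes}")  # line for /etc/modprobe.d/zfs.conf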



