I find it very acceptable on the local servers that run my mail, web, Mastodon, etc. I prefer the configuration in Kubernetes templates instead of having it spread all over my filesystem in /etc (config in one place and fucking unit files in another), /var/lib, /opt, etc.
Right, you've acquired the taste. But why should people who haven't acquired the taste do this? To me it seems like a mountain of unneeded complexity compared to just running a local webserver directly in a docker container with `docker run --restart always <image>`.
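For reference, the rough k8s equivalent of that one-liner is a manifest like the one below, applied with `kubectl apply -f web.yaml`. The names and image are placeholders, not anything from this thread:

```yaml
# Roughly what `docker run --restart always <image>` becomes in k8s:
# a Deployment keeps one replica of the pod running and restarts it on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # stand-in for whatever image you'd have passed to `docker run`
          ports:
            - containerPort: 80
```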
Because when your docker server gets smoked, what happens? If you're using Swarm or Nomad or k8s, there's an answer to that. `docker run` doesn't have one.
I've very recently moved over to a home k3s cluster--a couple of old desktops and some relatively new ARM SBCs with NVMe slots. I don't use it at work, at least not directly, but it's been the least-painful solution I've found for running stuff. Once you understand the model (not trivial, but not crazy) and find the headspace for why things in k8s are the way they are, reasoning through it is pretty easy--which is really just the story of any moderately complex system, exacerbated in a few ways by some Google-attitude carryover. And I can yank half the machines out of the cluster before an application fails; a sketch of the kind of spec that makes that possible is below.
(Both Swarm and Nomad are fine tools, but the lack of easy resources to get up to speed led to me failing them out of consideration.)
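To make the "yank half the machines out" claim concrete: run more than one replica and let the scheduler spread them across nodes, so pods just get rescheduled when a node disappears. Names and the image here are placeholders, not my actual setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  replicas: 3                     # enough copies that losing a node or two doesn't take the app down
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread replicas across distinct nodes
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: some-app
      containers:
        - name: some-app
          image: ghcr.io/example/some-app:latest   # placeholder image
```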
> Because when your docker server gets smoked, what happens?
In the context of local or home servers: my Raspberry Pi has something like a three-year uptime at this point, but if it randomly croaked I'd just buy a new one, docker pull the image, and get it up and running again.
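Concretely, the whole recovery story can live in one compose file kept somewhere safe; on the replacement Pi it's just `docker compose up -d`. The image, port, and paths here are placeholders:

```yaml
# docker-compose.yml -- placeholder service; assumes app state lives in a
# bind-mounted directory that gets backed up separately.
services:
  web:
    image: ghcr.io/example/my-app:latest   # the image you'd `docker pull` again
    restart: always                        # same behaviour as --restart always
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
```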
So you are saying the main benefit of k8s is that if some of your hardware in the cluster dies then your services are automatically still up and running? Isn't a home k8s setup vulnerable to the same failure as well if the master node gets smoked?
Seems like a lot of complexity to pay for something that happens extremely rarely, at least in a home setup (I could see how it would be useful for high-reliability cloud systems).
- I have three control-plane nodes, not one, for about 20 x86-64 and 24 arm64 cores and ~160GB of RAM across 7 nodes. For much the same reason my NAS uses raidz2: I don't have to drop everything and fix the universe when something fails, because Home Assistant just keeps going.
- With Longhorn, I have three replicas of every data volume in the cluster (sketched after this list). Longhorn also provides incremental data backups to an S3 server--Minio on the NAS, in my case, which replicates offsite.
- I can run things like CrunchyData's Postgres operator, which provides a solid baseline setup and additional features like point-in-time restore (also sketched after this list)--and since I run personal but public-facing apps on the cluster, this is beneficial.
- Logging and monitoring are centralized and were easy to set up. I don't have to attach to a container; I just go look at Grafana.
- I have a consistent interface for working with all my applications, and I like it more than Swarm's by a lot.
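On the Longhorn point above: the replica count is just a StorageClass parameter, so every volume provisioned from it gets three copies spread across nodes. This is a sketch from memory of the standard Longhorn StorageClass fields (check the Longhorn docs for exact settings); the S3 backup target itself is configured in Longhorn's settings rather than here.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-3replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"          # three copies of every volume, on different nodes
  staleReplicaTimeout: "2880"    # minutes before a failed replica is given up on
```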
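And on the Postgres operator point: the whole database, instances plus pgBackRest backups, is declared as one custom resource. This is a rough sketch of the v5 PostgresCluster shape from memory, with placeholder names and sizes; treat the exact field names as assumptions and check CrunchyData's docs.

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: homelab-db            # placeholder name
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      replicas: 1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
  backups:
    pgbackrest:               # pgBackRest repo backing point-in-time restore
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 5Gi
```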
By having this cluster, made out of spare computers, some cheap NVMe drives, and a couple of SBCs I wanted to experiment with anyway, I've removed everything I personally care about from cloud providers except for offsite backup and my email sending. I know where it is, I can hack on projects trivially, and I know it'll be there tomorrow so long as the house doesn't burn down.
The benefit to me is that if one of my nodes physically dies, I have enough room on my cluster to still run all my pods; they just automagically reschedule to another node.
If my master dies then I have to do a little extra work, but not much.
Turns out that to run the stuff I run, I need more than one machine :). Luckily, a couple of years back my job was dissolving a department and had a stack of unneeded NUCs headed for recycling (pretty nice ones; I've added a little RAM but that's it), so I tossed them in a box and took them home instead. I've also added a couple of Pi/ARM nodes just for kicks.