We used to deploy by manually ssh-ing into each of our dozens of nodes, back when we were just a handful of developers: git pull, restart the service.
Then we got to hundreds of nodes. Chef, chef, and more chef. Deploys were typically a chef-client run triggered via knife ssh (well, a wrapper around that for retries). With dozens of services and many dozens of engineers, this worked well enough.
Then we got to thousands of nodes. And hundreds of developers working on a multitude of services.
We've adopted k8s. It has been a lot of work, but the deploy story is wonderful. We make a PR, and between BuildKite and ArgoCD we can manage canary nodes, full rollouts, rollbacks, etc. We can make config or code changes easily, monitor the rollout easily, and revert at any time. I still don't _like_ k8s, mind you - I don't think programming with templates and YAML is a good thing. But I've come to terms with that being the best we will have for now.
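For anyone curious what that canary/rollback flow looks like declaratively: here's a minimal sketch of a canary spec using Argo Rollouts (which is typically what drives the traffic shifting, with ArgoCD syncing the manifests from the PR). Service name and image are hypothetical.

```yaml
# Sketch of an Argo Rollouts canary strategy; names/image are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-service
  strategy:
    canary:
      steps:
        - setWeight: 20   # send 20% of traffic to the new version
        - pause: {}       # wait here for manual promotion (or roll back)
        - setWeight: 100  # full rollout
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:v2
```

Merging a PR that bumps the image tag kicks off the canary; aborting the rollout (or reverting the PR) rolls traffic back to the stable version.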
We deploy small clusters everywhere in the same pattern; I love ArgoCD. This article fails to understand the use case for Kubernetes, and arguably doesn't fully understand the cloud.
Kubernetes is revolutionary, to think it's not is foolish.