I'm in the same camp. I think a lot of these anti-k8s articles are written by software developers who haven't really been exposed to the world of SRE and mostly think in terms of web servers.
A few years ago I joined a startup where everything (including the db) was running on a single, non-backed-up, non-reproducible VM. In the process of "productionizing" it, I ran into a lot of open questions: How do we handle deploys with potentially updated system dependencies? Where should we store secrets (not in the repo)? How do we manage/deploy cronjobs? How do internal services communicate? All things a dedicated SRE team managed in my previous role.
GKE offered a solution to each of those problems while still allowing me to focus on application development. There have definitely been some growing pains (prematurely trying to run our infra on ephemeral nodes), but for the most part it's provided a solid foundation without much effort.
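For the cronjob and secrets questions above, Kubernetes does give you first-class objects rather than ad-hoc crontabs and files on a VM. A minimal sketch, with illustrative names, a placeholder image, and a placeholder credential (in practice you'd inject the secret value from CI or a secrets manager, never commit it):

```yaml
# Hypothetical nightly job reading credentials from a Secret.
apiVersion: v1
kind: Secret
metadata:
  name: report-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me        # placeholder; inject the real value out-of-band
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"         # 03:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: registry.example.com/report:latest   # placeholder image
            envFrom:
            - secretRef:
                name: report-credentials                # exposes DB_PASSWORD as an env var
```

The same pattern answers the "updated system dependencies" question too: the dependencies live in the image, so a deploy is just a new image tag.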
Exactly, all these articles seem to come from operational novices who think in terms of 1-2 click solutions. K8s is not a 1-2 click solution, and clearly isn't designed to be; it's solving particular tough operational problems, and if you don't know those problems exist in the first place, you won't really be able to evaluate these kinds of tools properly.
If a group literally doesn't have the need to answer questions like the ones you posed, then OK, don't bother with these tools. But that's all that needs to be said - no need for a new article every week on it.
> it's solving particular tough operational problems that if you don't know exist in the first place
They probably don't exist for the majority of people using it. We are using k8s for when we need to scale, but at the moment we have a handful of customers and that isn't changing any time soon.
As soon as you go down the road of actually doing infrastructure-as-code, using (not running) k8s is probably as good as any other solution, and arguably better than most when you grow into anything complex.
Most of the complaints rest on a false equivalence: "running k8s is harder than just using AWS, which I already know." Of course it is. You don't manage AWS. How big do you think their code base is?
If you don't know k8s already, and you're a start-up looking for a niche, maybe now isn't the time to learn k8s, at least not from the business point of view (personal growth, another issue).
But when you do know k8s, it makes a lot of sense to just rent a cluster and put your app there, because when you want to build better tests, it's easy; when you want to do zero trust, it's easy; when you want to integrate with Vault, it's easy; when you want to encrypt, it's easy; when you want to add a mesh for tracing, metrics and maybe auth, it's easy.
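The "rent a cluster and put your app there" part really is small once the image exists. A minimal sketch, assuming an already-built image (names, image, and ports are placeholders):

```yaml
# Hypothetical web app: a Deployment for the pods, a Service in front of them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                       # two pods behind the service
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                        # routes to the pods labeled above
  ports:
  - port: 80
    targetPort: 8080
```

Everything else listed (mesh, Vault, tracing) layers on top of manifests like these without the app itself changing.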
What's not easy is inheriting a similarly done product that's entirely bespoke.
> Of course it is. You don't manage AWS. How big do you think their code base is?
This seems like a fairly unreasonable comparison. The reason I pay AWS is so that I _do not_ have to manage it. The last thing I want to do is then layer a system on top that I do have to manage.
The problem is hidden assumptions. This happens a lot with microservices too. People write about the problems they're solving somewhat vaguely, and other people read it and, because of that vagueness, think it's also the best solution to their problem.
They are engineers, writing about what they do and what their companies sell. I wouldn't ascribe an "imposition" to that!
As a practitioner or manager, you need to make informed choices. Deploying a technology and spending the company's money on the whim of some developer is an example of immaturity.