Hacker News — aceBacker's comments

A lot? I mean, it's not Rust, but it's not that bad.


Rust gives me pause because it seems like yet another golden-hammer language with bad developer ergonomics.


It's definitely an upgrade over Python. I remember learning Go, seeing "type Foo struct { ... }", and being sure this meant the language had algebraic data types and pattern matching and all that fun stuff. It didn't. It was fine.


Agree to disagree


Kubernetes isn't that complicated, it's distributed computing that is complicated. Kubernetes makes it as simple as it can be without oversimplifying it.


Just wanted to chime in and hard agree on this. I remember the world where people were trying to build things like Kubernetes before it existed. There was a period when enough people had left Google post-IPO to realize they needed something like Borg, but nothing like that really existed outside of elgoog.

Twitter had Aurora, then elsewhere Mesos and Mesosphere popped up, and it seemed like there were going to be about 100 different frameworks until Kubernetes dropped and basically ate the industry alive. K8s feels pretty stable from where I'm sitting.


Not to mention all the various in-house systems of the 2010 era. I worked at a small place back in 2013 that had built their own container platform about a year before Docker was released.

It was more like Workqueue than Borg. But it worked well enough. Eventually we got tired of maintaining our own snowflake scheduler and switched to Kubernetes.

But I'm also old enough to remember pre-Borg systems used in HPC: Maui, TORQUE, etc. I wasn't surprised batch scheduling finally got used for service deployment.


Even in the post docker world people were trying to build container platforms on docker that resembled other things but had their own issues. I definitely had to maintain one of those for three years.


Same. And my previous employer still runs the "thing" we built.


Kubernetes is kind of stable, except for the relentless deprecation and introduction of features. E.g. Ingress going from v1beta1 to v1 and dropping all support for the v1beta1 definitions, service accounts no longer having tokens automatically associated with them, etc.

It really makes for constant churn where there's a good chance something breaks if you update Kubernetes. Either that, or you pick a version and stick with it for two years while it's supported, then jump to the new one and fix all the breakage.
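To make the Ingress churn concrete, here's a sketch of the kind of rewrite the v1beta1-to-v1 migration forced (host and service names are made up). The old backend fields became a nested service object, and pathType became required:

```yaml
# Old (networking.k8s.io/v1beta1, removed in Kubernetes 1.22):
#   backend:
#     serviceName: web
#     servicePort: 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Every Ingress manifest in a cluster needed this rewrite before upgrading past 1.21.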


Running a microservices architecture on a shared cluster is complicated.

* Because you are running multiple workloads on the same kernel, you need to protect them from each other, from both correctness and performance perspectives.

* Because the workloads have different scaling characteristics, you need to solve a bin-packing problem to make efficient use of the resources.

* Because of said bin-packing, workloads move around a lot, so you invite a service discovery problem much more intense than classic DNS is meant to solve.

* Because normal software only knows how to use DNS, you invite the need for sidecars and virtual network overlays.

When you have this set of problems then k8s seems appropriate. I work at a company with this set of problems but no Kubernetes, and boy is it a production. But my takeaway from that is less "use Kubernetes" and more "try not to have those problems." Use relatively monolithic architectures, one or a few services on their own stable pools of nodes, with boring old reverse-proxy load balancers and DNS, for as long as you possibly can.
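The bin-packing problem mentioned above can be sketched in a few lines: a greedy first-fit-decreasing heuristic placing CPU requests onto nodes. Workload names and numbers are made up, and real schedulers weigh far more than CPU (memory, affinity, spread, taints), but this is the core shape of the problem:

```python
def first_fit_decreasing(workloads, node_capacity):
    """Place CPU requests onto as few nodes as a greedy pass manages.

    workloads: dict of name -> CPU request
    Returns (placements: name -> node index, number of nodes used).
    """
    nodes = []          # remaining capacity on each node
    placements = {}
    # Largest requests first: the classic first-fit-decreasing ordering.
    for name, cpu in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(nodes):
            if cpu <= free:
                nodes[i] -= cpu
                placements[name] = i
                break
        else:
            # No existing node fits; open a new one.
            nodes.append(node_capacity - cpu)
            placements[name] = len(nodes) - 1
    return placements, len(nodes)

workloads = {"api": 2.0, "worker": 3.0, "cache": 1.5, "cron": 0.5}
placements, node_count = first_fit_decreasing(workloads, node_capacity=4.0)
# Fits 7.0 CPUs of requests onto two 4.0-CPU nodes.
```

When workloads scale independently, these placements keep changing, which is exactly what creates the service-discovery problem in the next bullet.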


I have come at this problem from a bit of a different angle, by asking how close I can possibly get to the hypothetical dream state where everything is automated, autoscaling, blah blah blah, with the absolute smallest budget, in terms of not only actual costs but time as well.

I only know the GCP ecosystem kind of well, so I don't fully know to what extent these things exist in AWS and Azure, but I think there is a really nice path you can get on with the serverless route that skips K8s entirely while keeping you very well aligned in case you ever need to "upgrade" or get out of the GCP ecosystem.

I write very stock standard gRPC services and then put them onto Cloud Run (which has a very Heroku like workflow) and stick https://cloud.google.com/api-gateway in front of things and now my API is running on the exact same setup as any other service Google is running in production. Huge amounts of logic get moved out of my code base as a result.
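As a rough illustration of the API Gateway setup described here (the HTTP/OpenAPI flavor; a gRPC backend uses a separate gRPC service config instead, and the URL and names below are placeholders):

```yaml
# Hedged sketch of an API Gateway spec fronting a Cloud Run service.
swagger: "2.0"
info:
  title: my-api
  version: "1.0.0"
paths:
  /v1/items:
    get:
      operationId: listItems
      x-google-backend:
        address: https://my-service-abc123-uc.a.run.app
      responses:
        "200":
          description: OK
```

The x-google-backend extension is what routes the gateway to the Cloud Run URL; auth, quotas, and API keys then become gateway configuration rather than application code.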

If you are also willing to write your APIs in a fairly particular way (https://google.aip.dev/), it starts to become trivial to integrate other things like https://cloud.google.com/workflows, https://cloud.google.com/pubsub and https://cloud.google.com/tasks, which is traditionally where a lot of the "state" and weirdly complicated logic lived in my code. I'm now not really writing any of that.

Now it's all declarative where I just say what I want to happen and I don't have to think about much else beyond that because it too is using that same internal GCP infrastructure to handle all the complicated parts around what to do when things go wrong.

But to me they are all extremely heavily aligned with the K8s path, so the lock-in certainly doesn't feel as scary.

You still have to ask the question: why not just deploy the monolith to a VM and move on until you need to think about anything else?

For me it's not JUST an investment in the future; there are very real immediate benefits: the huge amount of code I don't have to write and all the insanely advanced stuff I get for free.

To give a quick example of each, though: on the code side, I don't think about things like authN, authZ, retry logic, health checking, most security outside of things like input validation, logging, tracing, etc. All of that is now just a configuration setting for me.

Then on the advanced features side, just to give one example: if I set a security policy saying this service can do the following actions on another service, and I don't end up using most of those permissions, it will automatically notify me and help me rewrite the security policy to cover only the things I use in practice. It can even help me test that new policy to ensure things don't break... That was previously basically an entire person's job to find and fix, usually manually, because the amount of code needed to automate it is painful to think about. Now it's just another feature I pick up for free.


K8s is trying to do distributed computing on a low-level OS platform that was never developed or designed for this purpose. Look at things like plan9 to see how much easier and more elegant it could be. Sure, plan9 doesn't provide k8s' autoscaling and provisioning features out of the box but the basic building blocks are all there. (And no, Linux is nowhere close to providing these patterns because custom user-space implementations of kernel-side features are only made possible on a totally ad-hoc basis.)


You have to keep wondering: if Plan 9 is so great, why does nobody really use it? Is it a conspiracy, or is it that Plan 9 is not that great after all?


Everybody uses Plan9, it's called "The Web" now.


> why nobody uses it really

If I were you I would stop using arguments like that.

OpenVMS still has the best clustering tech... why is "nobody" using it? And btw, Plan 9 is very much used (the 9P protocol in WSL, for example).

Linux is for sure not the best kernel: not for servers, not for desktop, and not for mobile... it's good enough and everyone circles around it; it has some bacon for everyone (even if the bacon is sometimes disgusting).

https://en.wikipedia.org/wiki/9P_(protocol)

Same with Unix itself... it's good enough.

Some years ago you could have argued that everyone uses MySQL... why even bother with that PostgreSQL thing?


You have to admit that it's tuned towards stateless and load-balanced workloads, though. Not every workload fits into the k8s box. I like k8s, but for everything else it can make things overly complicated.


Having worked on the system it's based on: no, it could be much simpler.


What about kubernetes' implementation makes it more complex than necessary for delivering distributed computing? Genuine question.


For me it is how state is mixed with configuration and how it encourages various operators to just go in and add their own metadata all over objects.


I don't know if I would agree with that. Using Kubernetes on a good day is not complicated, but it does have a lot of moving parts. There are many different flavors and distributions of Kubernetes, and a lot of concepts that can prevent your workloads from running (system pods taking up too many resources, affinity, persistent storage, kubelet/etcd/apiserver cyclic dependencies, x509 certificates). An actually HA Kubernetes deployment needs an external load balancer for no reason at all (kubelet could talk to multiple apiservers like all "distributed system" software does, but that's still not implemented).


To understand how difficult distributed computing is, at the container level of abstraction, one has to try Docker swarm.


> Kubernetes makes it as simple as it can be without oversimplifying it.

What about something like Hashicorp Nomad and Docker Swarm? The popularity of the former and the maintenance status of the latter aside, I think they achieve much of the same as Kubernetes in ways that are simpler, which is enough for the majority of deployments out there.

For example, most pieces of software that run in containers have a docker-compose.yml file which can then be fed into Docker Compose to launch an environment on a single node, say for local testing and development, or to just explore a piece of software in a throwaway environment. What Docker Swarm does is take basically the same specification and add the ability to run containers across multiple nodes, do networking across them in a reasonably secure way, whilst being able to set up resource limitations etc. as needed, as well as scale the containers across the available nodes and manage storage with volumes, bind mounts, or even plugins for something like GlusterFS or just NFS.

Docker Swarm doesn't concern itself with the concept of Pods because you don't always need those - regular containers can be enough without the additional abstraction in the middle. Docker Swarm doesn't concern itself with the concept of a Service, since you can just access containers based on their names through the built in DNS abstraction, especially if you don't need complicated network isolation (which you can also achieve at server level). Docker Swarm doesn't really care about an Ingress abstraction either, since you can just make your own Nginx/Caddy/Apache container and bind it to ports 80 and 443 on all of the nodes where you want to have your own ingress. No PersistentVolume and PersistentVolumeClaim abstractions either, since the aforementioned bind mounts and volumes, or network storage are usually enough. And the resource usage and API are exceedingly simple, you don't even need to worry about service labels or anything like that, since in most cases you'll only care about the service name to access the container through.
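To illustrate the "same specification" point above, here is a minimal compose file of the kind described (service and image names are made up) that works both with plain Docker Compose on one node and, via its deploy section, with a swarm:

```yaml
version: "3.8"
services:
  web:
    image: registry.example.com/web:1.0
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Run it locally with `docker compose up`, or across a swarm with `docker stack deploy -c docker-compose.yml mystack`; the web container reaches the database simply as hostname `db` through the built-in DNS, with no Service or Ingress objects involved.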

If I built my own container orchestrator, I'd strive for that simplicity. Seems like projects like CapRover also recognized that: https://caprover.com/ Same with Dokku: https://dokku.com/

If you're in a company that has never really run advanced A/B tests or doesn't really need complex two-stage DB migrations, blue-green deployments, circuit breaking, and other fancy stuff, there's not that much use in going with Kubernetes, unless you really just want to hire for it and also pay someone else to manage it for you.

Personally, with tools like Kompose https://kompose.io/ I'd advise that you start with Docker and Docker Compose at first (locally, or for dev environments) and then branch out to Docker Swarm or Nomad, before eventually migrating over to Kubernetes, if you need to, maybe with something like K3s or K0s clusters at first. Or maybe even Portainer/Rancher, since those make the learning curve of Kubernetes far more tolerable. Or, at the risk of increasing the complexity of your deployments, go with Helm as well because the templates for Deployments and other objects that Helm creates by default are surprisingly useful in avoiding YAML hell.

Of course, some say that Docker Swarm is dead and you shouldn't ever touch it, which is why I mention Nomad (even though HCL is a little bit odd at times), which is also great, with the added quality of supporting non-container deployments. Either way, ideally look for a way to run apps in containers because they feel like the "right" abstraction, whilst finding the best fit for the features that you actually need vs the complexity introduced.

In short: containers are pretty good, running them has lots of options, although some are indeed complicated. Kubernetes is not the simplest way to run them, in my experience.


What do you think about secret volumes like kubernetes does? https://kubernetes.io/docs/concepts/configuration/secret/#co...
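For readers unfamiliar with the feature being asked about, a minimal sketch of a Secret mounted as a volume looks roughly like this (names and values are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
stringData:
  api-key: "not-a-real-key"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials
```

The container then reads the key as the file /etc/creds/api-key instead of taking it through an environment variable.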


Heh, my corp locks down the Edge updates and bundles them with the OS updates. Edge is going to be vulnerable to this one for months, maybe a year, longer than Chrome.


Where I work they lock down Chrome updates as well.


I think the answer is the same reason the male brain is larger than the female brain on average: the part of the brain that handles fighting. It's a pretty large part of our brains.

The reptilian hindbrain that handles violence isn't needed nearly as much now that we have society and civilization. It's shrinking.


In that case, should we expect to find more shrinking among human male brains than among human female brains?


With Nx, if you have a package that is imported into another, can you have nodemon rebuild just the imported package? I have a heavy-handed fix for Yarn workspaces, but it is hacky. I'd prefer a better solution if Nx can do that.
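For context, the heavy-handed approach being asked about might look something like this hypothetical nodemon.json, assuming a workspace library named `shared` under libs/ (all names are made up):

```json
{
  "watch": ["libs/shared/src"],
  "ext": "ts,json",
  "exec": "nx build shared"
}
```

Newer Nx versions also ship a native watch command that can serve the same purpose, along the lines of `nx watch --projects=shared -- nx build shared`, which avoids pulling in nodemon at all.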


I don't think it's as bad as that.

You can still create a local account and log in to your PC without a callback to Microsoft.


And that way you get the username you want too, instead of the first five letters of your name/email.

"Johnathan Smith" => "johna" == yuck.


Robert Martin is the dude that said the only acceptable coverage target is 100%. Of course, he also said you shouldn't plan to actually hit that target, just get as close as possible without having to test things that don't matter, like frameworks.

Kent Beck recently went through and clarified TDD and he definitely doesn't advocate 100%. https://youtube.com/playlist?list=PLlmVY7qtgT_lkbrk9iZNizp97...


A defense of temporary insanity seems reasonable to me. Otherwise they're going to have to establish another motive, and I bet any other motive will still leave reasonable doubt.


Someone who can take something confusing they've written and refactor it into something easy to read and modify has a skill worth more than most compsci skills, because that person can usually also suffer through hours of googling to find a math solution to a problem.

