Borg, Omega, Kubernetes: Lessons learned over a decade (2016) (acm.org)
73 points by mlerner on Jan 7, 2023 | 12 comments


Discussed at the time:

Borg, Omega, Kubernetes: Lessons learned from container management over a decade - https://news.ycombinator.com/item?id=11216020 - March 2016 (24 comments)


Too bad the current headline deviates from the other two submissions, since "Borg, Omega, Kubernetes" is arguably more illustrative of the contents than "3 rando container management systems no one would have heard of".


That previous title maxed out HN's limit of 80 chars, and the new title needed to squeeze the year in as well. But I figured out a way ;)


Question for those running kubernetes/nomad for dev/alpha/beta/prod:

- are changes promoted and pushed automatically from dev to beta to prod?

- or is there an internal ticketing/admin system that collects requests to promote and install new container versions, which are only run through k8s etc. once approved?

This is one aspect of deployment I don't see discussed much in the context of containers/orchestration and the rest.


In our case:

- We use ArgoCD to manage k8s deployments.

- Every new build can be pushed to dev or QA by a trivial operation, a commit to a repo. It does not happen automatically but would be trivial to make it so. We have more than one environment (dev, integration, QA), and plan to be able to spin up more on demand.

- A release is cut at a particular commit of the main branch, or, in rare cases, from a release-hotfix branch. After thorough testing, it is trivially pushed to prod, again, by making one commit and pushing it.

- In the same way, everything can be rolled back by reverting a commit and pushing to the repo (a sketch of the ArgoCD wiring is below).
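
A minimal sketch of how that commit-driven flow might be wired up (the repo URL, paths, and names here are hypothetical, not the poster's actual setup): one ArgoCD Application per environment points at a path in the GitOps repo, so promoting or rolling back is just a commit that changes what that path contains.

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: myapp-prod              # hypothetical application name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://github.com/example/deploy-config.git   # hypothetical GitOps repo
      targetRevision: main
      path: envs/prod             # manifests (or kustomize/helm) for the prod environment
    destination:
      server: https://kubernetes.default.svc
      namespace: myapp
    syncPolicy:
      automated:                  # sync whenever the repo changes; rollback = revert commit
        prune: true
        selfHeal: true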

At one of my previous jobs, we used Spinnaker, and it was even more streamlined: pick the right container versions from a GUI.


Yeah, we used ArgoCD with our k8s cluster at my previous job too.

And all the merges were done on GitHub.


In our company, pushes from dev to integration to the QA cluster are automated: at each stage tests are run, and if all of them pass, the build is automatically pushed to the next stage. From QA to staging and production, it is a manual push.
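
As a sketch of what one automated hop in such a pipeline might look like (assuming a GitOps repo with per-environment kustomize overlays; the script names and paths are hypothetical): run the tests for the current stage and, only if they pass, commit a bump of the next stage's image tag.

  # Hypothetical CI workflow: promote to QA only if the integration tests pass.
  name: promote-to-qa
  on:
    push:
      branches: [main]
  jobs:
    test-and-promote:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Run integration tests
          run: ./scripts/integration-tests.sh      # hypothetical test script
        - name: Bump the QA overlay to this build  # only reached if tests passed
          run: |
            TAG=$(git rev-parse --short HEAD)
            sed -i "s/newTag: .*/newTag: ${TAG}/" envs/qa/kustomization.yaml
            git config user.name ci-bot
            git config user.email ci-bot@example.com
            git commit -am "promote ${TAG} to qa"
            git push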


Automated deployments from test to canary to production, done region by region. They happen when the operating team is ready for a new release.


We CD to dev and cut releases to prod. We do both using ArgoCD.

This is really an aspect of your CD system rather than Kubernetes, though.


And I'd like to know who has set up automated feature-branch deployments on test servers? I liked it when using some hosting platforms for small side projects, but I've never used it on large company ones.


The one really unique thing about Kubernetes is its approach of unified resource definitions and a unified API for everything. It is very unlike anything I've encountered so far. I think it's possible to build some kind of universal API engine on the same principles; it feels like untapped potential. It's like creating new database entities by defining CRDs and getting the whole set of CRUD operations automatically, and building your microservices as something like k8s controllers working in a loop to adjust your system to the spec.
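
A minimal sketch of that "CRDs as database entities" idea, using a hypothetical Widget resource (the group, kind, and fields are made up for illustration): applying this single manifest gives you storage, schema validation, and the full create/read/update/delete/watch API from the apiserver.

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      kind: Widget
      singular: widget
      plural: widgets
    versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  replicas:
                    type: integer
                  message:
                    type: string

After applying that, kubectl get/create/update/delete and watch all work on widgets with no server code written; the controller loop (watching Widget objects and driving the cluster toward each object's spec) is the only part you still write yourself.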


I'm not sure I'd say it's untapped. I know many places abuse this and use k8s as a database and framework for manipulating resource state. I think the loop-to-spec approach is rather primitive and hard to debug when you have a lot of loops trying to converge on their own myopic vision of the state.

The API is well organized and powerful, that's for sure.



