
I'm new to the Crossplane ecosystem. Today I suddenly noticed my OpenTofu provider pod failing. Upon further investigation, it turns out the version I'm using has been moved behind a paywall.

I'm surprised this type of dark practice is now accepted in the industry. I'm sad.


OP here. We at Low-ops.com needed a way for users to get started with their private instances with minimal friction. Since it is an application platform that comes preloaded with a diverse set of services, a wildcard (public) subdomain is needed.

Tech-wise it is a simple Next.js app that uses AWS Route53 as its data store and source of state.
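
For the curious, here is a minimal sketch of how a wildcard record could be provisioned with boto3, assuming a hosted zone for the public domain. The zone ID, domain, and function name are made up for illustration; this is not the actual Low-ops implementation.

    # Hedged sketch only: one way to UPSERT a wildcard A record in Route53.
    # Zone ID, domain, and target IP are placeholders.
    import boto3

    route53 = boto3.client("route53")

    def provision_wildcard(subdomain: str, target_ip: str) -> None:
        route53.change_resource_record_sets(
            HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone
            ChangeBatch={
                "Comment": f"wildcard for {subdomain}",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": f"*.{subdomain}.example.com.",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": target_ip}],
                    },
                }],
            },
        )
        # Existing records can be read back with list_resource_record_sets,
        # which is presumably what "Route53 as store and state" refers to.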

I’m happy to answer any question you might have.


I'm working on an Internal Developer Platform for private clouds, kind of like a private Heroku. It works standalone and installs fully automatically, with a primary focus on low operations and self-service, so app developers can focus on delivering real business value instead of boilerplate tasks or waiting for other teams to plan and execute standard tasks.

We originally started by supporting a low-code solution called Mendix. Now we support any type of web app that can be packaged as an OCI image.

You can read more or try it at: https://low-ops.com


A bit more info from the CEO: https://redis.io/blog/agplv3/

Sounds like SSPL did not yield the desired outcome.

Glad AGPL is an option now.


It did yield the desired outcome: Google and AWS are no longer using it.


If by desired outcome you mean splitting the developer community and then chasing them away to a newly forked competitor that is now widely used by all the cloud providers and by users who prefer open source, then: complete success!

But I doubt that was the outcome they hoped for. They created a large and successful competitor that, by nature of being a fork, does exactly the same thing. It's pretty hard to compete with yourself and to differentiate from your own product.

Honestly, I think Redis Inc. was better off when there was just one code base. AGPL just marginalizes them further. It's not an acceptable license for many corporate legal departments, so it necessitates buying a commercial license for such companies: Fortune 500 companies, public companies, and pretty much anything with a legal department worthy of the name. Note how Redis advertises AGPLv3 as "one" of the available licenses. The whole point of that license is selling commercial licenses.

Valkey is at this point stable, supported, and a drop-in replacement. It's pretty much the default choice for anyone not interested in buying a commercial license. That genie isn't going back in the bottle with this license choice.

More importantly, Valkey is a pretty active GitHub project with dozens of contributors in the last month, more than double those in Redis. Those commits aren't going to Redis. And Redis still requires the right to re-license your commits if you try to contribute; that's how they were able to pull this stunt to begin with. I doubt a lot of the Valkey contributors will be moving back to that status quo.


Not sure how the hyperscale clouds switching to a compatible/competing project from which Redis Labs gets no value or contribution is a win.


How did you come to that conclusion? GCP is still offering Memorystore for Redis, Valkey and Memcached. https://cloud.google.com/memorystore


Unfortunately, commercial companies tend to work behind closed doors and walled gardens. We are working on Low-Ops.com and would love to fully support Next.js. Self-hosting is the future :)


Great initiative! As someone who learned to swim relatively late (in my thirties), I think more people should be able to swim.

I was expecting to see a map or list of places/cities to find out whether it’s safe to swim. But it seems that is not part of this initiative?


Being able to swim at a basic level seems like a fairly basic safety thing but I expect a fair number of people who grew up in places where swimming wasn't commonplace never learned. I went to school someplace that required passing a swim test or at least taking a class but don't know how rigidly that was enforced.


I’m baffled to see so many anti-k8s sentiments on HN. Is it because most commenters are developers used to services like Heroku, Fly.io, Render, etc., or who run their apps on VMs?


I think some are just pretty sick and tired of the explosion of needless complexity we've seen in the last decade or so in software, and rightly so. This is an industry-wide problem of deeply misaligned incentives (& some amount of ZIRP gold rush), not specific to this particular case - if this one is even a good example of this to begin with.

Honestly, as it stands, I think we'd be seen as pretty useless craftsmen in any other field due to an unhealthy obsession with our tooling and meta-work - consistently throwing any kind of sensible resource usage out of the window in favor of just getting to work with certain tooling. It's some kind of a "Temporarily embarrassed FAANG engineer" situation.


Fair point, but I think the key distinction here is unnecessary complexity versus necessary complexity. Are zero-downtime deployments and load balancing unnecessary? Perhaps for a personal project, but for any company with a consistent userbase I'd argue they are non-negotiable, or should be anyway. In a situation where that is the expectation, k8s seems like the simplest answer, or near enough to it.


There are many ways to do deployments without downtime, and load balancing is easy to configure without k8s.
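
To illustrate (not this commenter's setup, just one hedged example): a blue/green swap with the Docker SDK for Python, where the new container is health-checked before the old one is retired. The image name, ports, and /healthz path are placeholders, and the actual traffic flip would live in whatever reverse proxy sits in front.

    # Illustrative sketch only: blue/green deploy without k8s.
    # Image, ports, and health-check path are placeholders.
    import time
    import docker      # pip install docker
    import requests

    client = docker.from_env()

    def deploy(image: str, old_name: str, new_name: str, new_port: int) -> None:
        new = client.containers.run(
            image, name=new_name, detach=True,
            ports={"8000/tcp": new_port},   # app assumed to listen on 8000
        )
        # Wait until the new instance answers its health check.
        for _ in range(30):
            try:
                if requests.get(f"http://localhost:{new_port}/healthz", timeout=1).ok:
                    break
            except requests.ConnectionError:
                pass
            time.sleep(1)
        else:
            new.stop()
            new.remove()
            raise RuntimeError("new version never became healthy")
        # Point the reverse proxy (nginx upstream, etc.) at new_port here,
        # then retire the old container.
        old = client.containers.get(old_name)
        old.stop()
        old.remove()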


I agree with this somewhat. The other day I was driving home and saw a sprinkler head that had broken on the side of the road and was spraying water everywhere. It made me think: why aren't sprinkler systems designed with HA in mind? Why aren't there dual water lines with dual sprinkler heads everywhere, with an electronic component that detects a break in a line and automatically switches to the backup water line? It's because the downside of the water spraying everywhere and the grass becoming unhealthy or dying is less than what it would cost to deploy it HA.

In the software/tech industry it's commonplace to just accept that your app can't be down for any amount of time, no matter what. No one checks how much more it would cost (engineering time & infra costs) to deploy the app so it would be HA, so no one checks whether it would be worth it.

I blame this logic on the low interest rates for a decade. I could be wrong.


This week we had a few minutes of downtime on an internal service because of a node rotation that triggered an alert. The responding engineer started to put together a plan to make the service HA (which would have tripled the cost to serve). I asked how frequently the service went down and how many people would be inconvenienced if it did. They didn't know, but when we checked the metrics it had single-digit minutes of downtime this year and fewer than a dozen daily users. We bumped the threshold on the alert to longer than it takes for a pod to be re-scheduled and resolved the ticket.


This is the most sensible thing I’ve read on here in a while. Engineers’ obsession with tinkering and perfection is the slow death of many startups. If you’re doing something important like banking or air traffic control, fair enough, but a CRUD app for booking hair appointments will survive a bit of downtime.


You assume that the teams running these systems achieve acceptable uptime, and that companies aren't issuing refunds for missed uptime targets when contracts enforce them, or losing customers. There is definitely a vision for HA at many companies, but they are struggling with and without k8s.


Why would wanting redundancy be a ZIRP? Is blaming everything on ZIRP like Mercury was in retrograde but for economics dorks?


It depends on the cost of complexity you're adding. Adding another database or whatever is really not that complex so yeah sure, go for it.

But a lot of companies are building distributed systems purely because they want this ultra-low downtime. Distributed systems are HARD. You get an entire set of problems you don't get otherwise, and the complexity explodes.

Often, in my opinion, this is not justified. Saving a few minutes of downtime in exchange for making your application orders of magnitude more complex is just not worth it.

Distributed systems solve distributed problems. They're overkill if you just want better uptime or crisis recovery. You can do that with a monolith and a database and get 99.99% of the way there. That's good enough.


Redundancy, like most engineering choices, is a cost/benefit tradeoff. If the costs are distorted, the result of the tradeoff study will be distorted from the decisions that would be made in "more normal" times.


Because the company overhired to the point where people were sitting around dreaming up useless features just to justify their workday.


> It's some kind of a "Temporarily embarrassed FAANG engineer" situation.

FAANG engineers made the same mistake, too, even though the analogy implies comparative competency or value.


Any software engineer who thinks k8s is complex shouldn’t be a software engineer. It’s really not that hard to manage.


I think the key word is “needless” in terms of complexity. There are a lot of k8s projects that could probably benefit from a simpler orchestration system, especially at smaller firms.


For me it was DC/OS with Marathon and Mesos! It worked, it was a tank, and its model was simple. There were also some nice third-party open source systems around Mesos that were also simple to use. Unfortunately Kube won.

While Nomad can be interesting, again it's a single "smallish" vendor pushing an "open" source project (see the debacle with Terraform).


do you have a simpler orchestration system you'd recommend?



How is it simpler?


Every time I read about Nomad, I wonder the same. I swear I'm not trolling here, I honestly don't get how running Nomad is simpler than Kubernetes. Especially considering that there are substantially more resources and help on Kubernetes than Nomad.


Well, for starters, you don't have to have your apps containerized to work with Nomad (though it can handle containers as well as executables).

But for some deeper details, I'd suggest checking out the comments in this reddit thread[0] (as well as some of the linked articles therein).

E.g., from a comment by /u/Golden_Age_Fallacy: A great use of Nomad is to reduce the burden of on-boarding a team (or teams) of developers who are unfamiliar with cloud-native deployments/systems (even containers!).

Nomad jobspecs are very simple and straightforward, compared to the complexity and pure option overload you get in k8s and Helm.

From /u/neutralized: It's much easier to use than k8s. Easy to setup, easy to manage, much more shallow learning curve. Nothing super fancy. Just works. I migrated a startup I was at off of a self-managed k8s setup to Nomad a few years ago and they've never looked back.

From /u/esity: My team is currently building out a fully automated Nomad cluster service offering internally (Fortune 10).

It's super awesome. Easy. Little headache. Integrates with consul and vault. We are literally planning to replace thousands of vms for K8s with nomad. Containers are faster, more resilient and writing hcl is actually fun once you learn it

Now, there is a rather more lengthy comment, by /u/thomasbuchinger, that goes through the pros and cons he experienced in trying Nomad out and his conclusion is that, while he wouldn't discourage anyone from using it, "k3s and a few well-known simple projects give you 80% of Nomands [sic] features. Are as easy to operate, afford you more options in the future and have a ton of documentation/tutorials...available."

There are more comments in the thread and again links to a bunch of blogposts/articles/etc., including one from fly.io that seemed pretty detailed, discussing the Googly origins of both k8s and Nomad (fly.io used Nomad but found that it wasn't the best fit for them, which is also discussed in their post -- actually, I'm going to put the link to their post below[1], since I think it is worthwhile).

Hope all this helps.

[0] https://old.reddit.com/r/devops/comments/11nsxo3/opinions_on...

[1] https://fly.io/blog/carving-the-scheduler-out-of-our-orchest...


No, it just looks and feels like enterprisey SOAP XML.


For me personally, I get a little bit salty about it due to imagined, theoretical business needs of being multi-cloud, or being able to deploy on-prem someday if needed. It's tough to explain just how much longer it'll take, how much more expertise is required, how much more fragile it'll be, and how much more money it'll take to build out on Kubernetes instead of your AWS deployment model of choice (VM images on EC2, or Elastic Beanstalk, or ECS / Fargate, or Lambda).

I don't want to set up or maintain my own ELK stack, or Prometheus. Or wrestle with CNI plugins. Or Kafka. Or high availability Postgres. Or Argo. Or Helm. Or control plane upgrades. I can get up and running with the AWS equivalent almost immediately, with almost no maintenance, and usually with linear costs starting near zero. I can solve business problems so, so much faster and more efficiently. It's the difference between me being able to blow away expectations and my whole team being quarters behind.

That said, when there is a genuine multi-cloud or on-prem requirement, I wouldn't want to do it with anything other than k8s. And it's probably not as bad if you do actually work at a company big enough to have a lot of skilled engineers that understand k8s--that just hasn't been the case anywhere I've worked.


Genuine question: how are you handling load balancing, log aggregation, failure restart + readiness checks, deployment pipelines, and machine maintenance schedules with these “simple” setups?

Because as annoying as getting the Prometheus + Loki + Tempo + Promtail stack going on k8s is, I don’t really believe that writing it all from scratch is easier.


* Load balancing is handled pretty well by ALBs, and there are integrations with ECS autoscaling for health checks and similar

* Log aggregation happens out of the box with CloudWatch Logs and CloudWatch Log Insights. It's configurable if you want different behavior

* On ECS, you configure a "service" which describes how many instances of a "task" you want to keep running at a given time. It's the abstraction that handles spinning up new tasks when one fails

* ECS supports readiness checks, and (as noted above) integrates with ALB so that requests don't get sent to containers until they pass

* Machine maintenance schedules are non-existent if you use ECS / Fargate, or at least they're abstracted from you. As long as your application is built such that it can spin up a new task to replace your old one, it's something that will happen automatically when AWS decommissions the hardware it's running on. If you're using ECS without Fargate, it's as simple as changing the autoscaling group to use a newer AMI. By default, this won't replace all of the old instances, but will use the new AMI when spinning up new instances

But again, though: the biggest selling point is the lack of maintenance / babysitting. If you set up your stack using ECS / Fargate and an ALB five years ago, it's still working, and you've probably done almost nothing to keep it that way.
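
For a concrete sense of how little wiring this takes, here's a rough boto3 sketch of registering such a service behind an ALB target group. All names, ARNs, subnets, and ports are made up, not the parent's actual setup.

    # Rough sketch: register an ECS/Fargate service behind an ALB target group.
    # Cluster, task definition, ARNs, subnets, and ports are all placeholders.
    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="my-cluster",
        serviceName="web",
        taskDefinition="web-task:3",
        desiredCount=2,                    # ECS keeps two tasks running
        launchType="FARGATE",
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/web/abc123",
            "containerName": "web",
            "containerPort": 8080,
        }],
        healthCheckGracePeriodSeconds=60,  # no traffic until the ALB check passes
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-aaa", "subnet-bbb"],
                "securityGroups": ["sg-ccc"],
                "assignPublicIp": "DISABLED",
            }
        },
    )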

You might be able to do the same with Kubernetes, but your control plane will be out of date, your OSes will have many missed security updates. Might even need a major version update to the next LTS. Prometheus, Loki, Tempo, Promtail will be behind. Your helm charts will be revisions behind. Newer ones might depend on newer apiVersions that your control plane won't support until you update it. And don't forget to update your CNI plugin across your cluster, too.

It's at least one full time job just keeping all that stuff working and up-to-date. And it takes a lot more know-how than just ECS and ALB.


It seems like you are comparing ECS to a self-managed Kubernetes cluster. Wouldn't it make more sense to compare to EKS or another managed Kubernetes offering? Many of your points don't apply in that case, especially around updates.


A managed Kubernetes offering removes only some of the pain and adds more in other areas. You're still on the hook for updating whatever add-ons you're using, though yes, how painful that is depends on how many you're using and how well your cloud provider handles them.

Most of my managed Kubernetes experience is through Amazon's EKS, and the pain I remember included frustration with the supported Kubernetes versions lagging behind upstream, lack of visibility for troubleshooting control nodes, and having to explain/understand delays in NIC and EBS allocation/attachment for pods. Also, the ALB ingress controller was something I needed to install and maintain independently (though that may be different now).

Though that was also without us going neck-deep into being vendor agnostic. Using EKS just for the Kubernetes abstractions without trying hard to be vendor agnostic is valid--it's just not what I was comparing above because it was usually that specific business requirement that steered us toward Kubernetes in the first place.

If you ARE using EKS with the intention of keeping as much as possible vendor agnostic, that's also valid, but then now you're including a lot of the stuff I complained about in my other comment: your own metrics stack, your own logging stack, your own alarm stack, your own CNI configuration, etc.


(Apologies for the snark, someone else made a short snarky comment that I felt was also wrong and I thought this thread was in reply to them before I typed it out -- thank you for the reply)

- ALBs -- yeah this is correct. However ALBs have much longer startup/health check times than Envoy/Traefik

- CloudWatch - this is true; however, the "configurable" behavior makes CloudWatch trash out of the box. You get, e.g., exceptions split across multiple log entries with the default configuration

- ECS tasks - yep, but the failure behavior of tasks is horrible because there are no notifications out of the box (you can configure them)

- Fargate does allow you to avoid maintenance, but it has some very hairy edges: e.g., you can't use any container that expects to know its own IP address on a private VPC without writing a custom script (a rough sketch of such a script follows this list). Networking in general is pretty arcane on Fargate, and you're going to have to manually write and maintain workarounds for all of this
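
For what it's worth, that "custom script" usually boils down to asking the task metadata endpoint. A hedged sketch follows; the v4 endpoint env var is the documented one, but the exact JSON shape can vary by platform version.

    # Rough sketch: discover this Fargate task's private IP via the
    # ECS task metadata endpoint (v4). JSON shape may differ slightly.
    import os
    import requests

    def my_private_ip() -> str:
        base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]  # injected by ECS
        task = requests.get(f"{base}/task", timeout=2).json()
        # With awsvpc networking, each container lists its ENI addresses.
        return task["Containers"][0]["Networks"][0]["IPv4Addresses"][0]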

> You might be able to do the same with Kubernetes, but your control plane will be out of date, your OSes will have many missed security updates. Might even need a major version update to the next LTS. Prometheus, Loki, Tempo, Promtail will be behind. Your helm charts will be revisions behind. Newer ones might depend on newer apiVersions that your control plane won't support until you update it. And don't forget to update your CNI plugin across your cluster, too.

I think maybe you haven't used K8s in years. Karpenter, EKS, and a GitOps tool (Flux or Argo) give you the same machine-maintenance feeling as ECS, but on K8s, without any of the annoyances of dealing with ECS. All your app versions can be pinned or set to follow latest, as you prefer. You get rolling updates each time you switch machines (same as ECS, and if you really want to, you can run on top of Fargate).

By contrast, if your ECS/Fargate task fails, you haven't mentioned any notifications in your list -- so if you forgot to configure and test those correctly, your ECS service could legitimately be stuck on a version of your app code that is 3 years old, and you might not know unless you've inspected the correct part of Amazon's arcane interface.

By the way, you're paying per use for all of this.

At the end of the day, I think modern Kubernetes is strictly simpler, cheaper, and better than ECS/Fargate out of the box and has the benefit of not needing to rely on 20 other AWS specific services that each have their own unique ways of failing and running a bill up if you forget to do "that one simple thing everyone who uses this niche service should know".


ECS+Fargate does give you zero maintenance, both in theory and in practice. As someone who runs k8s at home and manages two clusters at work, I still recommend our teams use ECS+Fargate+ALB when it satisfies their requirements for stateless apps, and they all love it because it is literally zero maintenance, unlike the k8s setup you just described.

Sure, there are a lot of great features in k8s that ECS cannot match, but when ECS does satisfy the requirements, it will require less maintenance, no matter what kind of k8s you compare it against.


Depending on use case specifics, Elastic Beanstalk can do that just fine.


He named the services. Go read about them.


I’m not sure which services you think were named that solve the problems I mentioned, but none were. You’re welcome to go read about them, I do this for a living.


I think you're just used to AWS services and don't see the complexity there. I tried running some stateful services on ECS once and it took me hours to have something _not_ working. In Kubernetes it takes me literally minutes to achieve the same task (+ automatic chart updates with renovatebot).


I'm not saying there's no complexity. It exists, and there are skills to be learned, but once you have the skills, it's not that hard.

Obviously that part's not different from Kubernetes, but here's the part that is: maintenance and upgrades are either completely out of my scope or absolutely minimal. On ECS, it might involve switching to a more recently built AMI every six months or so. AWS is famously good about not making backward incompatible changes to their APIs, so for the most part, things just keep working.

And don't forget you'll need a lot of those AWS skills to run Kubernetes on AWS, too. If you're lucky, you'll get simple use cases working without them. But once PVCs aren't getting mounted, or pods are stuck waiting because you ran out of ENI slots on the box, or requests are timing out somewhere between your ALB and your pods, you're going to be digging into the layer between AWS and Kubernetes to troubleshoot those things.

I run Kubernetes at home for my home lab, and it's not zero maintenance. It takes care and feeding, troubleshooting, and resolution to keep things working over the long term. And that's for my incredibly simple use cases (single node clusters with no shared virtualized network, no virtualized storage, no centralized logs or metrics). I've been in charge of much more involved ones at work and the complexity ceiling is almost unbounded. Running a distributed, scalable container orchestration platform is a lot more involved than piggy backing on ECS (or Lambda).


I hear a lot of comments that sound like people who used K8s years ago and not since. The clouds have made K8s management stupidly simple at this point; you can absolutely get up and running immediately, with no worry about upgrades, on a modern provider like GKE.


Hating is a sign of success in some ways :)

In some ways, it's nice to see companies move to use mostly open source infrastructure, a lot of it coming from CNCF (https://landscape.cncf.io), ASF and other organizations out there (on top of the random things on github).


It’s one of those technologies where there’s merit to use them in some situations but are too often cargo culted.


For me it is about VMs. I feel uneasy knowing that any kernel vulnerability will allow malicious code to escape the container and explore the Kubernetes host.

There are Kata Containers, I think; they might solve my angst and make me enjoy k8s.

Overall... There's just nothing cool in kubernetes to me. Containers, load balancers, megabytes of yaml -- I've seen it all. Nothing feels interesting enough to try


Versus the application getting hacked and running loose on the VM?

If you have never dealt with "I have to run these 50 containers plus Nginx/Certbot while figuring out which node is best to run each one on", yeah, I can see you not being thrilled about Kubernetes. For the rest of us, though, Kubernetes helps out with that easily.


if a 4-core VM with a single application is hacked, that's it

if there's a kernel vulnerability in something simple (like Dirty COW, or Dirty Pipe, which was about pipes), then the attacker can take over your entire 128-core machine and all the hundreds of applications on it


Kubernetes itself is built around mostly solid distributed system principles.

It's the ecosystem around it which turns things needlessly complex.

Just because you have Kubernetes doesn't mean you need Istio, Helm, Argo CD, Cilium, and whatever half-baked thing CNCF pushed yesterday.

For example, take a look at Helm. Its templating is atrocious, and, if I'm still correct, it doesn't have a way to order resources properly except hooks. Sometimes resource A (a Deployment) depends on resource B (some CRD).

The culture around Kubernetes dictates that you bring in everything pushed by CNCF, and most of this stuff consists of half-baked MVPs.

---

The word "devops" has created the expectation that a backend developer should be the one fighting Kubernetes when something goes wrong.

---

Containerization is done poorly by many orgs, with no care for security or image size. That's a rant for another day; I suspect it isn't a big reason for the Kubernetes hate here.


Reading through the homepage, one part stood out: they burn through quite a few laptops. I wonder, perhaps it’s related to their environment? Salt in the air is probably damaging the electronics. Consumer laptops were never designed to be out at sea for such extended periods.

Now… what could they do about this issue? Assuming this is the root cause of the failures.


> what could they do about this issue?

The problem is the moist, salty air, circulating through the electronics.

Use a tablet (it's closed, doesn't suck in air for cooling), ideally a waterproof one. There are also "sealed", splash-proof keyboards, but even if using a regular keyboard, replacing that from time to time is far cheaper than replacing a whole laptop.


I can't find the link right now but I believe they've learned to clean all the ports and keep the laptops in boxes with desiccants to keep them dry


> keep the laptops in boxes with desiccants to keep them dry

Such a simple solution: have a special low-humidity environment.


The sad reality is probably not.

I personally would prefer organizations to own their hardware, as in the early age of the internet. It was meant to be decentralized. However, in the last two decades centralization has prevailed.

I think it is sad; just look at the CrowdStrike incident earlier this week, or the outages at AWS, Cloudflare, etc. These are examples of why decentralization would give people and organizations power and control.

This mentality of making it “someone else’s problem” through outsourcing is a fairy tale. In the end your business is the one at risk, to say nothing of the overhead and inefficiencies.

Perhaps another analogy: if one eats out every day and never learns how to cook a meal, then when the situation arises where there is no cook around, one would probably starve or resort to simple food sources like whole fruit.


KLM, especially on long flights, has a free wifi tier where you can use the major instant-messaging apps without charge. If you want to surf the web, it’s pretty cheap: if I recall, around 30-40 euro for a 9-10 hour flight, or you can pay less for a single hour.

I was considering “hacking” my way around the limits of the free tier, but paying was just too easy, and it’s affordable.

Sorry for the boring addition/story.


> ...it’s pretty cheap: if I recall, around 30-40 euro for a 9-10 hour flight.

I would love to be in a position where I could consider this cheap. FWIW, I pay about €46 per month for Internet service.


If you pay 1000+ for the flight, an additional 40 euros isn't that bad.


That’s how they get you.

The bias is known as the “anchoring effect”, where your perception of subsequent prices is skewed by the initial high price.


Except you’re on a literal plane flying through the sky. This isn’t the same as being on the ground with permanent cables attached to your internet connection. Not only is it incredible that you get internet at all, it’s totally reasonable for it to cost a lot more!


Assuming there are 300 people on a flight and half of them purchase this, that would be roughly $4,500-6,000 per trip for internet access (150 x $30-40). What do you think the actual margins on this access are, especially relative to the other eye-watering and obscene surcharges airlines impose?


Times have changed; with Starlink, the cost is going to be a rounding error. Free high-speed wifi will probably be available on long-haul flights within the next 3-5 years.


By “cheap” I meant it was still affordable; if it were 10x that, it would not be. Just as a bottle of water costs more at the airport, internet access costs more on a plane in the sky.

Perhaps I am a bit biased: I would expense the bill to the company, and a few hours of work definitely pays back the prepaid internet.

As mentioned, a decade or two ago this was not possible, or was limited to the elites. I certainly don’t feel or behave like an elite, so it is “affordable” to me.


Singapore Air has free unlimited wifi; you only need a KrisFlyer account. The speed is also decent.


Southwest Airlines offers free IM through iMessage and WhatsApp too. You should be able to tunnel internet traffic through iMessage.


I'm trying to recall the name of the app that does this, but one of the travel-tracking apps uses Apple's push notification system (which the network treats as "messaging") to send e.g. gate changes to subscribing devices through this "messaging only" network.

The APNS payload is a JSON blob that's limited to 4KB, with a few required pieces of information but mostly free-form, so it's definitely in the scope of e.g. a (text-only) blog post split over a few messages.
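
To make the 4KB limit concrete, here's a toy sketch of the kind of chunking that would be needed. The custom keys ("part", "total", "body") are invented for illustration, not any real app's schema.

    # Toy sketch: split text across APNs payloads that stay under the 4 KB cap.
    # The custom keys ("part", "total", "body") are invented for illustration.
    import json

    APNS_LIMIT = 4096  # bytes for a regular (non-VoIP) notification payload

    def make_payloads(text: str, chunk_chars: int = 3000) -> list[bytes]:
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        payloads = []
        for i, chunk in enumerate(chunks):
            payload = json.dumps({
                "aps": {"content-available": 1},  # silent push, no alert shown
                "part": i + 1,
                "total": len(chunks),
                "body": chunk,
            }).encode("utf-8")
            # Mostly-ASCII text fits comfortably; non-ASCII inflates the JSON.
            assert len(payload) <= APNS_LIMIT, "chunk too large after encoding"
            payloads.append(payload)
        return payloads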


Believe you're referring to Flighty

