
I don't agree with OP. On the contrary, much of backend software is becoming a lot more reliable than it ever was before. Lately most backend dev leverages managed cloud services like DynamoDB, Bigtable, etc., which have north of 99.99% uptime. Back when I was at Yahoo!, hitting those SLIs was a monumental goal. Nowadays it's expected.

Any microservice built on these cloud services is already far more reliable than one built from scratch. Obviously, there is plenty of room to shoot oneself in the foot. But overall, I am far more optimistic about backend core services than OP is.



I agree. The author seems to be hung up on the way things used to be, but the market has shifted completely.

Containers completely changed the market and made redundancy cheap. As a result, you might still have a fault here and there, but overall the system will be far more stable. Staged rollouts used to be reserved for top-tier enterprises; they are now accessible to everyone.

His main complaint is about clients, but I think his memory of the good old days is seriously lacking or tainted. Software used to suck, and web-based software brought a huge boost in stability and reliability.


The problem is you're both right. Containers worked great to isolate and manage a large class of faults (though arguably e.g. BEAM would handle the same class of faults better).

Let's say the fault chance due to program design is x, and container orchestration independently fails to correct a given fault with probability y. Then we went from reliability (1 - x) to (1 - x * y). Total system reliability goes up even though we didn't get better at writing programs, though so does complexity.
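
To make that concrete, here's a quick back-of-the-envelope sketch (the numbers are made up purely for illustration):

    # Probability the program itself faults in some window (illustrative)
    x = 0.01
    # Probability orchestration fails to mask a given fault (illustrative, independent)
    y = 0.1

    print(1 - x)      # 0.99   -- reliability of the bare program
    print(1 - x * y)  # 0.999  -- reliability with orchestration masking faults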

But actually, the market generally only wants (1 - x) reliability. That y was invented for the few situations where x alone was unacceptably large relative to other classes of software. Now everyone is using it, which means the pressure on x relaxes across the board. If container orchestration made you 10x as reliable, most CFOs and customers will be even happier with 2x as reliable at the same cost, so your actual software will degrade to be 5x more unreliable.
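
Running the same made-up numbers through that argument: if orchestration masks 90% of faults (y = 0.1) but the market only demands half the visible failures it saw before, the program's own fault rate can grow 5x before anyone notices:

    x = 0.01           # original program fault rate (illustrative)
    y = 0.1            # orchestration masks 90% of faults (illustrative)
    target = x / 2     # the market only wants 2x fewer visible failures

    # Solve x_new * y = target: the fault rate the program can degrade to
    x_new = target / y
    print(x_new / x)   # 5.0 -- the program can be 5x flakier at the same perceived reliability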

So for people who want to use "software" and not "systems" (which includes developers, hobbyists, and many people in specific lines of technical work who combine prêt-à-porter with bespoke software), the world now sucks horribly.


Agreed. TL;DR: application-level reliability did not improve; orchestration/ops system-level reliability made up for it.


These managed services are of less benefit when the websites running on them, like the author describes, are flaky and unreliable. Like the OP, I often open the JS debugger - including for relatives.


There have been quite a few major Internet disruptions in recent times, and all those "a lot more reliable" services proved not to be that reliable after all. We're talking about hours of downtime here. And all those who decided to depend on them were suddenly down, at the mercy of their overlords.

It may get better over time, but so far it hasn't exactly been a panacea.


reliability is giving the right answer. availability is being ready to give the right answer.

cloud services have poor reliability and good availability.



