> All this complexity I learned must have been for a reason!
It doesn't have to be so emotional.
Htmx can be helpful for keeping all your state in one place; that makes it simpler to reason about and to change. Lower cognitive load across the system is better for smaller teams, and particularly for lone developers.
You can accomplish the same thing by going full front end and minimizing backend code with a similarly small library that takes care of most or all of it for you.
Living in the front end, with all app state in the front end, has distinct advantages: lower latency for interaction, lower server costs, offline capability. It has some cons, like slower initial render and, if you don't like JavaScript, JavaScript.
Unless you're doing optimistic updates, that isn't true: both still have to talk to the back end.
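Roughly what "optimistic" means here, as a minimal sketch; the endpoint and state shape are hypothetical, just to show why the back end is still involved:

```typescript
// Optimistic update: change local UI state immediately, then reconcile with
// the server. The /api/posts/:id/like endpoint and Post shape are made up.
type Post = { id: string; liked: boolean };

async function toggleLike(post: Post, render: (p: Post) => void): Promise<void> {
  const previous = { ...post };
  // 1. Optimistic: update the UI before the server confirms anything.
  post.liked = !post.liked;
  render(post);
  try {
    // 2. The back end still has to be told eventually.
    const res = await fetch(`/api/posts/${post.id}/like`, { method: "POST" });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
  } catch {
    // 3. Roll back if the request fails or the server disagrees.
    Object.assign(post, previous);
    render(post);
  }
}
```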
> Lower server costs.
Not necessarily. It depends on what back-end language you're using and how your front-end code works. If it's making the API calls anyway, the cost should be the same, or close to it.
> 7. ... Another business I worked for literally hired and fired four separate teams to build an on-prem OpenStack cluster, and it was the most unstable, terrible computing platform I've used, that constantly caused service outages for a large-scale distributed system.
I've seen similarly unstable cloud systems. It's generally not the tool's fault; it's the skill of the wielder.
> The bigger cost is what will happen to your business when you're hard-down for a week because all your SQL servers are down, and you don't have spares, and it will take a week to ship new servers and get them racked. Even if you think you could do that very fast, there is no guarantee. I've seen Murphy's Law laugh in the face of assumptions and expectations too many times.
Let's ignore the loaded, cherry-picked scenario of no redundancy, no spares, and no warranty service, as if all of this became magically hard once cloud providers appeared, even though many of us did it, and have kept doing it, for years....
There is nothing stopping an on-prem user from renting a replacement from a cloud provider while waiting for hardware to show up. That's a good logical use case for the cloud we can all agree upon.
Next, your cost comparison isn't very accurate. One is isolated, dedicated hardware; the other is shared. Junk fees such as egress, IPs, charges for accessing metal instances, IOPS provisioning for a database, etc. will infest the AWS side. And the performance of SAN vs. local SSD is night and day for a database.
Finally, I could acquire hardware at that level of performance much more cheaply if I wanted to; an order-of-magnitude saving is plausible, and it depends more on where it's located, colo costs, etc.
Depends on the problem. But I don't find a lot of companies that are all marketing with a bare cupboard of an engineering department. They exist, but they're not universal. Also, most companies in that state today drifted into it from one where product development, engineering included, was at least competent.
If you find marketing the hardest part, and most here probably will, you are likely an engineer foremost.
You need a good enough product, and you need it in front of the right buyers. Either one can be a significant obstacle to building a business.
Seems par for the course that even AWS employees don't understand their own pricing. I noticed the pricing similarity and tried to deploy to .metal instances, and that's when I got hit with additional charges.
If you turn on a .metal instance, your account will be billed (at least) $2/hr for the privilege in every region where you do so, a fact I didn't know until I had racked up more charges than expected. So many junk fees hide behind every checkbox on the platform.
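Back-of-envelope on what that adds up to, using the $2/hr figure from the comment above; the region count is an assumption for illustration:

```typescript
// Rough monthly cost of the per-region .metal charge described above.
// Both the $2/hr figure and the region count are assumptions, not AWS quotes.
const hourlyFee = 2;          // USD per hour, per region (commenter's figure)
const hoursPerMonth = 730;    // average hours in a month
const regions = 3;            // hypothetical: .metal enabled in three regions

const monthlyFee = hourlyFee * hoursPerMonth * regions;
console.log(`~$${monthlyFee}/month before any instance or storage charges`);
// -> ~$4380/month, on top of whatever the instances themselves cost
```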
Sigh. This old trope from ancient history in internet time.
> Yes, you can probably buy a server for less than the yearly rent on the equivalent EC2 instance.
Or a monthly bill... I can oftentimes buy a higher-performing server for the cost of a single month's rental.
> But then you've got to put that server somewhere, with reliable power and probably redundant Internet connections
Power:
The power problem is much smaller with modern systems because they use far less power per unit of compute/memory/disk performance, and idle power has improved a lot too. You no longer need a 700-watt, 2-socket, 8-core monster; it's outclassed by a modern $400 mini PC that maxes out at 45 watts.
You can now buy rack-mount batteries in modern chemistries that'll go 20 years with zero maintenance; a 4U, 5 kWh unit costs $1,000-1,500. EVs have pushed battery costs down a LOT. How much do you really need? Do you even need a generator if the battery carries the day, even if your power reliability totally sucks?
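A rough sizing sketch, assuming the 45 W mini PC above plus some network gear; the gear wattage and inverter efficiency are assumptions:

```typescript
// Rough runtime for a 5 kWh rack battery feeding a small self-hosted setup.
const batteryWh = 5000;        // 4U rack battery, ~5 kWh (figure from above)
const loadWatts = 45;          // the mini PC's max draw from the comment
const networkGearWatts = 20;   // hypothetical: router + switch + ONT
const inverterEfficiency = 0.9;

const runtimeHours =
  (batteryWh * inverterEfficiency) / (loadWatts + networkGearWatts);
console.log(`~${runtimeHours.toFixed(0)} hours of runtime`); // ~69 hours, nearly 3 days
```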
Network:
It's never been easier to buy network transfer. Fiber is available in many places, even cable speeds are well beyond what they used to be, and there's Starlink if you want to be fully resistant to local power issues. Sure, get two vendors for redundancy. Then you can hit cloud-style uptimes out of your closet.
Overlay networks like Tailscale put the networking side within reach of almost anyone.
> Yeah, you can skip a lot of that if your goal is to get a server online as cheaply as possible, reliability be damned
Google cut its teeth on cheap consumer-class white-box computers when the "best practice" of the day was to buy expensive server-class hardware. It's a tried-and-true method of bootstrapping.
> You have to maintain an inventory of spares, and pay someone to swap it out if it breaks. You have to pay to put its backups somewhere.
Have you seen the size of M.2 sticks? Memory sticks? They aren't very big... I happen to like opening up systems and actually touching the hardware I use.
But yeah, if you just can't make it work, or can't be bothered in the modern era of computing, then stick with the cloud and the 10-100x premium they charge for their services.
> I've worn the sysadmin hat. If AWS burned down, I'd be ready and willing to recreate the important parts locally so that my company could stay in business. But wow, would they ever be in for some sticker shock.
Nice. But I don't think it costs as much as you think. If you run apps on the stuff you rent and then compare it to your own hardware, it's night and day.
You're missing the purpose of the cache. At least for this argument it's mostly for network responses.
HDD latency was ~10 ms, which was noticeable for a cached network request that still has to go back out on the wire. HDDs were also bottlenecked by IOPS: after 100-150 IOPS you were done. You could do a bit better with RAID, but not the 2-3 orders of magnitude you really needed for an effective cache. So it just couldn't work as a serious cache, and the next step up was RAM. That is the operational environment in which Redis and similar in-memory caches evolved.
SSD latency of ~40 µs is fine for caching. Even the 500-600 µs you see under high load is fine when the purpose is caching network responses. You can buy individual drives with >1 million read IOPS, plenty for a good cache. HDDs couldn't fit the bill for the reasons above. RAM is faster, no question, but RAM's lower latency over SSD isn't really helping performance here, because network latency dominates.
A Rails conference 2023 talk mentions this: they moved from a memory-based cache to an SSD-based cache. For a known production system, the Redis RAM-based cache's latency was 0.8 ms and the SSD-based cache's was 1.2 ms, which is fine. It saves you a couple of orders of magnitude on cost, and you can do much larger, more aggressive caching with the extra space.
Oftentimes these RAM cache servers are a network hop away anyway, or at least a loopback TCP request, which makes the SSD-vs-RAM latency comparison mostly irrelevant.
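A rough latency budget showing why; the figures are assumptions in line with the numbers above:

```typescript
// Why SSD-vs-RAM latency barely matters for a network-facing cache:
// the round trip to the cache box dominates the budget. Numbers in ms.
const ssdReadMs = 0.04;      // ~40 µs SSD read (figure from above)
const ramReadMs = 0.0001;    // ~100 ns RAM read (assumed)
const lanHopMs = 0.5;        // assumed round trip to a cache box one hop away

const ssdCacheHit = lanHopMs + ssdReadMs;   // ~0.54 ms
const ramCacheHit = lanHopMs + ramReadMs;   // ~0.50 ms
console.log(
  `SSD-backed hit ~${ssdCacheHit.toFixed(2)} ms, RAM-backed hit ~${ramCacheHit.toFixed(2)} ms`
);
// The difference is a few percent of the total, while the SSD tier can be
// orders of magnitude larger for the same money.
```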
If you don't have scale, you don't need most of the features. Fire up a PC, load the application. Set up an egress port open to the internet. Set up an application backup on a cron job. Done until scale problems arise.
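A minimal sketch of the cron-backup step, assuming a single box with tar and rsync available; the paths, remote host, and schedule are hypothetical:

```typescript
// Example crontab entry: 0 3 * * * /usr/bin/env npx tsx /opt/app/backup.ts
import { execSync } from "node:child_process";

const dataDir = "/opt/app/data";                       // hypothetical app data
const archive = `/tmp/app-backup-${Date.now()}.tar.gz`;
const remote = "backup@offsite.example.com:/backups/"; // hypothetical offsite box

execSync(`tar -czf ${archive} ${dataDir}`);  // snapshot the data directory
execSync(`rsync -a ${archive} ${remote}`);   // ship it somewhere off the machine
console.log(`backup written and shipped: ${archive}`);
```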
Correct, my point absolutely doesn't apply to someone who is just doing their thing, or even maybe two orders of magnitude more than their thing.
But when your local IT goon says it's going to take 8 months to procure the next set of hard drives for your next order of magnitude, it's a real problem, and you have real money to invest in solving it, just not owning-a-data-center money.
Regardless, the switch shows they pay attention and are willing to change.