
Interesting to see this article again, since it was the trigger for me starting a homelab. After realizing cloud services were putting a major dent in my pocket just to get a lousy startup idea off the ground, I started to wonder whether there was a cheapest way. (I'm not cheap, but I'm very frugal.)

Nowadays internet speeds are great for self-hosting. I have a business-line internet connection at home with ~1 Gbps up and down! I bought a couple of 6-7 year old enterprise Dell servers (2x 12-core Xeon, 128 GB RAM each) and no longer pay any cloud provider ... I'm also hosting two backends for mobile apps with decent traffic for friends' startups!

The learning experience has been tremendous! It has actually gotten a lot better and easier with new solutions coming out for homelabs. Get started with Proxmox clusters and go from there...



After this talk [0] I had several very interesting conversations with media folks about the real costs and advantages of "cloud".

One thing that came up is development. Modern devops culture is quite a good thing, and what's lovely about "cloud" - as in the ability to quickly buy compute and storage capability - is that ideas you would have tinkered with in on-prem labs (or across private sites) for months can be imagined and prototyped in hours.

I'm a big advocate of rapid prototyping as a _huge_ business lever, because the ability to try out ideas quickly, to easily reconfigure things, is the key for time to market. You can quickly see if something is going to fly or not.

And that's where the advantage ends.

After that, it's all downhill. Asymmetry. Lock-in and portability. Trust and privacy issues. Security perimeters. Unpredictable costs....

So the way forward is to render unto Caesar only the things that are Caesar's.... in other words, take the advantages of "cloud" when it suits you, and then get the hell out of Dodge.

What's ongoing from those conversations is media companies taking an interest in strategic planning to build, and even share, their own distributed computing resources to pull back to once a technology is off the ground.

Someone even mentioned that it's time for a European Cloud initiative.

[0] https://www.youtube.com/watch?v=6OL2XmlgpdA


Yeah, it's interesting that devops is a lock-in on the cloud when (if you squint, a lot) it should be the opposite: there are devops tools that almost... should make you more independent.

IMO it would be a sneakily powerful declaration by major corps that your app should be built to be deployed nearly at will on at least two clouds. I mean, Terraform is so tantalizingly close to it, until it isn't. This would be like Bezos sending out the "thou shalt service everything" memo.

AWS knows this and they are all about lock-in. They want you on the more complicated products, because those are really hard to move off of. Oh yes, don't use cassandra, use dynamo. Man you'll never move off that.

So if you let the devs have "you can develop on AWS" but then they have to deploy on Hetzner ... that will force the devs to be far more cloud-independent. I guess if I was a CIO (never let me become one) I'd try to institute that.


> I'm a big advocate of rapid prototyping as a _huge_ business lever, because the ability to try out ideas quickly, to easily reconfigure things, is the key for time to market. You can quickly see if something is going to fly or not. And that's where the advantage ends.

Too many businesses aren’t even properly utilising that key advantage. They’re moving servers to the cloud but still using their outdated development and deployment processes, and things move just as slowly in the cloud as they used to on prem. They know what Infrastructure as Code means, but only as separate words.


> They’re moving servers to the cloud but still using their outdated development and deployment processes, and things move just as slowly in the cloud as they used to on prem.
For many non-tech corps, the purpose of moving to cloud is to downsize IT admin staff. It works well.


nah, not really. we hoped it would. it didn't.

your sysadmins are now Cloud Admins and can get an extra 50k in the market with GCP or AWS certs. you're going to bump up their salary, right?

the useless offshore team is now a Cloud useless offshore team, and also wants their 20% bump. And bet your ass that Tata or Cognizant will get blood from a stone to make it happen, cuz as useless as they may seem you still need them.

change control meetings haven't gone anywhere, and if anything they're more important since now your entire infrastructure is a long one-liner away from being borked; cloud is an API, basically. just because you're not racking and stacking doesn't mean the demand, architecture & design, review boards, implementations, and due diligence steps go faster.

now we need an entirely new strategy to handle costs, since our architects and procurement can't track day-to-day cost changes easily. so when SuperDev decides he's going to #yolo 6 VMs and a few dozen containers into existence to test a few things, we now have to launch a technical and financial investigation into 1) how that happened, and 2) how much it cost.

still gotta use fortigates or palo altos, and internal networking hasn't changed too much overall; lean teams to begin with.

so in exchange for shoveling huge quantities of OpEx to companies that don't deserve more money, we don't really cut labor, and lose control of practically every other facet of our infra. Hope that Azure AD doesn't fail again, cuz the dashboard says 100% green but nothing is working and the execs are concerned.


What do you mean by asymmetry?


Maybe the most demonstrable is egress/ingress bandwidth. But since there's a power asymmetry when dealing with mega-corporations I had other asymmetries in mind too.


The cloud providers are so large that they don’t really need you. It’s all about churn management at the macro level. With hardware and software, there’s always an end of quarter leverage point.

I spent most of my career in large enterprises. The leverage you have against AWS or Microsoft is 0 compared to the old days. They are probably landing more infrastructure every month than my global company had in datacenters 15 years ago.


I feel like a lot of that shifted now.

You can just have on-premise k8s and keep most of the velocity gained from developers being able to "just run stuff" instead of anything having to go thru sysadmins.

You can just rent a few servers off OVH to start and not have to worry about actual hardware, while still being a few times cheaper than cloud.

Yeah, you won't have access to the slew of cloud services and will have to deploy your own database, but with the amount of readily available code and software to do that, it doesn't really slow down experimenting all that much.


> You can just have on-premise k8s

You can deploy bare k8s, but then you'd figure that you need a lot more, starting with a load balancer (luckily there is MetalLB).

It's all possible, but not simple.


Yeah the delta in cost/CPU/memory/storage between self hosting on second hand stuff and paying cloud services is insane. It's a no brainer if your use case needs beefier hardware than the typical $5/month VPS host and you don't have enterprise level liability/uptime expectations.

With stuff like proxmox you get a pretty similar level of ease of use to managed VM services too.


I generally find these discussions unproductive on HN because of how binary they become, but definitely once you go off the beaten path it seems a lot cheaper (sans power though) to run your own infra. I've been thinking of doing some data ETL for something that generates 1 GB / day and has retention for 30 days and it's much cheaper to host a Postgres myself at that scale than run it in the cloud. I could store in something like S3, but then I need to deal with querying S3. I'd like to combine some of this with cloud infra but I suspect cloud bandwidth costs would kill me. Colo-ing 2-3 servers is probably the best bet here.


I was an ops engineer at Fastmail. We ran our own hardware: a mix of Debian on one stack and SmartOS (Illumos, from Joyent) on another, and there were plenty of physical problems and costs. Putting petabytes of data, with replication and syncing and all that, on the cloud would have been a cluster F. On the other hand, we missed out on a lot of the awesome newer deployment tooling because we had written our own. Before I left we swapped out a good chunk of that snowflake software, but it was actually impressive how well it worked. Good multi-data-center multi-master MySQL support in 2008? Killer feature. Maintaining that in 2020? Horrible.

Also there were plenty of upstream routing issues where solving that became a headache. The #1 thing we wanted was uptime and the #1 outage was our upstream providers having trouble routing to other upstream providers.

The number one reason and tradeoff for cloud is uptime and availability and the cost of not having it


> The number one reason and tradeoff for cloud is uptime and availability and the cost of not having it

100%. I ran API networking teams at a Big Tech and I know the difference. My workload here is at a hobbyist+ grade, no controls, no compliance, best effort SLA. I don't want to ignore the reality that to get enterprise grade uptime and availability, it's really hard to do this on prem.


> The number one reason and tradeoff for cloud is uptime and availability and the cost of not having it

Uptime? There have been quite a few catastrophic cloud failures. And some lasted hours.

Five nines is something like 5 minutes of downtime a year. The clouds aren't anywhere near that.
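The "5 minutes a year" figure is easy to sanity-check; a quick sketch in Python (the availability targets are the usual marketing "nines", nothing vendor-specific):

```python
# Yearly downtime budget implied by a given number of "nines" of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes(nines: int) -> float:
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes(n):8.1f} min/year")
```

Five nines works out to about 5.3 minutes a year, which is the figure quoted above.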


If you use it correctly, with multiple availability zones and even multiple regions, you can reach very high reliability. They surely don't offer five 9s for a single zone. I am not aware of many multi-region outages. They are also getting better over time, spending a lot more engineering hours on reliability than most companies can. And if they fail to deliver on the SLA they might give you money back (depending on your contract, I guess).
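The multi-AZ math works out roughly like this, under the strong (and sometimes violated, e.g. by control-plane outages) assumption that zone failures are independent:

```python
# Availability of a service spread over n zones, assuming zone failures
# are statistically independent: the service is down only when every
# zone is down at the same time.
def combined_availability(single_zone: float, zones: int) -> float:
    return 1 - (1 - single_zone) ** zones

for n in (1, 2, 3):
    print(f"{n} zone(s): {combined_availability(0.999, n):.9f}")
```

Two independent 99.9% zones already get you to roughly six nines on paper; correlated failures are what keep the real number lower.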


Do you really think on-prem has more uptime than commercial cloud? For non-tech companies, not even close. Commercial cloud is adding nines to uptime and saving money by removing in-house IT admin staff.


the biggest miscalculation is really on expected uptime. People say they need 5 nines, yet take their car in for service twice a year for a tire rotation and oil change. How much uptime is _really_ required?


The cost gap is insane between actually renting servers (or even classical "bare" VPS providers) and cloud, too.


I'm always curious about the power cost here. Everyone I've talked to who runs servers at home either lives in a low COL area where electricity is cheap (and electricity is dirty, but I don't really want to get into that) or just pays the power as a sort of "cost of playing around" which is completely fair (it's not like I need to use my table saw at home.) Most folks I know who run servers use a lot more power than I do at home. Of course I don't know how much of that is just that the folks that build expensive homelabs being attracted to beefy, power hog servers as opposed to an actual incentive to cut power costs.


My rack draws around 750W at mid-idle (not totally idle but not doing anything big either). I pay about $0.14/kWh for delivered electricity in southwest Ohio. That means the rack costs me approximately $77 per month, or approximately $920 per year. That's not a particularly small cost, but it is far less than I would pay to host the same things at a professional data center, or *shivers* the cloud.

If I really wanted to optimize for power efficiency, I could do much better. I've seen decent homelab setups (with NAS, router, switch, and some slow compute nodes) that run under 100W, which would cost me only $10 per month in power, and would be far more powerful than a small DigitalOcean droplet.
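For anyone wanting to run the same numbers on their own setup, the arithmetic is just watts x hours x rate; a small sketch using the figures from this comment (730 hours/month is an average):

```python
# Monthly electricity cost of a continuous load: watts x hours x $/kWh.
HOURS_PER_MONTH = 730  # average month (8760 h/year / 12)

def monthly_cost(watts: float, usd_per_kwh: float) -> float:
    return watts / 1000 * HOURS_PER_MONTH * usd_per_kwh

print(f"750 W @ $0.14/kWh: ${monthly_cost(750, 0.14):.0f}/month")  # ~$77
print(f"100 W @ $0.14/kWh: ${monthly_cost(100, 0.14):.0f}/month")  # ~$10
```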


> I've seen decent homelab setups (with NAS, router, switch, and some slow compute nodes) that run under 100W, which would cost me only $10 per month in power, and would be far more powerful than a small DigitalOcean droplet.

I do some homelabbing at home and i do work for some "big tech". The difference, essentially, is reliability and high availability.

Most homelab posts i see are one decent (not even large) disaster away from losing everything.

I do something in that space at home, mostly around data backup and replication, but i am well aware that in case of decent disaster I'd probably be at least a couple of days offline (potentially up to one or two weeks).

Most people underestimate this facet of the discussion.


Agreed. Most people seem to take a very cavalier approach to backups, for example, or using risky Ceph/ZFS setups without understanding the consequences.

In the absolute worst-case scenario for me, barring a lightning strike that fried my UPS and entire rack, I would lose a day’s worth of changes, as my backup node kicks on daily to ingest snapshots. Downtime would probably be 15 minutes or so - boot up backup, change target IP address on other nodes to access it.

I’m only running RAIDZ1, so I’d have to lose two disks in a VDEV for this to occur. I understand and accept the risks, but were I hosting anything of import, I’d probably accept the additional power draw of keeping the backup server on 24/7 and stream snapshots to it continuously.

Also, of course, I’d be streaming those snapshots off-site. Currently I do so for things like photos and documents.

If I lost 2/3 of my compute nodes, I’d be down for a bit longer, as I’d have to shift workloads to the backup server (which is a dual socket with enough RAM to handle it), and currently it doesn’t run K8s. I can shift things to Docker Compose easily enough, or I suppose I could register it as a worker node that’s just tainted most of the time.


I'm posting this just to compare power rates not to win an internet fight. My power bill is tiered, and at the lowest tier I'm paying double your rates ($0.28 / kWh.) Once I add a rack like that on I'd probably be bumping up to the next tier which will increase my power to something like 2.5-3x that rate. Because of the tiered system even my non-server load will be metered at a higher cost. It's really not worth it here to run servers at home if I'm not being highly power conscious.


Meanwhile in Germany, the tiers go the other way around: base fees and meter costs raise the effective rate at the low-consumption end, while at high consumption the (usually even somewhat discounted) per-kWh rates dominate. Industrial loads also get assessed for peak power sustained for at least a few minutes at least once a year, to reflect the capex/depreciation of sufficiently overprovisioned distribution transformers and other last-mile power-handling capacity. This is relatively negligible if you average over 10% of your peak draw, though. And even beyond that, recent energy prices may have shifted the balance toward even more peaky consumption.


I looked into cloud, but it's just not really feasible. It really is possible to do a lot with a little at home.

I'm using about 60-100W for my home-prod, and a lot of it is "older". I'm running about 15 small VMs at any given time these days, and probably 20 containers.

I think my biggest single draw is the Mikrotik CCR1036 in the garage, but it saved me from buying new gear. Sure there's a break even point with hydro, but that's years in the future when the device is free. It's also pretty fun to watch VPN connections testing at 700Mbps from home.

I don't really care about uptime, and I've got gigabit fibre to the house, so bandwidth isn't a huge problem. It worked fine on 300/60Mb cable too.

Ryzen 3 2200G, 32GB RAM, 1T NVMe, 10TB HDD. This one runs services.

Orange Pi 5 16GB with another 500GB NVMe. This runs redundant services and monitoring.


My rack lives in an expensive energy market, and pulls just under 500W. Almost 40% of that is a Dell R430, which by itself costs about $50/mo to run.

Next time I have the energy (hah) for this flavor of home maintenance, the idea is to split the work it does off to a few fanless systems; I'm pretty sure I can knock at least 100W off that. The main challenge there is storage - I have a SAS shelf and need a low-wattage machine that speaks SFF-8088.

Experimented some with a couple Raspberry Pis for some things, but they just don't seem built to run 24/7. One lasted about 4 months, the other died at about 12. (They PXEbooted, no local storage, it wasn't that.)


I have a Plex server with 6 spinning disks and a GPU which costs me around $20/month. The most expensive part is when the GPU is running (based on my Kill-A-Watt.) I host around 15 Docker services on it.

I think even with the cost of electricity, you can easily beat cloud hosting on a per-month basis. But, factoring in the initial cost of hardware and electricity, it's probably a wash.

But then, if you're running a hypervisor, and would otherwise have a LOT of VMs in the cloud, maybe it swings back the other way?
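One way to reason about the "wash" is payback time of the hardware against the cloud bill it replaces. All the numbers below are illustrative assumptions, not quotes from any provider:

```python
# Months until self-hosting pays back its upfront hardware cost,
# given monthly power cost vs. the cloud bill it would replace.
def breakeven_months(hardware_usd: float,
                     power_usd_per_month: float,
                     cloud_usd_per_month: float) -> float:
    saving = cloud_usd_per_month - power_usd_per_month
    return hardware_usd / saving if saving > 0 else float("inf")

# Illustrative: $800 of used hardware, $20/mo power, vs. an assumed
# ~$80/mo for comparable cloud VMs plus block storage.
print(f"~{breakeven_months(800, 20, 80):.0f} months to break even")  # ~13
```

Under those assumptions it stops being a wash after about a year, and everything past that is savings (minus hardware failures).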


My quarter rack Home lab in my basement is pulling 133 Watts right now with a low power 1U Supermicro SOC, an older Supermicro mini tower used as a NAS, and my networking gear and Internet modems. I don't have any intense workloads running at home. I was really power conscious when buying my two home servers as I know it is very easy to buy a beefy server off of Ebay that sounds like a jet engine 24/7 and pay out the nose on my home power bill.


My rack is about 500W, up to about 700W when the backup server kicks on (lots of spinning disks).

That’s a UniFi Dream Machine Pro, UniFi 24 port switch (powering two APs), 3x Dell R620s with a few SSDs and NVMe, and 2x Supermicros (one of which is the aforementioned backup server), each with a lot of spinners. Also some additional load from the overhead of the rack UPS.

I pay about $0.08/kWh, although with the base fee of $40 it's more like $0.11/kWh. In any case, it means I pay maybe an extra $30-40/month for my homelab, plus whatever additional heat load costs it places on my A/C.

If I moved to somewhere where electricity was significantly pricier, I would probably either invest in home solar, compress compute to a single node, or both.
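The jump from $0.08 to an effective ~$0.11/kWh comes from spreading the fixed base fee over usage; a sketch (the ~1,300 kWh/month household total is an assumed figure consistent with the quoted rates):

```python
# Effective $/kWh once a fixed monthly base fee is spread over usage.
def effective_rate(base_fee: float, per_kwh: float, kwh_per_month: float) -> float:
    return (base_fee + per_kwh * kwh_per_month) / kwh_per_month

# A $40 base fee on top of $0.08/kWh lands near the quoted ~$0.11/kWh
# at roughly 1,300 kWh/month of total usage.
print(f"{effective_rate(40, 0.08, 1300):.3f} $/kWh")
```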


Don't forget that if you live in a cooler climate, server heat can offset heating bills if you host at home.


What will you do if your startup suddenly takes off?

That's the main advantage of the cloud early on -- flexibility.


> What will you do if your startup suddenly takes off?

Sit back and relax? Being massively overprovisioned is a benefit of homelabs.


> Sit back and relax? Being massively overprovisioned is a benefit of homelabs.

there are many dimensions to provisioning, not all of them are one ebay/amazon/newegg purchase away.

you could hardly get a symmetrical 10 gbps internet connection at home (in most places), and if you do it would be unlikely to be timely (and in that case, your business could probably be suffering).

Frankly, i think that the time when your startup is taking off might be the right time to start thinking about moving to the cloud (or to a proper datacenter).

If anything, if your startup is taking off then you're starting to get a real sense of what kind of compute and storage you actually need, and can maybe negotiate accordingly (e.g. a long-term commitment for resources in some clouds gives you very relevant discounts).

EDIT: regarding the internet connection... on a consumer connection, most contracts include a minimal guaranteed bandwidth that's usually way lower than the advertised peak bandwidth. i wouldn't be surprised to discover people getting throttled at those speeds if they start getting serious traffic...


The shitty code most places are running have such horrific latency between awful SQL queries and choices like Node as a backend that the difference between a 1G and 10G uplink is unlikely to make a large difference, especially if you’re caching static content with a CDN.

This does presuppose you have a business class internet connection, of course.


Upgrade to a professional connection ?


quoting myself:

> you could hardly get a symmetrical 10 gbps internet connection at home (in most places), and if you do it would be unlikely to be timely (and in that case, your business could probably be suffering).

if your startup takes off and you can't get a professional connection on time... it might just drive users away. particularly paying users.

and depending on where you are, that might not even be possible at all.


Depends on what "taking off" means for the start up too. Taking off at a mass consumer scale might need that flexibility, taking off and even getting to market saturation in a specific B2B might be achievable on a raspberry pi level hardware. There are many more of the later.


Agreed - 1gbps is a surprising amount of bandwidth, you could easily host a fairly popular mobile app, saas, etc with plenty of breathing room. And in a lot of cases, you can just move your static file hosting behind Cloudflare, or onto something like s3, and give yourself even more room to grow.
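To put rough numbers on "a surprising amount of bandwidth", here's a ceiling on request throughput for a 1 Gbps uplink (the response sizes are assumptions, and real throughput will be lower after TCP/TLS and protocol overhead):

```python
# Rough upper bound on requests/second a link can sustain,
# ignoring protocol overhead: link bytes/sec divided by response size.
def max_requests_per_sec(link_gbps: float, avg_response_kb: float) -> float:
    bytes_per_sec = link_gbps * 1e9 / 8
    return bytes_per_sec / (avg_response_kb * 1000)

print(f"{max_requests_per_sec(1, 100):.0f} req/s at 100 KB/response")  # 1250
print(f"{max_requests_per_sec(1, 20):.0f} req/s at 20 KB/response")    # 6250
```

Even at a generous 100 KB per response, ~1,250 requests/second is far beyond what most early-stage SaaS apps ever see.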


And even if you only use 500 Mbit/s, that's more than $3k in cloud egress fees saved.
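A rough check of that figure, assuming a typical hyperscaler list price of ~$0.09/GB for egress (actual tiers vary by provider and volume, so treat this as a sketch):

```python
# Monthly egress cost of a sustained outbound rate at an assumed $/GB price.
SECONDS_PER_MONTH = 730 * 3600  # average month

def egress_cost(mbit_per_sec: float, usd_per_gb: float = 0.09) -> float:
    gb_per_month = mbit_per_sec * 1e6 / 8 * SECONDS_PER_MONTH / 1e9
    return gb_per_month * usd_per_gb

print(f"${egress_cost(500):,.0f}/month")  # well over $3k
```

A sustained 500 Mbit/s is ~164 TB/month, so at that list price the savings clear the $3k figure with a lot of room to spare.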


Any business that you can run from a home server with a residential business line is not the kind of business we are talking about here. Yes, you can potentially serve a lot of customers with that setup, but your reliability story is terrible so you better have very forgiving customers.

What if your internet goes out? Even with a business line, I've had to wait five days for them to replace a fiber line that a squirrel chewed through.

What if the power goes out? I just had a five hour power outage. Even if you have a battery backup, when the neighborhood power is out for a while, the ISP equipment will die when its batteries go out.

What if your hardware dies and you aren't home to switch it out, assuming you even have spare hardware?

What if your A/C goes out and your server overheats and has to get shut down?

All of these are things you usually don't have to deal with when using the cloud or even a $5 VPS, because they design for all of these failure cases from the start.

If you're running a business from your house, it is by definition a lifestyle business, and that's not really what we are talking about here.


> Any business that you can run from a home server with a residential business line is not the kind of business we are talking about here.
What kind of business are we talking about here? What does the "taking off" in your previous post mean, exactly?

Depends on what you're trying to do.

You are not going to be able to run a Netflix competitor out of your garage.

You're not going to get high availability without some significant investment and even then you'll be at the mercy of whatever your ISP is doing upstream in the event of a power outage. I live in an area where we average something like 99.99% power uptime, but not everybody is so lucky.

You could, potentially, host something that serves up something non bandwidth-intensive to tens of thousands of users, give or take an order of magnitude. (SaaS, APIs, etc) You can do a lot of interesting things with a homelab and some of them are potentially profitable.

Perhaps more crucially: you're not exactly locked in to a homelab. You can start with that and once you reach a certain point, migrate to colo or cloud.


over the last 15 years my residential internet and power supply have been considerably more reliable than us-east-1


The only realistic concern for me here is the ISP failure. Even then, if I really wanted to I could have both AT&T and Spectrum uplinks with an LACP bond.

I have enough battery backup to run my rack for about 30 minutes, more if I shut down a node. That’s more than enough time for me to set up my generator and route power; it has an extended gas tank and can power the rack, fridges/freezers, fans, etc. for over 24 hours. I periodically run drills on this to ensure it’s doable, and that the gear works as expected.

If I’m not home, then yes, the latter would fail. Dual hardware failure is an unlikely scenario; single node failure is handled between K8s and Proxmox clustering.


That would be a dream come true! Everything is automated with Infrastructure as Code tools like Terraform, Pulumi, Dagger, etc... You can easily point to another K8s cluster and redeploy there, then update the domain's DNS records to divert traffic.


Additionally, if the company starts to grow you can hire other people who are familiar with the cloud provider (and it has extensive documentation). The last company I worked at that had stuff running on a rack in the office just had a "Keep Calm and Sudo On" printout taped to the cage and the guy who'd set it all up had quit.


The app still needs to be written to be scalable. And if shit hits the fan, moving to the cloud (...or just renting dedicated servers at 1/3 the cloud cost) isn't too bad.


This is precisely what I plan on doing when I eventually launch my app. I have 3x Dell R620s in a Proxmox cluster running K8s nodes, Ceph on NVMe with CSI-RBD, and a separate Supermicro 2U exposing ZFS for spinning storage. The only missing link is 10G networking, because 10G switches aren’t super cheap.


This is the most undersung hero. Internet speeds are reaching datacenter equivalents. This will open up so many possibilities that it makes it seem like the 2000s again. And finally we might actually need IPv6.


I just moved out of the consumer internet dystopia that is California (or most of US I guess) to Spain and am just about to get 10 Gbit symmetric for €25/mo. Even if I get half of that I’m ecstatic. This kind of infrastructure is so conducive to all kinds of interesting and “decentralized” innovations.


I wish I could get that in London :(.

UK internet infra is backwards.


Bad regulation. Ofcom is allowing Openreach to keep asymmetric pricing for FTTP. Unjustified market segmentation.

The govt has a focus on download speeds (which are useful for high quality video) and does not care about upload or latency which filters down.

On reflection, even the terms "upload" and "download" are based on the assumption that connections are for consuming media.


In Germany you pay €60+/month for 100 Mbit down.


Yes indeed. My Bell.ca ~1 Gbps fiber costs $100/month, plus $20 for a dedicated IP. Since it's business-line fiber it comes with monitored service quality, meaning it's prioritized over regular/residential fiber internet (claimed!)

BTW, I can get multiple lines if i'd ever need it


Half of Asia is already using IPv6; I think Africa had a single /24 of IPv4 reserved for it back in the day.


Datacenter network speeds are ~400+ Gbit between data centers, and in the Tbit range within them. A typical server now has a 10 Gbit NIC.


I basically wrote the same thing a few months ago. TL;DR: for most systems, public cloud is neither cost-effective nor secure. Another interesting Apple exploit using Spectre on Safari is on the front page today. Most systems are better off self-hosted, with cloud tools to manage them: https://open.substack.com/pub/rakkhi/p/why-you-should-move-b...


Was the business line much more expensive than home rates? And were you given a static IPv4 address?


Not GP, but related: I've had the same DHCP IP from Xfinity (residential internet service) for well over a decade (and I think for 16 years).

At what point does a dynamic-but-unchanging IP become functionally static?


In my case Xfinity kept the same IP for me for two years, then an outage happened where everyone in the neighborhood lost connectivity. When connectivity was restored I got a different IP.

I feel like the biggest difference is the fact that there's no guarantee that the dynamic IP won't change, so all systems need to be prepared for that, or you need to be mentally prepared for that day.


I am with Bell in Canada. My business fiber internet is $100/m and I also pay $20/m for a dedicated IP.


Renting dedicated server off company like OVH, or even co-locating your own is also far cheaper than cloud, few times over, without fuss of turning your house into datacenter.


There is a time cost, but most of it is a one off, and not that difficult, and there are tools to make it easier.



