
You could always rack mount some Mac Minis.


It’s sad that racking and maintaining your own physical hardware is becoming such a lost art… I appreciate the up-front simplicity of cloud offerings as much as anyone, but there’s something to be said for owning your own hardware and avoiding the continual rent payments you’re sending the cloud providers.

The conventional wisdom is that cloud providers are better at infra than you, and that economies of scale make it better to piggyback on what they’re doing, but… AWS is the most profitable part of Amazon for a reason. They’re overcharging you.


For most orgs: AWS is not overcharging IMHO.

When you look at the cost of the hardware + hosting, yes, it certainly looks and feels that way.

But if you've dealt with corporate IT, with 3-6 month lead times on getting hardware, or the politics of getting your hands on hardware to get stuff done, you know better.

AWS is cheap. It gives you velocity.

If your company is large enough that it can offer the elasticity of resources that Amazon offers, or even 1/4 of it, and you have an IT org that will let it happen, then yes, AWS is a waste.

But with AWS... when a project dies, you can wipe its costs out, people won't hold onto hardware so they have hardware for the next project, etc...

Trust me. I've been in IT, I can spec and build rack systems. I am a software dev, and I've been a dev most of my career.

For 90%+ of orgs... they don't have the maturity and skills to handle that type of infra without substantially distracting from their primary business.


I find at AWS I am always wasting engineer time optimizing dumb things. Do you know how much a TB of RAM costs? Or 10 TB of blazing fast NVMe? Less than $5K. How much does that cost at AWS?! This is not even considering bandwidth, which AWS overcharges for so much. Yet I waste time.
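To put rough numbers on the comparison, here is a break-even sketch. All prices are illustrative assumptions for the sake of the arithmetic (the ~$5K hardware figure comes from the comment above; the monthly cloud rate is a made-up stand-in, not a real AWS quote):

```python
# Rough break-even sketch: one-time hardware purchase vs. monthly cloud rent.
# All prices are illustrative assumptions, not real quotes.

def breakeven_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Months of cloud spend needed to equal the up-front hardware cost."""
    return hardware_cost / cloud_monthly

# Assumed: ~$5K buys 1 TB RAM + 10 TB NVMe outright;
# assumed: a comparably specced instance rents for ~$2K/month on demand.
months = breakeven_months(5_000, 2_000)
print(f"Hardware pays for itself in about {months:.1f} months of cloud rent")
```

The sketch ignores power, colo fees, and depreciation on the one side and reserved-instance discounts on the other, so treat it as an order-of-magnitude argument rather than a budget.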

Also, maintaining servers is not hard at a proper data center. It is often more hands off than the migrations cloud providers force on their customers.


Yeah. Also, to be honest, we still have process and approvals to get through to spin up AWS stuff.

It’s not like process disappears just because you’re not on your own hardware. Infra is still its own team with its own budgets, poking and prodding at every damn turn for every little thing until they reject your request; you escalate, and then you have a 4-week battle over needing the space.


Then your company should have stayed on prem.

If you are paying the price of being on prem, which is really the lack of ability to provision and de-provision infra quickly, there's little point to the cloud, unless you just have no infra to begin with (small companies).

I'm in a small firm now. I can't imagine having an approval process to spin up a few instances to run my tests and spin them down after. That'd be silly.


Really? At my old company each division had its own budget and account, and you’d be an IAM member of an account and spin up services under that account, but there was no central authority to send a request to. There were tools to analyze underused services across all accounts (like EC2 instances constantly under 2% CPU load being flagged for downsizing if possible).
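A tool like that is conceptually small. Here is a minimal sketch of just the flagging logic, using the 2% threshold from the comment; in a real version the averages would come from CloudWatch's CPUUtilization metric, but the instance data below is stubbed in so the sketch is self-contained, and all instance IDs are hypothetical:

```python
# Sketch of the flagging logic behind an EC2 underutilization report.
# Real input would be per-instance CPUUtilization averages pulled from
# CloudWatch; hardcoded stub data is used here instead.

def flag_underused(avg_cpu_by_instance: dict[str, float],
                   threshold_pct: float = 2.0) -> list[str]:
    """Return instance IDs whose average CPU sits below the threshold."""
    return sorted(iid for iid, cpu in avg_cpu_by_instance.items()
                  if cpu < threshold_pct)

# Hypothetical 30-day averages per instance:
usage = {"i-0aaa": 1.3, "i-0bbb": 47.9, "i-0ccc": 0.4}
print(flag_underused(usage))  # the two nearly idle instances
```

The interesting part of the real tool is iterating over every account in the org and acting on the report, not this core check.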


Getting good results on AWS/GCP is neither easy nor simple: it’s a different set of headaches.

It’s still a win for a lot of use cases and I still do it quite often, but the meme that it’s this “click and you’ve just hired the best ops team in the world to work for you” and so the 50-500% markup is actually a bargain is horseshit. A Bizon box in your living room fucks AWS up on flops/$ on most instance types and pays for itself in 30 days.

It is one of the best ops teams on Earth: but they’re working for you like the Google search team is working for the user.


The problem with using a cloud provider is that you still need to know what you're doing.

Your application isn't going to magically become HA/DR. You still have to make it that way, from your application design/coding up through the deployment.

I mean, if you're not storing your session IDs in a data store that's reachable by all the nodes behind your load balancer then no amount of infrastructure is going to save you.
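The session point can be made concrete with a toy model: two app nodes behind a load balancer, where a session survives a node switch only because both nodes read the same store. In production the shared store would be something like Redis or a database; a plain dict stands in here, and all names are illustrative:

```python
# Toy model: why sessions must live in a store every node can reach.
# A real deployment would use Redis/Memcached/a DB; a dict stands in here.

shared_store: dict[str, dict] = {}  # reachable by every node

class AppNode:
    def __init__(self, name: str, store: dict):
        self.name, self.store = name, store

    def login(self, session_id: str, user: str) -> None:
        self.store[session_id] = {"user": user}

    def whoami(self, session_id: str):
        entry = self.store.get(session_id)
        return entry["user"] if entry else None

node_a = AppNode("a", shared_store)
node_b = AppNode("b", shared_store)  # same backing store

node_a.login("sess-123", "alice")
# Load balancer routes the next request to the other node: still logged in.
print(node_b.whoami("sess-123"))  # alice
```

Give each node its own private dict instead and the second request comes back anonymous, which is exactly the failure mode no amount of load-balancer configuration fixes.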


A great illustration of this is the GitHub outage a few years back. They had a fairly well distributed application layer, but the database topology at the time didn’t consider the failure mode, even though the application layer did.

That’s a realistic scenario no matter whether you’re bare metal, building out your own cloud, or using someone else’s. No amount of AWS/GCP/Azure/et al marketing changes that.


And when your Comcast link dies, GG man. Oh, what about when you drop a drive?

Yes, you have to learn things to go to the cloud, and I won't say it is all roses, it ain't. But... AWS is less likely to fsck it up.

If you have the constant load to burn the flops 24x7x365... go for it. If you have the ops team to do it... go for it.

If you don't... take a bit of time and learn the cloud which is much easier than getting on-prem right.

Especially for smaller firms, this isn't even a close call IMHO.


Back in the day, Google was an innovator by using lots of cheap commodity servers instead of a few expensive ones and just accepting failures as a fact of life. I wonder, 25% seriously, whether there's an opportunity for a similar mad-genius move: pay for business-class fiber at a half dozen remote employees' homes across the country and just have a good replication/failover strategy. 24/7 on call isn't that big of a deal if you just have to go into your basement to swap a drive. Going to be on vacation? Don't be the primary site while you're out.


People like vast.ai are making moves in this general direction FWIW, if you’re passionate about it.


I work in cloud and you sound like a shill.


More a realist. If you have the scale to go on prem. Do it.

Most firms don't. Or don't have the skills.

Also, the cloud can help an IT project recover from errors. Let's say I'm about to buy 500k of hardware to set up some storage. I get my requirements, I architect it, do my design work, and then buy the hardware. I have to over-provision a bit because of reality and human error. But when I discover that the requirements shift 2 months into my project, and I've already ordered the hardware... I may be hosed.

This isn't hypothetical, this is what happens. Things evolve and shift. The cloud allows for more agility. If your firm is large enough, or has its stuff together enough, go for it on-prem.

I've got 20+ years on prem. I've seen it fail all over. I've seen cloud be a mess too. But if you told me to clean up one, I'd take the cloud.


Anyone who makes the blanket statement “cloud is expensive” really doesn’t know what they are talking about.


For me, I prefer hosted/cloud, preferably managed.

I’m quite capable of setting up whole server stacks. I did it for years, but I stopped, some time ago, and consider myself to be, for want of a better word, incompetent at being a modern admin.

I think I’d screw the pooch, so I prefer that someone who does it every day, handle it.

But I write Swift code, every day, so I’m not incompetent at everything.


The question I've often heard asked when deciding on build vs buy (which can apply to cloud vs. bare metal) is:

Are we in the business of building, maintaining and operating <thing to build> or do we want to buy that as a service instead and focus on our actual core business?

There's more to the cost of building and operating than just the hard costs.

Retaining good modern IT talent is getting harder and harder, and I'm not even talking about salaries. You need a whole department, including strong leaders who can hire, train, and lead the right people.

This is something most companies wouldn't even know where to start with.


I don't think you need a whole department to rack a few minis and run some buildkite agents on them.


It depends on how important it is to you.

You can throw a bunch of boxes in a closet and it'll work. A surprisingly large amount of the early Internet was "a spare box under my desk."

The problems start when they become part of your critical path and you're on vacation and nobody knows WTF is happening.

I mean, it's a risk. If you're OK with that risk then go for it.

It's really about the politics of your office.

If everyone is OK with the idea that the box is in some closet somewhere that's fine. I've been part of a bunch of startups where we were running infrastructure on spare hardware. Sure it's not HA, but we didn't need it...or it was at least HA enough for what we needed.


Yeah, to be clear I wouldn't advocate this route for your core product. But running CI workers? Sure. Especially macs, which have onerous usage restrictions in the cloud that negate most of the elasticity benefits you might otherwise see.


You need one good motivated sysadmin.

But if you can motivate that same sysadmin to spend his skills on something more directly benefiting your company, then you should still buy it in.


I think you need one motivated developer who likes to tinker with hardware for a few hours a month.

Given how popular homelabs are, I don't think this would be too hard to find.


I don’t want to build a business around a dev who likes to “tinker with hardware”.


Why the scare quotes? It's a skill like any other. Startups are built by folks who wear many hats.


Now your business depends on that one person, congratulations. Hope they don’t realize that their skills are better suited to working at AWS/Azure etc. for 2x the money. Which they are.


It's not core stuff. If the one sysadmin leaves, you buy in a solution.


I mean, fair enough I guess. Why play the OpEx vs CapEx game when you can just pay both.


Sure, like any other high-level fictional situation you can surely come up with many valid fictional counter-points of your own, but cloud hosting is popular for a reason.

And I think in most cases companies want to focus their employees and efforts on their core business, and if that doesn't include setting up and maintaining hardware in the long-term, then you don't build, you buy.


I'm a former build engineer and used to do everything on prem, and I gotta say I miss it (not being a build engineer, but the on-prem experience). Since those days pretty much every company I've worked at has moved their CI/CD to the cloud, and it feels so much slower, even when working from home.

I remember twice switching from an in-house Jenkins/TeamCity/whatever type of CI to Azure DevOps, and the thing I remember the most was how much longer it took a build to complete, as well as the massively longer time downloading a build from Azure vs from within the office. Even when working from home, the on-prem stuff was faster.

The thing is, the build/devops teams seem to be about the same size in both cases. It's just kind of worse in pretty much every case when we do CI in the cloud.

Notes

- My experiences are largely for game development so the build times and artifact sizes can be quite large.

- I've only ever had CI/CD experience with Azure, I've not tried other cloud providers

- Since this is game development and we're only using CI, downtime is more acceptable than in other cases. That said, I don't remember much downtime when I was working as a build engineer. I have seen periods of 1-2 hours of downtime once in a blue moon, but then again I've seen that with Azure too. In both cases it wasn't so much the setup as a build script deployment issue.

Also being able to cool off in the rack room when it's a hot day is always a treat :)


Sometimes the flexibility and time savings is worth the added upfront cost. Similar to how companies like to hire consultants or lease office space. Being able to walk away is better for short term, because companies value profits in the short term.


You’ve never had to drive to the colo at 3am huh?


No, I had proper OOB access and could do any kind of power cycling or remote console access from my desktop.


Never seen a hardware failure, I see.


The snark is getting a bit tiring. No, plenty of hardware failures, but we had redundancy. Our availability wasn’t dependent on how fast I could drive to the data center at 3am. Drive dies, who cares, there are plenty of hot spares; we’ll deal with it during business hours. Server dies, who cares, there are lots of them. We have remote hands too, so simple hardware replacement is something you can get cheap onsite labor to do.

If your operations halt until some poor sysadmin has to drive to the colo, you are absolutely doing it wrong.


Where does this stop? Do you produce your own electricity? Farm your own food? Make your own silverware and shoes? Sometimes it's just easier to outsource the things you don't want to (or aren't good at) doing yourself.

If I wanted to host a website, sure, I can build a server out of parts and negotiate with my ISP and get a business pipe and handle all caching and such. Or like I can pay a provider $5/mo and get better performance and reliability with no management overhead. Yeah, maybe over 5 years I'd save more money doing it myself... but it's not worth the time.

If I wanted to generate a photo or a dozen, or a few paragraphs of text, that's like a few cents worth of cloud AI. Maybe low single-digit dollars. Or I could spend thousands on fat GPUs or a Macbook, spend forever training it, and still end up with a sub-par result.

AWS is profitable not just because they're overcharging you but because they are providing a hugely useful service for millions of businesses that don't want to deal with that infrastructure themselves, any more than they'd want to manage their own plumbing or electrical grid or roads and bridges leading to their office. DIY makes sense if you're doing it as a hobby or if your scale is so big that you would incur significant savings to in-house it, but for millions of small and medium businesses, it's just not the most practical approach. Nothing wrong with that.

I mean, it's like saying development is such a lost art... why hire a dev if you can learn to code yourself? Sure, but not everyone wants to, can, or has time.


I hate to say this but it has gotten to the point where I'm starting to farm some of my own food (I'm starting to get fed up with produce quality issues in my hometown).

Still haven't started on the silverware or shoes yet.

I do agree with you though. If you are a non-tech company or a company that lacks the human resources you might as well go with the cloud.


That's not anything to be ashamed of. It's awesome you grow your own food!


Depends on the financials.

My employer has generated its own electricity and steam for decades.

For a small business - different story.


I have a base-model M1 Mac Mini and it's a beast. I'm using it as my build/deploy server and also as a back-end server (for running jobs) for the prototype I'm working on. I also do development on it when I want to use my big monitor rather than my laptop. And I listen to music and run Cookie Clicker at the same time while doing development.

Got three databases up and running too. It's a beast. I'd definitely consider self-hosting with a few Mac Minis, that would be fun and they're really cute, sleek devices too. I paid $650 for it and consider it a great deal. Definitely should've gotten it with more than 8 GB of RAM, but I got it to try it out and haven't yet really needed to upgrade to a unit with more memory.


Interestingly enough, I was actually discussing this with a friend (who works in enterprise IT) the other day. Basically, rack servers are purpose-built for the task, with hot-swappable components, redundant power/storage, multiple NICs, ECC, remote management, and so on. They come with enterprise support and can be easily maintained in the field.

Meanwhile a Mini cluster is literally a bunch of mini pcs in a rack, and idk if Apple even supports this kind of industrial use. While it's a quality product the Mini isn't really designed for the datacenter.


> and idk if Apple even supports this kind of industrial use. While it's a quality product the Mini isn't really designed for the datacenter.

I think they know of it and tacitly approve of this use case, as evidenced by the Mac Mini having the same form factor for ages. They’re well aware that a lot of people use Minis (and Studios now) in data centers, and that the Mini footprint is sort of “standardized” at this point.


That 10GbE NIC option on the low end Mac Mini was a dead giveaway.


IIRC, Apple did indeed have a server SKU for the Intel Mac mini at some point.


It's called Xserve:

https://en.wikipedia.org/wiki/Xserve#Intel_Xserve

But since Apple discontinued Xserve and macOS Server, it seems like they don't care about this business anymore.


They actually had a Mac Mini Server as well for a bit. It made sense because it had a second hard drive instead of an optical drive and came with Mac OS X Server, back when that was a standalone $499 product: https://support.apple.com/kb/SP586

(Not sure what differentiates the later model Mac Mini Servers from the regular Mac Minis, since Mac OS X Server just became a $19 App Store purchase, and optical drives were no longer a thing in Mac Minis)


I have one of the 2009 Mac mini Servers running Ubuntu 22.04 LTS like a champ. It’s still a great machine. Upgrading the HDDs to SSDs was a bit of a chore, but doable.

They discontinued the Mac mini Server line in October 2014, which was still sold with two drives instead of one. Configurable to order with SSDs by that time.


There was also a "server" model Mini but it was very short lived and was basically a regular Mini with the "Server" software pre-installed, something that you could just throw in via the App Store with one click anyway.


It had a five year run and saw four different hardware models. It included two hard drives instead of either one hard drive and an optical drive, or just one hard drive (after they ditched ODDs).

Mac OS X Server was its own operating system originally. It was still the same core OS, but had a ton of additional servers built in. Non-exhaustively, they included IPSec VPN, email, calendaring, wiki, SMB and AFP file shares (including support to act as a Time Machine backup destination), LDAP, DNS, and software update caching before it came to macOS proper. The Server app released via the App Store was a shadow of Mac OS X Server.

These were quite popular in small professional offices like law firms.


I saw it save the ass of a client who had one, as they got robbed and all their desktop computers, mostly iMacs, were stolen. The Mini was more or less lost in the wiring in the network closet and was overlooked, so everything had backups.



