
It’s also opex vs capex, which is a battle opex wins most of the time.

Opex is faster. Login, click, SSH, get a tea.

Capex needs work. A couple of years, at least.

If you are willing to put in the work, your mundane computer is always better than the shiny one you don't own.


That's because of company policies. An SME owner will buy a server and have it in the rack the next day.

Of course, creating a VM is still a Terraform commit away (you're not using ClickOps in prod, surely?)


If you want something at all customized, it takes longer than that to receive the server. That being said, you can buy a server that will outperform anything the cloud can give you at much better cost.

"SME" and "a server" are doing some heavy lifting here.

If you want a custom server, one or a thousand, it's at least a couple of weeks.

If you want a powerful GPU server, that's rack + power + cooling (and a significant lead time). A respectable GPU server means ~2 kW of power dissipation and considerable heat.

If you want a datacenter of any size, now that's a year at least from breaking ground to power-on.


And multiple years from the boardroom making a decision to build a data center to breaking ground.

It depends. Grant funding (e.g. in academia) makes capex easier to manage than opex (because when the grant runs out you still have the device).

I think it wins because opex is seen as stable recurring cost and capex is seen as the money you put in your primary differentiation for long term gains.

For mature enterprises, my understanding is that the financial math works out such that the cloud makes sense for market validation, before moving to a cheaper long-term solution once revenue is stable.

Scale up, prove the market and establish operations on the credit card, and if it doesn’t work the money moves onto more promising opportunities. If the operation is profitable you transition away from the too expensive cloud to increase profitability, and use the operations incoming revenue to pay for it (freeing up more money to chase more promising opportunities).

Personally I can’t imagine anything outside of a hybrid approach, if only to maintain power dynamics with suppliers on both sides. Price increases and forced changes can be met with instant redeployments off their services/stack, creating room for more substantive negotiations. When investments come in the form of saving time and money, it’s not hard to get everyone aligned.


True, but for a lot of companies “our servers are on-prem” is not a primary differentiator.

I think we are saying the same thing?

Capex may also require you to take out loans

Which is incredibly difficult in the public sector. Yes, there are various financing instruments available for capital purchases but they're always annoying, slow and complicated. It's much easier to spend 5k per month than 500k outright.

Your numbers don't line up. If you are spending 5k a month in cloud costs and on-prem is 1/3 of cloud, then over a 48-month replacement cycle that's 1/3 × 5k × 48 = 80k. So it's 80k up front vs 5k a month (240k total) over 48 months.

I think the primary reason people over-fixate on the cloud is that they can't do the math. So renting is a hedge.
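As a quick sanity check of those figures (using the comment's assumptions of 5k/month cloud spend, a 1/3 on-prem cost ratio, and a 48-month cycle, not real quotes):

```python
# Rough cloud vs on-prem totals over one hardware replacement cycle.
cloud_monthly = 5_000        # assumed monthly cloud spend
onprem_fraction = 1 / 3      # on-prem assumed to cost ~1/3 of cloud
months = 48                  # typical replacement cycle

cloud_total = cloud_monthly * months             # total cloud spend
onprem_total = cloud_total * onprem_fraction     # equivalent up-front capex

print(f"cloud over {months} months: {cloud_total:,.0f}")
print(f"on-prem up front:         {onprem_total:,.0f}")
```

Which is the 80k-vs-240k gap the comment describes, before accounting for power, space, and staff time on the on-prem side.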


You hit the nail on the head regarding the math. Most teams treat cloud costs as an inevitable tax rather than an engineering variable. As someone with an accounting background turned cloud architect, I see this 'math gap' daily. Usually it's not a cloud vs. on-prem issue but a lack of infrastructure discipline: idle resources and unoptimized NATs burn through that 48-month budget faster than hardware depreciation ever would. I've been using a 'Hardened by Design' framework to cut this waste by 50% without the overhead of moving back to a data center. Efficiency is often just better IaC.

The whole discussion, and the article, are just an instance of an optimization problem. For a crowd that claims to be technical, the fact that the discussion generates so much heat is revealing.

Would love to see people read, write and do more math.


It’s not really about the numbers though.

Even spending 10k recurring can be easier administratively than spending 10k on a one-time purchase that depreciates over a 3-year cycle in some organisations, because you don't have to go into meetings to debate whether it's actually a 2- or 4-year depreciation, or discuss the opportunity costs of locking up capital for 3 years, etc.

Getting things done is mostly a matter of getting through bureaucracy. Projects fail because of getting stuck in approvals far more often than they fail because of going overbudget.
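The depreciation debate above has real money behind it: under straight-line depreciation, the schedule length directly sets the annual P&L hit. A sketch with a hypothetical 10k purchase:

```python
# Straight-line depreciation: annual expense for the same 10k purchase
# under the 2-, 3-, and 4-year schedules people argue about in meetings.
purchase = 10_000

for years in (2, 3, 4):
    annual_expense = purchase / years
    print(f"{years}-year schedule: {annual_expense:,.0f} per year")
```

A 2-year schedule expenses 5,000/year; a 4-year schedule only 2,500/year, which is exactly why the meeting happens.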


> It’s not really about the numbers though.

Of course not.


Well, capex has a multi-year depreciation schedule and has to cover interest costs. So the simplified "opex wins most of the time" is right.

But we are talking about a cost difference of tens of times, maybe a few hundred. At that kind of margin, the cloud doesn't win "most of the time".


If you're interested in this in more detail, check this out:

https://blackwinghq.com/blog/posts/a-touch-of-pwn-part-i/


This is a great read, but note that it's specific to Windows and Dell/Lenovo/Microsoft.

Apple does it different(ly), and I'd argue more securely. Being able to specify the full chain of hardware, firmware, and software always has its advantages.

Apple's fingerprint readers do not perform authentication in the sensor itself -- instead, the data read from the sensor (or derivatives thereof) is compared against a reference stored in the Secure Enclave of the Apple silicon (Ax, Tx, or Mx) in the Mac or iOS device itself.


The problem is that the de facto standard is `.claude`, which is problematic for folks not using Claude.

Your skill then just becomes an .md file containing

> any time you want to search for a skill in `.codex`, search instead in `.claude`

and continue as you were.


I see it as similar to browser user-agents all claiming to be an ancient version of Mozilla or KHTML. We pick whatever works and then move on. It might not be "correct," but as long as our tools know what to do, who cares?

My repos are littered with agent-specific files containing "treat this other file as if it were this one." We're moving so fast on so many fronts, and it seems odd that this is the persistent problem. It doesn't even help lock folks into one agent, so I'm not clear why the industry hasn't standardized on one project-specific file name yet.


I’m not convinced it’s that good because of how the deals are structured. For example, the top deal where I am at the moment is 9 chicken nuggets plus two medium drinks plus two sauces for 1990 HUF. That’s a two-person deal (you don’t need two drinks if you’re on your own), but there are no chips; add a large chips to share at 1270 HUF and your meal costs 3260 HUF. Two four-nugget McMoment deals come to 3060 HUF (small fries, small drink). Are an extra 80ml of coke and half a nugget each worth 200 HUF? Maybe? But it’s definitely not the huge savings it purports to be.

This walkthrough is just an example; open the app yourself and have a look. Most of the deals are just an item or two away from being a thing people would actually order.
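For what it's worth, the arithmetic in that walkthrough checks out (prices as quoted in the comment, in HUF):

```python
# Prices from the comment above, in HUF (may not match the current menu).
nugget_deal = 1990   # 9 nuggets + two medium drinks + two sauces
large_chips = 1270   # added to make the deal a full meal for two
mcmoment = 1530      # one four-nugget McMoment (small fries, small drink)

deal_total = nugget_deal + large_chips   # the "deal" route
alt_total = 2 * mcmoment                 # two plain McMoments instead

print(deal_total, alt_total, deal_total - alt_total)
```

The gap between the two routes is the 200 HUF the comment questions.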


I agree and don't use those deals. The items or sizes are wrong. I'm always offered "20% off purchases of $10 or more", "$2 any size fries", "$0.29 any size soft drink / tea with minimum spend of $3" which I think are pretty decent and always a savings.

If I do eat McDonald's, it's usually just a burger + fries + drink, around $6 unless I'm ordering for others.


Given the author’s domain is .co.uk, and there’s a reference to part of the UK at the bottom, I’d say this is likely aimed at an average Y10/11 (15-16 y.o.). It could perhaps be used with more able kids lower down the school, but I doubt it would be accessible to any under the age of 13.

That’s not very generous. Keeping files in the Recycle Bin is an incorrect use of the Recycle Bin. Keeping conversations in your ChatGPT history is how it’s supposed to be used.

I agree it’s often used to shut down discussion, but most often I’ve seen it when a contributor is losing an argument (their PR isn’t getting merged, or their feature request is rejected, or their bug is marked wont-fix) and they don’t agree.

“Victim blaming” is an odd phrase here. Could you clarify what you mean?


Sure.

As background, when power is misused, you'll often find somebody immediately showing up to explain why it was the fault of the person harmed. In the US, for example, this happens basically any time a cop kills somebody. In analyzing the situation, the agency of the person with power is minimized or ignored; the agency of the person harmed is maximized.

Open-source projects are often run as little fiefdoms. Power is concentrated; checks and balances are minimal or nonexistent. Note that I'm not saying that this is bad or good; that's just how it is.

The "just fork it" style of response that the article is addressing is something I don't think I've ever seen in an issue, but often see here on HN as a response to some complaint about a project. It's not part of a careful analysis of the costs and benefits of forking. There's also little or no attempt to understand who a project's audience and community is, or the value of the complaint in that context. It's a drive-by response to shut down a complaint in a way that treats the complaint as illegitimate, suggesting that person is wrong for wanting something different from what's on offer.

Does that help?


I still don’t understand how someone who wants something different from what’s on offer is “a victim”.

I do agree that “just fork it” is a flippant and pretty unhelpful thing to say, but just because a piece of software is open source, that doesn’t necessarily automatically mean that its development should follow the designs of a committee of its users.

You are absolutely correct that often, open source projects are indeed run with the maintainers exercising absolute control. I think this is where the tension comes in, because sometimes, folks expect that to be different, and approach the project with a sense of entitlement that somehow the project should change to fit their needs.

“Just fork it” is a way of saying “if you need it to fit your needs, feel free to take what we’ve done so far and add what you need, but we aren’t going to”.

The author’s core argument seems to be summarised here: “In social terms, it’s the equivalent of saying: ‘If you don’t like society, go start your own civilisation.’”

It’s not at all equivalent though. It’s more like “I invited everyone round for dinner, and I don’t want my house to smell of fish, so I’m not cooking fish. If you want to cook fish, you can borrow my pans, but invite everyone round to your house instead.”


> Power is concentrated; checks and balances are minimal or nonexistent.

Is power concentrated? What power do maintainers of FOSS projects have over people who would like to use that project? How can they compel people to do what they want as it relates to the project?

> It's a drive-by response to shut down a complaint in a way that treats the complaint as illegitimate, suggesting that person is wrong for wanting something different from what's on offer.

It can't possibly be suggesting that the person is wrong for wanting something different. The drive-by "fork it" comment is saying: if you want something different, then make the different thing exist; no one will be able to stop you from making the thing that you want.

Unless you feel that the person who is complaining is entitled to have other people do what the complainer wants, instead of what the maintainer wants?

On the internet, if you wanted to suggest that someone's complaints or suggestions are illegitimate, you wouldn't say "fork it"; you would say "no, that's stupid, you're stupid, how could you suggest such a dumb, stupid, crazy, insane thing?!", surely followed by a series of extra expletives or angry rage posts.

Or the "just fork it" comment is from a maintainer who has decided that they do not want the suggested changes. In that case, it's still not saying the changes are illegitimate; it's saying that the maintainer objects to them, so they're offering the only remaining way for the complainer to get the changes they want.


That makes it vanishingly unlikely. On a computer with 16GB of RAM, at that rate you can expect 64 random bit flips per month.

So you could expect this to happen roughly once every two hundred million years.

Assuming there are about 2 billion Windows computers in use, that’s about 10 computers a year that experience this bit flip.
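A back-of-the-envelope check of those numbers (using the 64-flips-per-month rate and the assumed 2 billion machines from above):

```python
# Expected wait until one *specific* bit flips, then scale to the fleet.
bits = 16 * 2**30 * 8            # bits in 16 GB of RAM
flips_per_month = 64             # assumed random flip rate from above

months_per_hit = bits / flips_per_month  # expected months for a given bit
years_per_hit = months_per_hit / 12      # ~1.8e8 years (~200 million)

computers = 2_000_000_000                # assumed Windows install base
hits_per_year = computers / years_per_hit

print(f"{years_per_hit:.2e} years per machine, ~{hits_per_year:.0f} machines/year")
```

This comes out to roughly 11 machines a year, in the same ballpark as the "about 10" above.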


> 10 computers a year experience this bit flip

That's wildly more than I would have naively expected to experience a specific bit-flip. Wow!


Scale makes the uncommon common. Remember kids, if she's one in a million that means there are 11 of her in Ohio alone.


~800 bit flips per year per computer. 2 billion computers with 800 bit flips each is 1,600,000,000,000 (one point six trillion) bit flips.

Big numbers are crazy.


I personally saw a computer with 'system33' and 'system34' folders. Also, you would never actually know it happened because... it's not ECC. And with ECC memory, we replace a RAM stick every two or three months explicitly because the ECC error count is too high.


Got any old microwaves with doors that don't quite shut all the way nearby? Or radiation sources?


Nah, office building. And memtest confirmed that it was a faulty RAM stick.

But it was quite amusing to see with my own eyes: the computer mostly worked fine but occasionally would complain: "Can't load library at C:\WINDOWS\system33\somecorewindowslibrary.dll".

I didn't even notice at first; I just thought it was a virus or the consequence of a virus infection, until I caught that '33' thing. I went to check, and there were system32, system33, system34...

So when the computer booted up cold in the morning everything was fine, but at some point, as the temperature rose, the unstable cell in the RAM module started to fluctuate and mutate several bits of the original value. And it looks like it was at quite a low address, which is why the system often and repeatedly used it for the same purpose: either the storage of SystemDirectory for GetSystemDirectory, or the filesystem MFT.

But again, it's the only time I've had factual confirmation of a memory cell failure, and only because it happened in the right place (or the wrong place, in the eyes of the user of that machine). How many times all these errors silently go unnoticed, cause some bit rot, or simply don't affect anything of value (your computer just froze, restarted, or you restarted it yourself because it started to behave erratically) is literally unknown - because it's not ECC memory.
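Incidentally, "system33" is exactly what a single flipped low bit produces: ASCII '2' (0x32) and '3' (0x33) differ only in bit 0 ('4', at 0x34, is two bit flips away). A minimal illustration:

```python
# Flipping the lowest bit of the final character of "system32"
# turns ASCII '2' (0x32) into '3' (0x33).
s = "system32"
corrupted = s[:-1] + chr(ord(s[-1]) ^ 0b1)
print(corrupted)  # system33
```

Which is consistent with a single unstable cell sitting under that byte.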


Depends where in the world they are. Here in Hungary, it’s not uncommon to email your-family-doctor@gmail.com


What does that have to do with vibe-coding?

