Hacker News | amw-zero's comments

And to clarify: only writes on the _branch_ lead to a copy being created, right? Writes on the original don’t get propagated to the branch?
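To make the model concrete, here's a minimal copy-on-write sketch in Go. The page granularity and type names are assumptions for illustration, not the product's actual design: the branch snapshots the page table at branch time, so only branch writes copy pages, and post-branch writes to the original install new pages that the snapshot never sees.

    package main

    import "fmt"

    type Store struct {
        pages map[int][]byte
    }

    type Branch struct {
        base    map[int][]byte // page table snapshot taken at branch time
        overlay map[int][]byte // pages copied on first write to the branch
    }

    func NewBranch(s *Store) *Branch {
        // Copy the page *table*, not the pages themselves: cheap, O(pages).
        base := make(map[int][]byte, len(s.pages))
        for k, v := range s.pages {
            base[k] = v
        }
        return &Branch{base: base, overlay: map[int][]byte{}}
    }

    func (b *Branch) Read(page int) []byte {
        if p, ok := b.overlay[page]; ok {
            return p
        }
        return b.base[page]
    }

    // Only branch writes copy a page; the original store is untouched.
    func (b *Branch) Write(page int, data []byte) {
        p := make([]byte, len(data))
        copy(p, data)
        b.overlay[page] = p
    }

    func main() {
        s := &Store{pages: map[int][]byte{0: []byte("v1")}}
        b := NewBranch(s)

        b.Write(0, []byte("branch")) // copies page 0 into the branch only
        s.pages[0] = []byte("v2")    // a post-branch write to the original

        // The branch never sees "v2": it reads through its snapshot.
        fmt.Printf("original=%s branch=%s\n", s.pages[0], b.Read(0))
        // original=v2 branch=branch
    }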


What does this even mean? Because people have locked in their data, they’re ok with downtime? I can’t imagine a world where this is true.


It costs a lot of money to move, you don't know if the alternative will be any better, and if it affects a lot of companies then it's nobody's fault. "Nobody ever got fired for buying Cloudflare/AWS" as they say.


It's just that customers are more understanding when they see their Netflix not working either; otherwise they just think you're less professional. Try talking to customers after an outage and you'll see.


it's not just that, it's the creation of a sorta status symbol, or at least a symbol of normality.

there was a point (maybe still) where not having a Netflix subscription was seen as 'strange'.

if that's the case in your social circles -- and these kinds of social things bother you -- you're not going to cancel the subscription over bad service until cancelling becomes a socially accepted norm.


The “simplicity” of Go is just virtue signaling. It has gotchas like that all over the language, because it’s not actually simple.


Yep.

The lack of features means all the complexity is offloaded onto the programmer, whereas other languages can take some of that complexity burden on themselves.

Go isn't simple, it's basic.
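To make the "gotchas" concrete, the standard example is the typed-nil interface: a nil concrete pointer wrapped in an interface compares as non-nil. A minimal sketch:

    package main

    import "fmt"

    type MyErr struct{}

    func (*MyErr) Error() string { return "boom" }

    func mayFail() error {
        var e *MyErr // nil pointer of a concrete type
        return e     // wrapped in a non-nil interface value
    }

    func main() {
        err := mayFail()
        fmt.Println(err == nil) // false: the interface holds a (*MyErr)(nil)
    }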


Perhaps Go is a nice target language for a transpiler, so you could still benefit from the runtime and ecosystem while fixing the bugs in the language itself. Anyone working on this?



As someone who's written commercial software in well over a dozen different languages for nearly 40 years, I completely disagree.

Go has its warts for sure. But saying the simplicity of Go is "just virtue signaling" is so far beyond ignorant that I can only conclude this opinion of yours is nothing more than the typical pseudo-religious biases that less experienced developers smugly cling to.

Go has one of the easiest toolchains to get started with. There's no tsconfig, virtualenv, or other bullshit to deal with. You don't need a dozen `use` headers just to declare the runtime version, nor do you have to trust your luck with a thousand dependencies that are impossible to realistically audit because nobody bothered to bundle a useful standard library. You don't have multi-page indecipherable template errors, 50 different ways to accomplish the same simple problem, or arguments about what subset of the language is allowed when reviewing pull requests. There's no undefined behaviour, and no subtle incompatibilities between different runtime implementations fragmenting the language.

The problem with Go is that it's boring, and boring is unappealing to developers. But boring is also exactly why it's simple.

So it's not virtue signaling at all. It's not flawless and it's definitely boring. But that doesn't mean it isn't also simple.

Edit: In case anyone accuses me of being a fanboy, I'm not. I much preferred the ALGOL lineage of languages to the B lineage. I definitely don't like a lot of the recent additions to Go, particularly around range iteration. But that's my personal preference.


You are comparing Go to Python, JS, and C++, arguably the three most complex languages to build. (JS isn't actually hard, but there are a lot of seemingly arbitrary decisions that have to be made before you can begin.) There are languages out there that are easy to build, have a reasonable std lib, and don't offload the complexity of the world onto the programmer.


> You are comparing Go to Python, JS, and C++, arguably the three most complex languages to build.

No, I'm comparing to more than a dozen different languages that I've used commercially. And there were direct references there to Perl, Java, Pascal, procedural SQL, and many, many others too.

> There are languages out there that are easy to build, have a reasonable std lib

Sure. And the existence of them doesn't mean Go isn't also simple.

> and don't offload the complexity of the world onto the programmer.

I disagree. Every language makes tradeoffs, and those tradeoffs always end up being complexities that the programmer has to negotiate. This is something I've seen, without exception, in my 40 years as a language agnostic and part-time language designer.


> because it’s not actually simple

Cue Rich Hickey's "Simple Made Easy": https://www.youtube-nocookie.com/embed/SxdOUGdseq4 / https://ghostarchive.org/varchive/SxdOUGdseq4


How often are you writing non-trivial data structures?


I love the euphemistic thinking. “We built something that legitimately doesn’t do the thing that we advertise, but when it doesn’t do it, we shall deem that ‘hallucination.’”


The first topic is "Predicates, Sets, and Proofs."

I use predicates and sets quite often in daily programming.
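For example, a predicate is just a boolean-valued function and a set is often just a map used for membership. A quick Go sketch (the names here are mine, for illustration):

    package main

    import "fmt"

    // A predicate is just a boolean-valued function.
    type Pred[T any] func(T) bool

    // A set represented extensionally, as Go programmers usually do.
    type Set[T comparable] map[T]struct{}

    func (s Set[T]) Contains(x T) bool { _, ok := s[x]; return ok }

    // Filter keeps the elements satisfying a predicate: a set comprehension.
    func Filter[T any](xs []T, p Pred[T]) []T {
        var out []T
        for _, x := range xs {
            if p(x) {
                out = append(out, x)
            }
        }
        return out
    }

    func main() {
        even := func(n int) bool { return n%2 == 0 }
        fmt.Println(Filter([]int{1, 2, 3, 4}, even)) // [2 4]
    }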


You can write proofs along with the course, and since they are machine-checked you can have confidence that they are correct.

If you didn't know, writing a proof in isolation can be difficult, since you may be writing one that isn't actually sound.
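For a flavor of what machine-checked means, here is a deliberately tiny Lean 4 example; `Nat.add_comm` is a lemma from the core library:

    -- A trivial machine-checked proof: commutativity of addition on Nat.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A false claim such as `a + b = a` would simply fail to typecheck,
    -- so an unsound step can't slip through unnoticed.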


Learning math is more about the big ideas. Behind each proof is an insight. Formalizing in a computer is like spell-checking a document: it helps you catch small mistakes but doesn't change the content.

I just think this is a distraction unless your goal is to learn Lean and not math.


Hard disagree.

Errors are found in human proofs all the time. And like everything else, going through the process of formalizing for a machine only increases the clarity and accuracy of what you're doing.


You are correct that mistakes are made all the time - but they tend to be "oh yeah let me fix that right now" mistakes. Or, "oh yeah that's not true in general, but it still works for this case". That's because the experts are thinking about the content of the material, and they are familiar enough with it to tell whether an idea has merit or not. Formalism is just a mode of presentation.

Over-emphasis on formalism leads me to conclude you just don't understand the purpose of proofs. You are primarily interested in formal logic - not math.

I would invite you to read a few pages of famous papers - for example Perelman's paper on the Poincaré Conjecture.


Research points to there being a quadratic relationship between automated proof size and code size: https://trustworthy.systems/publications/nictaabstracts/Mati....

Specifically, the relationship is between the _specification_ and the proof, and it was done for proofs written in Isabelle and not Lean.
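Roughly, the paper's finding can be stated as follows (my paraphrase, with c an empirical constant):

    \text{proof LOC} \;\approx\; c \cdot (\text{spec LOC})^2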

The good news is that more and more automation is possible for proofs, so the effort to produce each proof line will likely go down over time. Still, the largest program we've fully verified is much less than 100,000 LOC. seL4 (a verified operating system) is around 10,000 lines IIRC.


A small note, but seL4 is more of a kernel than a full OS - though in the embedded world for which it is targeted, the difference is pretty fuzzy.


There are 2 main problems in generative testing:

- Input data generation (how do you explore enough of the program's behavior to have confidence that your test is a good proxy for total correctness)

- Correctness statements (how do you express whether or not the program is correct for an arbitrary input)

When you are porting a program, you have a built-in correctness statement: the port should behave exactly as the source program does. This greatly simplifies the testing process.
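Concretely, porting turns testing into a differential test where the old implementation is the oracle. A sketch in Go, with `oldImpl`/`newImpl` as hypothetical stand-ins for the original function and its port:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Stand-ins for the original function and its port.
    func oldImpl(x int) int { return x * 2 }
    func newImpl(x int) int { return x << 1 }

    func main() {
        // The old implementation is the correctness oracle, so the only
        // remaining problem is input generation.
        for i := 0; i < 100000; i++ {
            x := rand.Int()
            if got, want := newImpl(x), oldImpl(x); got != want {
                fmt.Printf("divergence at %d: got %d, want %d\n", x, got, want)
                return
            }
        }
        fmt.Println("no divergence found in 100000 random inputs")
    }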


Several times I've been involved in porting code. Eventually we reach a point where we get a lot of bug reports of the form "didn't work, and didn't work with the old system either", which is to say we ported correctly, but the old system wasn't right either; we just hadn't tested that situation until the new system had the budget for exhaustive testing. (Normally it worked at one point on the old system and got broken in some later update.)


Are you aware of how many allocations the average program performs in the span of a couple of minutes? Where do you propose all of that memory lives in a way that doesn’t prevent the application from running?
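For a rough sense of scale, Go's runtime exposes a cumulative allocation counter; even a toy loop racks up on the order of a million heap allocations (an illustrative sketch, numbers vary by workload):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var before, after runtime.MemStats
        runtime.ReadMemStats(&before)

        // A modest workload: build a million small strings.
        s := make([]string, 0, 1_000_000)
        for i := 0; i < 1_000_000; i++ {
            s = append(s, fmt.Sprint(i))
        }

        runtime.ReadMemStats(&after)
        // Mallocs is the cumulative count of heap objects allocated.
        fmt.Println("heap objects allocated:", after.Mallocs-before.Mallocs)
    }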

