Likely a combination of practicality and the importance of airflow throughout the sand in order to heat it and pull heat from it effectively.
Also, water's specific heat capacity is 4.186 J/g°C, while air's is approximately 1.005 J/g°C. It would take much more energy to heat up water than it would to heat up air.
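As a rough back-of-the-envelope check using those figures (a sketch only: 1 kg of each, everything else ignored), warming the same mass of water takes roughly four times the energy of warming air:

```go
package main

import "fmt"

func main() {
	// Specific heat capacities from the figures above, in J/(g·°C).
	const cWater = 4.186
	const cAir = 1.005

	// Energy Q = m * c * ΔT to warm 1 kg (1000 g) of each by 1 °C.
	const mass = 1000.0 // grams
	const deltaT = 1.0  // °C

	qWater := mass * cWater * deltaT
	qAir := mass * cAir * deltaT

	fmt.Printf("water: %.0f J, air: %.0f J, ratio ≈ %.1fx\n", qWater, qAir, qWater/qAir)
	// water: 4186 J, air: 1005 J, ratio ≈ 4.2x
}
```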
Also, water boils at 100°C, and they store heat in the sand at 600°C.
In what way is it NP-hard? From what I can gather, it just eliminates nodes where the pod wouldn't be allowed to run, calculates a score for each remaining node, and then randomly selects one of the best-scoring nodes, so it's trivially parallelizable.
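For what it's worth, here's a minimal sketch of that filter/score/pick flow (the Node and Pod types and the scoring function are made up for illustration, not the real scheduler's code):

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// Hypothetical stand-ins for illustration only; not the real scheduler's types.
type Node struct {
	Name    string
	FreeCPU int
}

type Pod struct{ CPU int }

// filter drops nodes where the pod wouldn't be allowed to run.
func filter(nodes []Node, p Pod) []Node {
	var fit []Node
	for _, n := range nodes {
		if n.FreeCPU >= p.CPU {
			fit = append(fit, n)
		}
	}
	return fit
}

// score rates a single node; each node can be scored independently (and in parallel).
func score(n Node, p Pod) int { return n.FreeCPU - p.CPU }

// pick chooses at random among the best-scoring nodes.
func pick(nodes []Node, p Pod) Node {
	var best []Node
	bestScore := math.MinInt
	for _, n := range nodes {
		switch s := score(n, p); {
		case s > bestScore:
			best, bestScore = []Node{n}, s
		case s == bestScore:
			best = append(best, n)
		}
	}
	return best[rand.Intn(len(best))]
}

func main() {
	nodes := []Node{{"a", 4}, {"b", 8}, {"c", 8}}
	pod := Pod{CPU: 2}
	fmt.Println(pick(filter(nodes, pod), pod).Name) // prints "b" or "c"
}
```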
I'm amused by posts like this because they show that Go is finally, slowly, moving away from being an unergonomically simplistic language (its original USP?) to adopt features a modern language should have had all along.
My experience developing in it always gave me the impression that the designers of the language looked at C and thought "all this is missing is garbage collection and then we'll have the perfect language".
I feel like much of the sense of productivity developers get from writing Go code comes from their sheer LOC output, since they have to hand-write what other languages can do in just a few lines thanks to proper language & standard library features.
Unfortunately, given that the authors were also involved in C's creation, it shows a common pattern, including why C is an insecure language.
> Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.
> Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike's and other people's urging Winterbottom to add garbage collection to the language;
I remember when I first got out of uni and did backend Java development, I thought I was incredibly productive because of the sheer amount of typing and code I had to pump out.
After doing a bit of frontend JS I was quickly disabused of that notion; all I was doing was writing really long boilerplate.
This was in the Java 6 days, before a lot of nice features were added. For example, a simple callback required creating a class that implemented an interface with the method (so 3 unique names and a bunch of boilerplate to type out; you could get away with 2 names if you used an anonymous class).
As a Go developer, I do think that I end up writing more code initially, not just because of the lack of syntactic sugar and "language magic", but because the community philosophy is to prefer a little bit of copying over premature abstraction.
I think the end result is code which is quite easy to understand and maintain, because it is quite plain stuff with a clear control flow at the end of the day. Go code is the most pleasant code to debug of all the languages I've worked with, and there is not a close second.
Given that I spend much more time in the maintenance phase, it's a trade-off I'm quite happy to make.
So for me, the question is: are these two things intrinsically the same or coincidentally the same? If they are intrinsically the same, then an abstraction/centralization of logic is correct. If they are coincidentally the same, then it's better to keep them separate.
It's premature if I don't know the answer to that question with my current information, which is a common scenario for me when I'm initially writing a new set of use cases.
If I get a 3rd copy of a thing, then it's likely going to become an abstraction (and I'll probably have a better understanding of the thing by then to do that abstraction well). If I don't get a 3rd copy of that thing, then it's probably fine for the thing to be copied in 2 places, regardless of what the answer to my question is.
On the other hand, when your abstraction has more configuration options than methods, it is a sign that (years ago) it really wasn't an abstraction at all.
In this specific blog post I suppose only "Ranging Directly over Integers" counts, but I was more generally referring to the introduction of features like generics.
C at least has const pointers. In Go I've seen pointers mutated 7 levels down the call stack. And of course, the rest of the spaghetti depended on those side effects.
C is so limited that you would try to avoid mutation and even complex data structures.
Go is "powerful" enough to let you shoot yourself much harder.
Go with `const` and NonNull<ptr> (call it a reference if you like) would be a much nicer language.
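A contrived sketch of what I mean (the names are invented for illustration): nothing in these signatures tells the caller that the pointed-to value gets mutated a few calls down.

```go
package main

import "fmt"

type Config struct{ Retries int }

// None of these signatures hint that the Config will be modified.
func handleRequest(c *Config) { applyPolicy(c) }
func applyPolicy(c *Config)   { tweak(c) }
func tweak(c *Config)         { c.Retries = 0 } // surprise mutation, three calls down

func main() {
	cfg := &Config{Retries: 3}
	handleRequest(cfg)
	fmt.Println(cfg.Retries) // 0 -- the caller's value was silently changed
}
```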
They are similar in the sense that both offer very few abstractions, relying on the programmer to reimplement common patterns and avoid logical mistakes.
You have to put thought into such things as:
- Did I add explicit checks for all the errors my function calls might return?
- Are all of my resources (e.g. file handles) cleaned up properly in all scenarios? Or did I forget a "defer file.Close()"? (A language like C++ solved this problem with RAII in the 1980s; see the sketch after this list)
- Does my Go channel spaghetti properly implement a worker pool system with the right semaphores and error handling?
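On the second point, a rough sketch of what that manual bookkeeping looks like (countBytes is just an illustrative name): nothing enforces the defer, so omitting it quietly leaks the file handle.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// countBytes opens a file and returns its size by reading it to the end.
func countBytes(path string) (int64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close() // easy to forget; the compiler won't complain if you do

	return io.Copy(io.Discard, f)
}

func main() {
	fmt.Println(countBytes("/etc/hosts"))
}
```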
> Did I add explicit checks for all the errors my function calls might return?
You can easily check this with a linter.
> Are all of my resources (e.g. file handles) cleaned up properly in all scenarios? Or did I forget a "defer file.Close()"? (A language like C++ solved this problem with RAII in the 1980s)
You can forget to use `with` in Python; I guess that's also C now too, eh?
> Does my Go channel spaghetti properly implement a worker pool system with the right semaphores and error handling?
Then stop writing spaghetti and use a higher level abstraction like `x/sync/errgroup.Group`.
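For what it's worth, a bounded worker pool with it is pretty short. A rough sketch (the process function is a made-up stand-in for real work):

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// process stands in for real work on one item.
func process(item int) error {
	fmt.Println("processing", item)
	return nil
}

func main() {
	var g errgroup.Group
	g.SetLimit(4) // run at most 4 workers concurrently

	for item := 1; item <= 10; item++ {
		item := item // capture the loop variable (needed before Go 1.22)
		g.Go(func() error {
			return process(item)
		})
	}

	// Wait blocks until every goroutine returns and reports the first error, if any.
	if err := g.Wait(); err != nil {
		fmt.Println("failed:", err)
	}
}
```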
> Did I add explicit checks for all the errors my function calls might return?
You can check anything with a linter, but it's better when the language prevents you from making the mistake in the first place.
> You can forget to use `with` in Python; I guess that's also C now too, eh?
When using `with` in Python you don't have to think about what exactly needs to be cleaned up, and it'll happen automatically when there is any kind of error. Consider `http.Get` in Go:
```go
resp, err := http.Get(url)
if err == nil {
	resp.Body.Close()
}
return err
```
Here you need to specifically remember to call `resp.Body.Close` and in which case to call it. Needlessly complicated.
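For comparison, even the idiomatic version (sketched below; fetch is an illustrative name) still makes you remember both the defer and the fact that it must come after the error check:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err // resp is nil on error, so there is nothing to close
	}
	defer resp.Body.Close() // must come after the error check

	return io.ReadAll(resp.Body)
}

func main() {
	body, err := fetch("https://example.com")
	fmt.Println(len(body), err)
}
```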
> Then stop writing spaghetti and use a higher level abstraction like `x/sync/errgroup.Group`.
Why is this not part of the standard library? And why does it not implement basic functionality like collecting results?
> The http Client and Transport guarantee that Body is always non-nil, even on responses without a body or responses with a zero-length body. It is the caller's responsibility to close Body.
Calling http.Get() returns an object that represents the response. The response body itself might be multiple terabytes, so http.Get() shouldn't read it for you, but should instead give you a Reader of some sort.
The question then is, when does the Reader get closed? The answer should be "when the caller is done with it". This can't be handled automatically when the resp object goes out of scope, as that would preclude the caller e.g. passing the response to another goroutine for handling, or putting it in an array, or similar.
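A quick sketch of that hand-off case (the names are illustrative): the response outlives the calling function, so only the eventual consumer knows when to close the body.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// startDownload returns immediately; the body is consumed and closed elsewhere.
func startDownload(url string, done chan<- int64) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	go func() {
		defer resp.Body.Close() // closed by the consumer, not when startDownload returns
		n, _ := io.Copy(io.Discard, resp.Body)
		done <- n
	}()
	return nil
}

func main() {
	done := make(chan int64)
	if err := startDownload("https://example.com", done); err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("bytes read:", <-done)
}
```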
Go tooling is more than happy to tell you that there's an io.ReadCloser in one of the structs returned to you, and it can see that you didn't Close() it, store it, or pass it somewhere else before the struct it was in went out of scope.
Go and C have partially shared origins. Ken Thompson, one of Go's three creators, was involved in the early days of C (he is even the creator of B, the predecessor of C), and Rob Pike worked alongside C's creators at Bell Labs. There are obvious huge differences between the languages, but in a more subtle way they're actually quite similar: C is an "unergonomically simplistic language", just as the parent commenter describes Go.
Serverless only makes sense if lifetime doesn't matter to your application, so if you find that you need to think about lifetime, then serverless is simply not the right technology for your use case.
Agree, it seems like they decided to use Cloudflare Workers and then fought them every step of the way instead of going back and evaluating if it actually fit the use case properly.
It reminds me of the companies that start building their application using a NoSQL database and then start building their own implementation of SQL on top of it.
Ironically, I really like Cloudflare but actively dislike Workers and avoid them when possible. R2/KV/D1 are all fantastic, and being able to shard customer data via DOs is huge, but I find myself fighting Workers when I use them for non-trivial cases. Now that Cloudflare has containers I'm pushing people that way.