I do agree that it's good language design if you can deliver what would otherwise be core functionality via a library.
Whether you want to integrate that library into the standard library or not is an independent question of culture and convenience.
(E.g. Python does quite well with its batteries-included approach, but if it had better dependency management, using third-party libraries wouldn't be so bad. That model works well in e.g. Rust and Haskell.)
As the other commenter pointed out, this isn't restricted to strongly-typed functional languages.
Clojure has core.async, which implements "goroutines" without any special support from the language. In fact, the `go` macro[1] is a compiler in disguise: it transforms code into SSA form and then constructs a state machine to deal with the "yield" points of async code. [2]
core.async runs on both Clojure and ClojureScript (i.e. both JVM and JavaScript). So in some sense, ClojureScript had something like Golang's concurrency well before ES6 was published.
That's wildly overselling it. Clojure's core.async was completely incapable of the one extremely important innovation that made goroutines powerful: blocking.
No, blocking refers to calling a function that blocks. Core.async can't handle that because macros are simply not capable of handling it; you need support from the runtime.
Call a function that blocks in Go, and the goroutine will park. Do that in Clojure, and the whole thing stalls.
Assuming "function that blocks" means "the carrier thread must wait for the function to return" and "the whole thing" means the carrier thread, then core.async doesn't really have this issue as long as e.g. a virtual thread executor is used.
There is a caveat where Java code using `synchronized` will pin a carrier thread, but this has been addressed in recent versions of Java.[1]
The post I was replying to included explicit mention of ClojureScript, where this does not exist. As it did not for Java for most of core.async's existence. And of course, for virtual threads, that's very much "special support from the language"!
Because JavaScript runs an event loop on a single thread. It's akin to using GOMAXPROCS=1 or -Dclojure.core.async.pool-size=1. There may be semantic differences depending on the JavaScript engine, but in the case of web browsers, the only function that could possibly stall the event loop is window.alert. As for Node.js, one would have to intentionally use synchronous methods (e.g. readFileSync) instead of the default methods, which use non-blocking I/O.
When using core.async in ClojureScript, one can use the <p! macro to "await" a promise. So there is no need for Node.js's synchronous methods when using core.async (or when using standard JavaScript with async/await).
I would call this "making use of the platform" rather than "special support from the language". The core.async library does not patch or extend the base language, and the base language does not patch or extend the platform it runs on.
And if you’re craving even more telecoms history after that (as I was when I read it a few years ago), Arthur C. Clarke’s “How the World Was One” goes into the history of undersea cables and other telecoms technologies: https://en.m.wikipedia.org/wiki/How_the_World_Was_One
Potentially also of interest is this [0] article which describes the attempts to build a nuclear powered boring machine that worked by melting rather than cutting the rock.
Focuses more on the historical side of undersea cables, but I can’t pass up the opportunity to recommend Arthur C Clarke’s “How the World was One” which is an incredible book on the history of telecommunications infrastructure, including the history and economics of undersea cables.
Can Linux not trivially do the same thing as Windows with LD_PRELOAD? If so, why is this more of an issue on Linux than on Windows? Is it really less a technical challenge and more a matter of Linux getting less support from upstream developers?
LD_PRELOAD is too global to be useful; it's hard to scope it to one process (and not its child processes). macOS is better in the sense that it clears the DYLD_* variables once the dynamic linker has done its work and the process starts. (Although that can also be painful when you want to run a shell script and set DYLD_* outside.)
How? Maybe this should be better documented & recommended. I suppose at some point you're just statically linking with more steps - though for a problem like this it might be worth it.
I was thinking/wondering this myself. Not to reinvent the wheel - more to toss an idea around - but a 'venv for LD_PRELOAD' sounds like it'd deal with this pretty handily.
Not... in a way I'd use as a distribution/release maintainer. Probably as an administrator [of my LAN]
I specifically said I'm not really trying to solution this, lol. More toying with the LD_PRELOAD aspect than anything
Nix is neat, and I don't think I've used it enough to be too critical - but in some ways it feels like 'extra steps'
I wanted to make a 'reproducible' installation (à la kickstart, not strictly binary)... but it felt very much like distribution work: declaring dependencies and the like.
I plan to spend more time with it, I see a lot of merit
The amount of control is great, but the docs could use some work. For my simple goals (install Sway, Ansible, some other things) it was a broadsword when I needed a butter knife.
It can be done by setting the rpath to $ORIGIN, even post-compilation using the patchelf tool. Works great with C shared libraries. Perhaps ABI issues with C++ shared libs introduce other problems.
With the caveat that rpath != runpath: both are colloquially called "rpath", and which one you get depends on your linker and on whether you also pass -Wl,--disable-new-dtags.
Runpath is the default, and also the one that is non-transitive and overridden by environment variables.
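For reference, a sketch of the patchelf approach described above. The binary name (myapp) and library (libbar.so) are hypothetical placeholders; the single quotes keep the shell from expanding $ORIGIN, which the dynamic linker resolves to the binary's own directory at load time.

```shell
# Post-compilation: point the binary's rpath at its own directory,
# so shared libraries dropped next to it are found first.
# ("myapp" is a hypothetical binary; requires the patchelf tool.)
patchelf --set-rpath '$ORIGIN' myapp
patchelf --print-rpath myapp    # shows: $ORIGIN

# Roughly equivalent at link time. Note that --disable-new-dtags forces
# the older, transitive DT_RPATH instead of the default DT_RUNPATH:
# gcc main.c -o myapp -L. -lbar -Wl,-rpath,'$ORIGIN' -Wl,--disable-new-dtags
```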
Yes, Linux _can_; the machinery is there, but culturally the common distros do not, and the defaults do not. On Windows I can literally drop a DLL next to an executable and it will pick it up. On Linux I have to write a wrapper script to set LD_PRELOAD, or mutate the binary's rpath to get it to load.
It's not really a question of capability, but a question of culture and defaults, that makes Linux hard to support.
Debian, for example, goes to great pains (or used to, at least) to unbundle shared libraries such as OpenSSL from projects like Chromium.
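The wrapper-script workaround mentioned above can be sketched like this. All names (myapp, myapp.real, libfoo.so) are hypothetical; a shell stub stands in for the real binary so the sketch is self-contained, and libfoo.so is just an empty placeholder, so the dynamic linker will warn on stderr that it can't actually be preloaded while the program still runs.

```shell
# The wrapper-script pattern: scope LD_PRELOAD to one invocation by
# setting it only for the exec'd child (it still propagates to any
# child processes that child spawns).
demo=/tmp/ldpreload-demo
mkdir -p "$demo" && cd "$demo"

# Stub standing in for the real binary: it just prints the variable.
printf '#!/bin/sh\necho "LD_PRELOAD=$LD_PRELOAD"\n' > myapp.real
chmod +x myapp.real
touch libfoo.so    # placeholder for the real shared object

cat > myapp <<'EOF'
#!/bin/sh
# Wrapper: preload the library sitting next to this script, then exec
# the real binary, passing arguments through unchanged.
here="$(cd "$(dirname "$0")" && pwd)"
LD_PRELOAD="$here/libfoo.so" exec "$here/myapp.real" "$@"
EOF
chmod +x myapp

./myapp 2>/dev/null    # prints: LD_PRELOAD=/tmp/ldpreload-demo/libfoo.so
```

On Windows the same effect falls out of the default DLL search order; on Linux this wrapper (or a patched rpath) is the usual stand-in.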
The HTTPS certificate “expiration” date is basically just a “fall back to treating this website as HTTP” date. The site is still perfectly accessible, and arguably still more secure than an HTTP-only site; you just have to click the scary button saying you know what you’re doing and proceed to the website, treating it as though it were compromised. That isn’t a big deal for the static pages you’re describing.
Besides the certificate expiration date, there are also expiration dates in the protocol itself, as newer clients/servers will refuse to use older SSL/TLS versions or ciphers.
But even with "just" certificate expiration, the user experience is not even close to "fall back to HTTP". With HSTS, browsers won't even give you the choice to override the certificate check at all.
Then there is the fact that the move from HTTP to HTTPS changes all URLs. If only we had had StartTLS for HTTP - and no, there is no security issue with StartTLS, as you would need something like HSTS preloading anyway if you actually want to guarantee security.
Lack of backwards compatibility is absolutely a concern that the security community seems to care little about.