
I’m also not seeing the purported images on iOS 26.1 Safari.


It does not work on iOS Safari at all; the site should show a warning at the top saying so.


I do not miss the days of "This site best viewed with browser X".


Nor on macOS Safari.


It apparently only works correctly on Chromium-based browsers.


I am on Chrome on Android and it does not work, although some of the examples are halfway there.


Appears to mostly work in Firefox, including in the browser tab title, which is funny.


It did not work at all for me on latest Firefox (macOS).


Does not work for me with Chrome, though I guess it's missing some font.


I guess it’s not sufficiently prominent (given that you didn’t see it), but this is discussed in detail in the FAQ section.


> async operations require a library to work for some reason

Rephrased: OCaml is so flexible that async can be implemented as a library, with no special support from the language.

This is the beauty of OCaml (and strongly typed functional languages more broadly)


> This is the beauty of OCaml (and strongly typed functional languages more broadly)

I don't think that's specific to strongly typed functional languages. In eg Rust, async runtimes are provided by third-party crates rather than the standard library.

Though it is still somewhat amusing to me that loops in Haskell are delivered via a third party library, if you ever actually want them. See https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C...

I do agree that it's good language design, if you can deliver what would be core functionality via a library.

Whether you want to integrate that library into the standard library or not is an independent question of culture and convenience.

(Eg Python does quite well with its batteries-included approach, but if they had better dependency management, using third party libraries wouldn't be so bad. It works well in eg Rust and Haskell.)


As the other commenter pointed out, this isn't restricted to strongly-typed functional languages.

Clojure has core.async, which implements "goroutines" without any special support from the language. In fact, the `go` macro[1] is a compiler in disguise: transforming code into SSA form then constructing a state machine to deal with the "yield" points of async code. [2]

core.async runs on both Clojure and ClojureScript (i.e. both JVM and JavaScript). So in some sense, ClojureScript had something like Golang's concurrency well before ES6 was published.

[1] https://github.com/clojure/core.async/blob/master/src/main/c...

[2] https://github.com/clojure/core.async/blob/master/src/main/c...


> something like Golang's concurrency

That's wildly overselling it. Clojure's core.async was completely incapable of the one extremely important innovation that made goroutines powerful: blocking.


Assuming "blocking" refers to parking goroutines, then blocking is possible.

  (let [c (chan)]
    ;; this go block parks forever waiting on c
    (go
      (<! c)))
The Go translation is as follows.

  c := make(chan interface{})
  // creates goroutine that is parked forever
  go func() {
    <-c
  }()


No, blocking refers to calling a function that blocks. core.async can't handle that, because macros are simply not capable of handling it; you need support from the runtime.

Call a function that blocks in Go and the goroutine will park. Do that in Clojure and the whole thing stalls.


Assuming "function that blocks" means "the carrier thread must wait for the function to return" and "the whole thing" means the carrier thread, then core.async doesn't really have this issue as long as e.g. a virtual thread executor is used.

There is a caveat where Java code using `synchronized` will pin a carrier thread, but this has been addressed in recent versions of Java.[1]

[1] https://openjdk.org/jeps/491


The post I was replying to included explicit mention of ClojureScript, where this does not exist. Nor did it exist on Java for most of core.async's existence. And of course, virtual threads are very much "special support from the language"!


> where this does not exist

Because JavaScript runs an event loop on a single thread. It's akin to using GOMAXPROCS=1 or -Dclojure.core.async.pool-size=1. There may be semantic differences depending on the JavaScript engine, but in web browsers about the only functions that can stall the event loop are the blocking dialogs like window.alert. As for Node.js, one would have to intentionally use the synchronous methods (e.g. readFileSync) instead of the defaults, which use non-blocking I/O.

When using core.async in ClojureScript, one could use the <p! macro to "await" a promise. So there is no use for Node.js synchronous methods when using core.async (or using standard JavaScript with async/await).

I would call this "making use of the platform" rather than "special support from the language". The core.async library does not patch or extend the base language, and the base language does not patch or extend the platform it runs on.


Can you elaborate? As far as I'm aware, if you pull from an empty channel it will block until it gets a value.


And if you’re craving even more telecoms history after that (as I was when I read it a few years ago), Arthur C. Clarke’s “How the World Was One” goes into the history of undersea cables and other telecoms technologies: https://en.m.wikipedia.org/wiki/How_the_World_Was_One


Potentially also of interest is this [0] article, which describes the attempts to build a nuclear-powered boring machine that worked by melting rather than cutting the rock.

[0] http://atomic-skies.blogspot.com/2012/07/those-magnificent-m...


It focuses more on the historical side of undersea cables, but I can’t pass up the opportunity to recommend Arthur C. Clarke’s “How the World Was One”, which is an incredible book on the history of telecommunications infrastructure, including the history and economics of undersea cables.


If you aren’t paying attention you just might end up trading on a real crash too.


Which I guess is why it has to be so low you don't care.


Can Linux not trivially do the same thing as Windows with LD_PRELOAD? If so, why is this more of an issue on Linux than Windows? Is it really less a technical challenge and more just a matter of Linux getting less support from upstream developers?


LD_PRELOAD is too global to be useful; it's hard to scope it to one process (and not its child processes). macOS is better in the sense that it clears the DYLD_* variables once the dynamic linker has done its work and the process starts. (Although that can also be painful when you want to run a shell script with DYLD_* set from outside.)


You can compile binaries with additional relative library paths into them that will take priority over /usr/lib64.


How? Maybe this should be better documented and recommended. I suppose at some point you're just statically linking with more steps, though for a problem like this it might be worth it.


See the ELF rpath, which can be set by the linker and modified after the fact using patchelf.



The other comments have already covered the how, but I'd like to add that this mechanism is used extensively in Nix [1].

[1] https://nixos.org/


$ORIGIN has been pretty well known and documented for a very long time.


You can set it in the environment for a single process.
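For example (the library path below is hypothetical; the loader will just warn and ignore it if it doesn't exist): prefixing the variable to a single command scopes it to that process and its children, without touching your shell.

```shell
# Applies only to this invocation (and anything it spawns):
LD_PRELOAD=/tmp/libdemo.so env | grep '^LD_PRELOAD='

# Back in the parent shell, the variable was never set:
[ -z "$LD_PRELOAD" ] && echo "parent shell unaffected"
```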


I was thinking/wondering this myself. Not to reinvent the wheel - more to toss an idea around, but a 'venv for LD_PRELOAD' sounds like it'd deal with this pretty handily.

Not... in a way I'd use as a distribution/release maintainer. Probably as an administrator [of my LAN]


Such things already exist, e.g. AppImage or even Docker.


And even that has managed to get split between Snap, AppImage, and Flatpak :D

(Sorry, not meant to offend; I'm a long-time day-to-day Linux user, but it was just ironic to me to point out the fragmentation of the fragmentation ^^)


Right, but I don't really want to get into a distribution model - the hack suits me fine :)

More an exercise in curiosity than anything

Flatpak (or Snap, ew) probably deals with it fine today, Steam's there


That's Nix with extra steps.


I specifically said I'm not really trying to solve this, lol. More toying with the LD_PRELOAD aspect than anything.

Nix is neat, and I don't think I've used it enough to be too critical - but in some ways it feels like 'extra steps'

I wanted to make a 'reproducible' installation (ala kickstart, not strictly binary)... but it felt very much like distribution work; declaring dependencies and the like


Oh, nix is an extra mile. A lot could be improved, but that's what I'm using to deal with dependencies.


Gotcha, I don't feel like I'm floundering so much now!

I plan to spend more time with it, I see a lot of merit

The amount of control is great, but the docs could use some work. For my simple goals (install Sway, Ansible, some other things) it was a broadsword when I needed a butter knife.


What sold me on nix is home-manager and flakes: I can easily bootstrap my environment anywhere nix is available.


There are tools which rewrite the library paths embedded in a binary, e.g. chrpath.


It can be done by setting the rpath to $ORIGIN, even post-compilation using the patchelf tool. Works great with C shared libraries. Perhaps ABI issues with C++ shared libs introduce other problems.


With the caveat that rpath != runpath: both are colloquially called "rpath", and which one you get depends on your linker and whether you also pass -Wl,--disable-new-dtags.

Runpath is the default, and it's also the one that is non-transitive and overridden by environment variables.


Yes, Linux _can_; the machinery is there, but culturally the common distros do not, and neither do the defaults. On Windows I can literally drop a DLL next to an executable and it will be picked up. On Linux I have to write a wrapper script to set LD_PRELOAD, or mutate the binary's rpath to get it to load.

It's not really a question of capability, but a question of culture and defaults that makes Linux hard to support.

Debian, for example, takes (or at least used to take) great pains to unbundle shared libraries such as OpenSSL from projects like Chromium.


This sounds like it's an interaction with the GPU driver, though, which could also happen on Windows...


Context for anyone who like me was desperately lost: https://superliminal.com/cube/2x2x2x2/


As the person who made that video (I just discovered this post through YouTube analytics), I can give more information if you would like.


The HTTPS certificate “expiration” date is basically just a “fall back to treating this website as HTTP” date. The site is still perfectly accessible, and arguably still more secure than an HTTP-only site; you just have to click the scary button saying you know what you’re doing and proceed to the site as though it were compromised, which isn’t a big deal for the static pages you’re describing.


Besides the certificate expiration date, there are also expiration dates in the protocol itself, as newer clients/servers will refuse to use older SSL/TLS versions or ciphers.

But even with "just" certificate expiration, the user experience is not even close to "fall back to HTTP". With HSTS, browsers won't give you the choice to override the certificate check at all.

Then there is the fact that the move from HTTP to HTTPS changes all URLs. If only we had had StartTLS for HTTP - and no, there is no security issue with StartTLS, as you will need something like HSTS preloading anyway if you actually want to guarantee security.

Lack of backwards compatibility is absolutely a concern that the security community seems to care little about.

