Yep. I feel like there’s a weird war between static and dynamic linking. Distro maintainers love dynamic linking because they can upgrade one dynamic library to fix a security issue, rather than pushing security patches to everything it depends on. Software houses love static linking because they (we) want to know exactly what version of every dependency our software uses when it runs. Docker forces an entire userland to be statically distributed - but everything it does should be possible using static linking + some tooling to easily ship a binary.

Playing this turf war out through tooling seems silly. Linux should support both without needing to rely on Docker and friends.
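
For concreteness, here's roughly what the two modes look like with a made-up libfoo (the library name, paths, and commands are illustrative, not from any real distro):

    /*
     * Build the same program both ways (libfoo is hypothetical):
     *
     *   cc main.c -lfoo -o app               # dynamic: libfoo.so resolved at run time
     *   cc main.c /usr/lib/libfoo.a -o app   # static: libfoo's code copied into app
     *
     * Patch libfoo.so and the dynamic binary picks it up on the next run;
     * the static binary has to be rebuilt and reshipped.
     */
    int foo_do_work(void);   /* provided by libfoo (hypothetical) */

    int main(void) {
        return foo_do_work();
    }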



Well, except those groups don't particularly need dynamic or static linking, respectively, either.

Distro maintainers definitely loved dynamic linking 25 years ago, when they relied on a machine under someone's desk at e.g. Pixar for infrastructure and on the kindness of random universities for file hosting. That isn't really as true of distro maintainers today. You can update a library to fix a security issue and let a build farm handle the rebuild pretty cheaply (either using $your_favorite_public_cloud, or using a handful of relatively cheap physical servers if you prefer that). You can then upload the results to a CDN for free or cheap, which can get the modified binaries to your users quickly without you being, as they used to say, "slashdotted." (There's also better technical tooling for things like binary diffs these days.) Storage is also much cheaper, so you can save the intermediate build products if you want - generally you just need to rebuild one .o file and one .a file in a library, and you can continue all the downstream builds from the linking step onwards without recompiling anything else.
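
As a sketch of that "rebuild one .o, one .a, relink" flow, take a hypothetical libfoo whose patched source file is fix.c (all names and flags here are illustrative):

    /* fix.c - the only source file touched by the security patch (hypothetical) */
    int foo_checked_len(const char *s, int max) {
        /* the fix: stop trusting a caller-supplied length */
        int n = 0;
        while (n < max && s[n] != '\0')
            n++;
        return n;
    }

    /*
     * The build farm only has to redo:
     *
     *   cc -c -O2 fix.c -o fix.o     # recompile just the patched file
     *   ar r libfoo.a fix.o          # replace the stale member in the archive
     *   cc app.o -L. -lfoo -o app    # relink downstream binaries from their saved .o files
     *
     * Every other object file, in the library and in the applications,
     * is reused from the saved intermediate build products.
     */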

Software houses, meanwhile, are quite capable of shipping trees of compiled binaries. You can rebuild a bunch of shared libraries if you want and ship them all together (Facebook, for instance, has a wonderfully terrifying system where they deploy a squashfs image and use a FUSE filesystem to mount it: https://engineering.fb.com/2018/07/13/data-infrastructure/xa...). You don't actually need to statically link things to avoid OS dependencies; you've got $ORIGIN-relative rpaths and other means of finding dependencies relative to the binary. Or you can use a system like Nix, where each version of a dependency has its own unique path, so you can co-install all sorts of things "systemwide" without them actually conflicting with each other, and the final binary is built against the paths of the exact dependencies you told it to use.
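
The $ORIGIN trick, roughly (the myapp/ layout and libfoo are made up): ship the binary and its libraries in one tree, and bake a relative rpath into the binary so the loader finds them wherever the tree lands:

    /*
     * main.c, linked against a bundled libfoo (hypothetical layout):
     *
     *   myapp/bin/app
     *   myapp/lib/libfoo.so
     *
     *   cc main.c -Lmyapp/lib -lfoo -Wl,-rpath,'$ORIGIN/../lib' -o myapp/bin/app
     *
     * At run time the loader expands $ORIGIN to the directory holding the
     * binary, so myapp/ can be untarred anywhere and libfoo.so is still found,
     * without touching the host's system copy of the library.
     */
    #include <stdio.h>

    int foo_version(void);   /* defined in the bundled libfoo.so (hypothetical) */

    int main(void) {
        printf("libfoo version: %d\n", foo_version());
        return 0;
    }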

To be clear, I agree that this is a silly tooling turf war - I just want to encourage people to explore more tooling options and not feel like they're constrained by which "side" they fall on. There are a lot of options out there!


Distro builders and users also love dynamic linking because it saves so many resources. Losing that is one of the things that really bothers me about containerised desktop apps like snap.


Yep, that is really only for cases where there is no maintainer for the package and you still need the software. I hope it does not become a reason for maintainers to throw in the towel on packages.



