Except that, from a management and maintenance perspective... this is a nightmare. When a security vulnerability drops somewhere, everything affected needs to be patched ASAP.
Distros (and the people who run IT orgs of most scales) want to be able to deploy and verify that the fix is in place - and it's a huge advantage if it's a dynamically linked library that you can just deploy an upgrade for.
But if it's tons and tons of monolithic binaries, then the problem goes viral - every single one has to be recompiled, redeployed, etc. And frequently at the cost of asking "is this binary only compatible with this specific revision, or was it just really easy to pin it to that?"
It's worth noting that Docker and friends, while still suffering from this problem, don't quite suffer from it in the same way - they ship entire dynamically linked environments, so while it's not as automatic, being able to simply scan for and replace the library you know is bad is a heck of a lot easier than recompiling a statically linked executable.
People are okay with really specific dependencies when it's part of the business-critical application they're supporting - i.e. the Node.js or Python app that runs the business can do anything it wants, and we'll keep it running no matter what. Having this happen to the underlying distribution, though?
(Of note: I've run into this issue with Go - I love the static deploys, but if someone finds a vulnerability in Go's TLS stack, suddenly we're rushing out rebuilds.)
This is conflating static linking with how the distribution handles updates. If a language always statically links dependencies (as Go and Rust do), the distribution will have to rebuild everything that depends on a patched package, whether it uses the language's native tooling or imports everything into the distro's package system.
What I'm specifically suggesting is:
* Distributions package *binaries*, but not the individual libraries that those binaries depend on.
* Distributions mirror all dependencies, so that you can (in principle) have a completely offline copy of everything that goes into the distribution. Installing a binary uses the language-specific install tools to pull dependencies, targeting the distribution's mirror (a sketch of what that could look like for Cargo follows this list).
* Enough dependency tracking to know what needs to be rebuilt if there's a security update.
* Any outside dependencies (e.g. openssl) will continue to depend on whatever the distribution packages.
* Dependencies are not globally installed, but use whatever isolation facilities the language has (e.g., a venv for Python, whatever npm does)
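To make the mirroring point concrete for Rust: Cargo already has a source-replacement mechanism, so the distribution could ship a config that redirects crates.io to its own mirror. A minimal sketch, assuming the mirror is published as a local registry - the source name and paths here are invented for illustration:

    # hypothetical distro-wide cargo config, e.g. /etc/distro/cargo-config.toml
    [source.crates-io]
    # builds done by the distro's infrastructure resolve crates
    # from the distro mirror instead of going out to crates.io
    replace-with = "distro-mirror"

    [source.distro-mirror]
    # a mirror maintained by the distribution; `local-registry` is one of
    # Cargo's supported replacement source kinds (a vendored `directory`
    # source would also work)
    local-registry = "/srv/distro/crates-mirror"

The rebuild-tracking point then mostly falls out of the lockfiles: every packaged binary has a Cargo.lock pinning exact versions, so the distribution can index those to find each package that needs rebuilding after a security fix.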
As I see it, though, none of this matters as soon as "security update" enters the picture.
The problem here is upstream devs saying "my dependency needs are absolute". A security update ruins that: as soon as one happens, we're going to be replacing libraries no matter what. Even your proposal includes this: we're going to strip out openssl libraries and use the distro's ones.
At which point everything might break anyway, because whether a security hole can be fixed at all depends on which versions of a library it affects and how. Not to mention problems like finding the issue in one version of a library that has changed enough that it's not clear whether a different version is impacted the same way.
Why is this an issue? Simply recompile and re-download each package. If the distro worries that the maintainers would take too long, just fork and recompile the packages themselves. These days it's really not that big of a problem in terms of disk space or network traffic, and if some packages are large it's often because of image resources, which can be packaged separately. It seems like a lot less effort than trying to guess whether a dynamically linked library will work with every package in every case after the update.
It's "whoever turns up to do the work" but I would point out that distros generally have more people in the process who can pick up the work.
The issue is that one way or another it needs to happen ASAP: so either the distro is haranguing upstream to "approve" a change, or they're going in and carrying patches to Cargo.toml anyway - so the idea of "don't you dare touch my dependencies" lasts exactly until you need a fix in ASAP.
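Carrying that kind of patch isn't exotic, either - Cargo's dependency-override mechanism is built for it. A rough sketch of what a distro-carried override could look like, with the crate name and path made up for illustration:

    # added to the package's Cargo.toml by a distro patch
    [patch.crates-io]
    # swap in the distro's patched copy of a (hypothetical) vulnerable
    # crate; the patched version still has to satisfy the version
    # requirements upstream declared
    some-tls-crate = { path = "/usr/src/distro-patches/some-tls-crate" }

Which is rather the point: the moment a fix has to ship, somebody downstream is overriding upstream's pinned dependencies, whether upstream has approved it or not.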
Probably most of these tiny crates have 1 or 0 maintainers. Chances are that they will not be quick to fix a vulnerability.
And even if they are, for rust software that doesn't come from debian, there is no way to ensure it all gets rebuilt and updated with the fix.
Also, projects are generally slow (taking several months) to accept patches. When a distribution has carried the fix, its users notice no issue - but the upstream project, if downloaded and compiled directly, would be a different matter entirely.