It is unclear to me what the author's point is. It seems to center on the example of DPDK being difficult to link (and it is a bear, I've done it recently).

But it's full of strawmen and falsehoods, the most notable being the claims about the deficiencies of pkg-config. pkg-config works great; it is just very rarely produced correctly by CMake.

I have tooling and a growing set of libraries that I'll probably open source at some point for producing correct pkg-config from packages that only do lazy CMake. It's glorious. Want abseil? -labsl.
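
For the curious, a minimal sketch of what such a hand-rolled unified `.pc` file could look like; the prefix, version, and combined `libabsl.a` archive are illustrative assumptions about that tooling, not something abseil ships today:

    # absl.pc -- hypothetical, produced by tooling like the above
    prefix=/usr/local
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: absl
    Description: Abseil C++ libraries combined into one static archive (illustrative)
    Version: 20240116
    Cflags: -I${includedir}
    Libs: -L${libdir} -labsl

With that on `PKG_CONFIG_PATH`, `pkg-config --cflags --libs absl` hands any build system exactly the `-labsl` above.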

Static libraries have lots of game-changing advantages, but performance, security, and portability are the biggest ones.

People with the will and/or resources (FAANGs, HFT) would laugh in your face if you proposed DLL hell as standard operating procedure. That shit is for the plebs.

It's like symbol stripping: do you think maintainers trip an assert and see a wall of inscrutable hex? They do not.

Vendors like things good for vendors. They market these things as being good for users.



> Static libraries have lots of game-changing advantages, but performance, security, and portability are the biggest ones.

No idea how you come to that conclusion, as they are definitively no more secure than shared libraries. Rather the opposite is true, given that you (as end user) are usually able to replace a shared library with a newer version, in order to fix security issues. Better portability is also questionable, but I guess it depends on your definition of portable.


I think from a security point of view, if a program is linked to its library dynamically, a malicious actor could replace the original library without the user noticing, by just setting LD_LIBRARY_PATH to point to the malicious library. That wouldn't be possible with a program that is statically linked.
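
A minimal sketch of that attack, assuming a hypothetical non-setuid `./app` dynamically linked against a `libgreet.so` that exports `greet()` (the loader ignores `LD_LIBRARY_PATH` for setuid binaries, and an embedded RPATH can change the search order):

    $ cat > evil.c <<'EOF'
    #include <stdio.h>
    /* same symbol name and library name as the real thing */
    void greet(void) { puts("not the library you linked against"); }
    EOF
    $ gcc -shared -fPIC -o libgreet.so evil.c
    $ LD_LIBRARY_PATH=$PWD ./app   # loader picks up ./libgreet.so first

A statically linked `./app` carries its own copy of `greet()` inside the executable, so there is nothing for the loader to swap out at run time.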


And unless you're in one of those happy jurisdictions where digital rights are respected, that malicious threat actor could range from a mundane cyber criminal to an advanced persistent threat, and that advanced persistent threat could trivially be your own government. Witness: the only part of `glibc` that really throws a fit if you yank its ability to get silently replaced via `soname` is DNS resolution.


You act as though the sales pitch for dynamically loaded shared libs is the whole story.

Obviously everything has some reason it was ever invented, and so there is a reason dynamic linking was invented too, and so congratulations, you have recited that reason.

A trivial and immediate counter example though is that a hacker is able to replace your awesome updated library just as easily with their own holed one, because it is loaded on the fly at run-time and the loading mechanism has lots of configurability and lots of attack surface. It actually enables attacks that wouldn't otherwise exist.

And a self contained object is inherently more portable than one with dependencies that might be either missing or incorrect at run time.

There is no simple single best idea for anything. There are various ideas with their various advantages and disadvantages, and you use whichever best serves your priorities of the moment. The advantages of dynamic libs and the advantages of static both exist, and sometimes you want one and sometimes you want the other.


A hacker can easily replace your shared library with their own malicious version or intercept calls into one as needed. As the number of distinct binary blobs for an application increases, the attack surface increases, making security a nightmare. Every piece also needs to be individually signed and authenticated, adding more complexity to the application deployment.

As the GP mentioned, static libraries have a lot of advantages: there is only one binary to sign, authenticate, and lock down, and one public interface to test and prove. The idea is extended into the "Unikernel" approach, where even the OS becomes part of the single binary, which is then deployed to bare metal (embedded systems) or a hypervisor.


Knowing what code runs when I invoke an executable or grant it permissions is a fucking prerequisite for any kind of fucking security.

Portability is to any fucking kernel in a decade at the ABI level. You don't sound stupid, which means you're being dishonest. Take it somewhere else before this gets old-school Linus.

I have no fucking patience when it comes to either Drepper and his goons or the useful idiots parroting that tripe at the expense of less technical people.

edit: I don't like losing my temper anywhere, especially in a community where I go way back. I'd like to clarify that I see this very much in terms of people with power (technical sophistication) and their relationship to people who are more vulnerable (those lacking that sophistication) in matters of extremely high stakes. The stakes at the low end are the cost and availability of computing. The high end is as much oppressive regime warrantless wiretap Gestapo shit as you want to think about.

Hackers have a responsibility to those less technical.


pkg-config works great in limited scenarios. If you try to do anything more complex, you'll probably run into issues that require modifying the .pc files supplied by your vendor.

There's a new standard being developed by some industry experts that aims to address this, called CPS. You can read the documentation on the website: https://cps-org.github.io/cps/ . There's a section with examples of what they are trying to fix and how.


`pkg-config` works great in just about any standard scenario: it puts flags on a compile and link line that have been understood by every C compiler and linker since the 1970s.
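
For anyone who hasn't looked at the output: it really is just flags (shown here for `liburing`, which ships a `liburing.pc`; the exact flags and paths depend on your install prefix):

    $ pkg-config --cflags --libs liburing
    -I/usr/include -luring
    $ cc -O2 -o app app.c $(pkg-config --cflags --libs liburing)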

Here's Bazel consuming it with zero problems, and if you have a nastier problem than a low-latency network system calling `liburing` on specific versions of the kernel built with Bazel? Stop playing.

The last thing we need is another failed standard further balkanizing an ecosystem that has worked fine, if used correctly, for 40+ years. I don't know what "industry expert" means, but I've done polyglot distributed builds at FAANG scale for a living, so my appeal to authority is as good as anyone's, and I say `pkg-config` as a base for the vast majority of use cases, with some special path for, like, compiling `nginx` with its zany extension mechanism, is just fine.

https://gist.github.com/b7r6/316d18949ad508e15243ed4aa98c80d...


Have you read the rationale for CPS? It gives clear examples of why pkg-config doesn't work: you need to parse the files and then parse all the compiler and linker arguments in order to understand how to properly consume them.

What do you do if you use a compiler or linker that doesn't accept the same command line parameters as those written in the .pc file? What do you do when different packages you depend on have conflicting options, for example ones requiring different C or C++ language versions?

It's fine in a limited and closed environment, but it does not work for proper distribution, and your Bazel rules prove it, since they clearly do not work in all environments. They do not work with MSVC-style flags, or handle include files (hh, hxx, ...) well. Not saying it can't be fixed, but that's just a very limited integration, which proves the point of having a better format for tool consumption.

And you're not the only one around who has worked at a FAANG company and dealt with large and complex build graphs. But for the most part, FAANGs don't care about consuming pkg-config files; most will just rewrite the build files for Blaze / Bazel (or Buck2, from what I've heard). Very few people want to consume binary archives, as you can't rebuild with the flavor-of-the-week toolchain and use new compiler optimizations, proper LTO, etc.


Yeah, I read this:

"Although pkg-config was a huge step forward in comparison to the chaos that had reigned previously, it retains a number of limitations. For one, it targets UNIX-like platforms and is somewhat reliant on the Filesystem Hierarchy Standard. Also, it was created at a time when autotools reigned supreme and, more particularly, when it could reasonably be assumed that everyone was using the same compiler and linker. It handles everything by direct specification of compile flags, which breaks down when multiple compilers with incompatible front-ends come into play and/or in the face of “superseded” features. (For instance, given a project consuming packages “A” and “B”, requiring C++14 and C++11, respectively, pkg-config requires the build tool to translate compile flags back into features in order to know that the consumer should not be built with -std=c++14 ... -std=c++11.)

Specification of link libraries via a combination of -L and -l flags is a problem, as it fails to ensure that consumers find the intended libraries. Not providing a full path to the library also places more work on the build tool (which must attempt to deduce full paths from the link flags) to compute appropriate dependencies in order to re-link targets when their link libraries have changed.

Last, pkg-config is not an ideal solution for large projects consisting of multiple components, as each component needs its own .pc file."

So going down the list:

- FHS assumptions: false, I'm doing this on NixOS and you won't find a more FHS-hostile environment

- autotools era: awesome, software was better then

- breaks with multiple independent compiler frontends that don't treat e.g. `-isystem` in a reasonable way? you can have more than one `.pc` file, people do it all the time, also, what compilers are we talking about here? mingw gcc from 20 years ago?

- `-std=c++11` vs. `-std=c++14`? just about every project big enough to have a GitHub repository has dramatically bigger problems than what amounts to a backwards-compatible point release from a decade ago. we had a `cc` monoculture for a long time, then we had diversity for a while, and it's back to just a couple of compilers that try really hard to understand one another's flags. speaking for myself? in 2025 i think it's good that `gcc` and `clang` are fairly interchangeable.

So yeah, if this was billed as `pkg-config` extensions for embedded, or `pkg-config` extensions for MSVC, sure. But people doing non-gcc, non-clang-compatible builds already know they're doing something different; that's the price you pay.

This is the impossible perfect being the enemy of the realistic great, with a healthy dose of "industry expertise". Settle on some conventions on top of `pkg-config` instead.

The alternative to sensible builds with the working tools we have isn't this catching on; it won't. The alternative is CMake jank in 2035, just like 2015, just like now.

edit: brought to us by KitWare, yeah fuck that. KitWare is why we're in this fucking mess.


If someone needs a wrapper for a technology that modifies the output it provides (like Meson and Bazel do), maybe there is an issue with said technology.

If pkg-config was never meant to be consumed directly, and was always meant to be post-processed, then we are missing this post-processing tool. Reinventing it in every compilation technology again and again is suboptimal, and at least Make and CMake do not have this post-processing support.
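
For what it's worth, that post-processing tool can be as small as a filter over pkg-config's output; here is a hypothetical (and deliberately naive) sketch that rewrites GNU-style flags into MSVC-style ones:

    $ pkg-config --cflags --libs zlib \
        | sed -e 's|-I\([^ ]*\)|/I\1|g' \
              -e 's|-L\([^ ]*\)|/LIBPATH:\1|g' \
              -e 's|-l\([^ ]*\)|\1.lib|g'

Whether something like that lives in every build system or in one shared tool is exactly the gap being described.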


This was the point of posting the trivial little `pkg.bzl` above: Bazel doesn't need to do all this crazy stuff in `rules_cc` and `rules_foreign_cc`; those are giant piles of internal turf wars within the Blaze team that have spilled onto GitHub.

The reason we can't have nice things is that nice things are simple and cheap and there's no money or prestige in them. `zig cc` demonstrates the same thing.

That setup:

1. mega-force / sed / patch / violate any build that won't produce compatible / standard archives: https://gist.github.com/b7r6/16f2618e11a6060efcfbb1dbc591e96...

2. build sane pkg-config from CMake vomit: https://gist.github.com/b7r6/267b4401e613de6e1dc479d01e795c7...

3. profit

delivers portable (trivially packaged up as a `.deb` or anything you want), secure (no heartbleed 0x1e in `libressl`), fast (no GOT games or other performance seppuku) builds. These are zero point zero waste: fully artifact-cached at the library level, fully action-cached at the source level, fully composable, supporting cross-compilation and any standard compiler.
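
On the consumption side, a hypothetical package `foo` with a correct `.pc` file (including `Libs.private` for its own dependencies) statically links with nothing more exotic than this, assuming a static-friendly libc such as musl for the fully static case:

    $ cc -static -o app app.c $(pkg-config --static --cflags --libs foo)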

I do this in real life. It's a few hundred lines of nix and bash. I'm able to do this because I worked on Buck and shit, and I've dealt with Bazel and CMake for years, so I know that the stuff is broken by design; there is no good reason and no plan beyond selling consulting.

This complexity theatre sells shit. It sure as hell doesn't stop security problems or Docker brainrot or keep cloud bills down.


The only exception to this general rule (which, to be clear, I agree with) is when your code, for whatever reason, links to LGPL-licensed code. A project I'm a major contributor to does this (we have no choice but to use these libraries, due to the requirements we have, though we do it via implib.so (well, okay, the plan is to do that)), and so dynamic linking/DLL hell is the only path we are able to take. If we link statically to the libraries, the LGPL pretty much becomes the GPL.


Sure, there are use cases. Extensions to e.g. Python are a perfectly reasonable use case for `dlopen` (hooking DNS on all modern Linux is...probably not for our benefit).

There are use cases for dynamic linking. It's just user-hostile as a mandatory default for a bunch of boring and banal reasons: KitWare doesn't want `pkg-config` to work, because who would use CMake if they had straightforward alternatives? The Docker-industrial complex has no reason to exist in a world where Linus has been holding the line on ABI compatibility for 30 years.

Dynamic linking is fine as an option, I think it's very reasonable to ship a `.so` alongside `.a` and other artifacts.

Forcing it on everyone by keeping `pkg-config` and `musl` broken is a more costly own goal for computing than Tony Hoare's famous billion-dollar mistake.


Couldn't agree more with you. The whole reason Docker exists is to avoid having to deal with dynamic libraries: we package the whole userland and ship it just to avoid dealing with different dynamic link libraries across systems.


Right, the popularity of Docker is proof of what users want.

The implementation of Docker is proof of how much money you're expected to pay Bezos to run anything in 2025.


Yes, DLL hell is the issue with dynamic linking -- how many versions of a given library are required for the various apps you want to install? -- and then you want to upgrade something and it requires yet another version of some library. There is really no perfect solution to all this.


You reconcile a library set for your application. That's happening whether you realize it or not, and whether you want to or not.

The question is, do you want it to happen under your control in an organized way that produces fast, secure, portable artifacts, or do you want it to happen in some random way controlled by other people at some later date that will probably break or be insecure or both.

There's an analogy here to systems like `pip` and systems with solvers in them like `uv`: yeah, sometimes you can call `pip` repeatedly and get something that runs in that directory on that day. And neat, if you only have to run it once, fine.

But if you ship that, you're externalizing the costs to someone else, which is a dick move. `uv` tells you on the spot that there's no solution, and so you have to bump a version bound to get a "works here and everywhere and pretty much forever" guarantee that's respectful of other people.



