
Can anyone tell why systemd developers run fast and loose with what they believe and bully everyone with a stick made out of their ideas?


In this case, I think the upstream maintainer's response -- "Upstream systemd will do X, distros who want to are free to do Y" -- is legitimate. Consider the reverse: If systemd requires a writable /run/lock, then distros who want to be more safe won't really be able to (or will have to implement a much more intrusive patch).

Looking from the outside, it looks more that this is a failure of the Debian systemd package maintainer to follow Debian's rules. (Though since I'm not a part of that community, I recognize that there may be cultural expectations I'm not aware of.)


> "Upstream systemd will do X, distros who want to are free to do Y"

Yes, this is a good response from upstream. I can work with that. But even this response wasn't reflected in the mailing list discussion, or was instantly drowned out.

My question was more general though, questioning the systemd developers' collective behavior (and hence the project's behavior) over time.


The systemd developers have a long history of reinventing the wheel and trying to force it on everyone. We only put up with them because they do some difficult work that nobody wants to do.


Speak for yourself, then. I’ve been using Linux since 2004, and the systemd components finally made system management easy. No more arcane init scripts. Handling of service dependencies. Proper timers. Simple configuration files. Administration knowledge that immediately carries over between all systems equally.

As a user, systemd has improved my productivity tremendously.

This kind of bad-mouthing of developers who work on solutions to complex problems, whose code runs on billions of machines, reflects more on your own fragile ego than on them.


> The systemd developers have a long history of reinventing the wheel and trying to force it on everyone. We only put up with them because they do some difficult work that nobody wants to do.

> As a user, systemd has improved my productivity tremendously.

Both can be true at the same time. Particularly in the beginning, there was a long string of really important things that used to Just Work that were broken by systemd. Things like:

1. Having home directories in automounted NFS. Under sysv, autofs waited until the network was up to start running. Originally under systemd, "the network" was counted as being up when localhost was up.

2. Being able to type "exit" from an ssh session and have the connection close. Under systemd, closing the login shell would kill -9 all processes with that userid, including the sshd process handling the connection -- before that process could close the socket for the connection. Meaning you'd type "exit" in an interactive terminal and it would hang.
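For reference, the session-killing behavior in item 2 is configurable nowadays; a minimal sketch, assuming a stock logind setup:

```ini
# /etc/systemd/logind.conf
[Login]
# "no" leaves user processes (such as the sshd handler for a
# connection) running after the session closes, restoring the
# pre-systemd behavior; "yes" kills the whole session scope.
KillUserProcesses=no
```

After editing, the setting takes effect for new sessions once logind is restarted.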

It's been a while since I encountered any major issues with systemd, but for the first few years there were loads of issues with important things that used to Just Work and then broke and took forever to fix because they didn't happen to affect the systemd maintainers. If you didn't encounter any of these, it's probably because your use cases happened to overlap theirs.

Yes, systemd and journalctl have massively simplified my life. But I think it could have been done with far less disruption.


My favorite systemd bug is when your network black-holes (not disconnected, SYN packets work but nothing else will come back). The entire system will hang.


Obligatory XKCD: https://xkcd.com/438

There's no need to be rude. While I'm not anti-systemd, it didn't change my life tremendously, either.

People tend to bash init scripts, but when they are written well, they both work and port well between systems. At least this is my experience with the fleet I manage.

Dependencies worked pretty well in Parallel-SysV, too, again from my experience. Also, systemd is not faster than Parallel-SysV.

It's not that "I had to learn everything from scratch!" woe either. I'm a kind of developer/sysadmin who never whines and just reads the documentation.

I wrote tons of service files and init scripts during Debian's migration. I was a tech-lead of a Debian derivative at that time (albeit working literally underground), too.

systemd and its developers went through a lot of phases, remade a lot of mistakes despite being warned about them, and took at least a couple of wrong turns, and they were booed for all the right reasons.

The anger they draw onto themselves is not unfounded, yet I don't believe they should be on the receiving end of a flame-war.

From my perspective, systemd developers could benefit tremendously from stepping down from their thrones and looking eye to eye with their users. Being kind towards each other never harms anyone, including you.


Everything is a slight modification of something someone else has already done. However, it does take effort to make things that work well together. Systemd may not have any novel tricks, but it sure does synergize well. Standardization also goes a long way towards simplifying things for a lot of folks. Otherwise everyone constantly reinvents everything, and certain integrations will invariably be broken or not work well. Modularity is great, but it's not free.


You think systemd could be a psyop? Gain influence by paying devs to do the ugly but necessary work, but then sow loads of dissent at the same time...


I’ve long thought a similar thought regarding the browser wars. The project(s) being psyops are left as an exercise to the reader.


That has been their method since the beginning; why would they change from a tactic which works for them?

The central problem with systemd is that they don't want to let you go about your business, they want you to conform to their rule.


The central problem with Linux is that it forces you to conform to everything being a file, including devices and IO. I'll stick to MSDOS, where I can go about my business poking memory myself to control the hardware. /s My point is that contracts between systems can improve our lives. We take the process, file, and protected memory models for granted today. However, I'm sure people hated the migration to protected systems as well, because it "didn't let them go about their business".


Linux is 34 years old, and some of the Unix-isms it borrowed are even older. There is genuine cruft that has downsides. Differing relative priorities between backwards compatibility, maintainability, and the various problems the legacy causes are all reasonable.

Systemd basically arose out of frustration at the legacy issues, so the whole project exists as a modernizing effort. No wonder they consider backwards compatibility a low priority.


Because that's what they've always done, and it continues to work for them?

Systemd doesn't work for me, but it has taken over most Linux distributions, so clearly it's got something people want that I don't understand. That was the case for PulseAudio too.


Did you read the same article?

   * There is an option for the old behavior.
   * It is a security issue and better solutions exist to replace it.
   * FHS isn't maintained.
I think everyone involved would prefer updates to the applications, which would fix the issue. Debian opted, for now, for reliability for its users, which fits their mission statement. On Arch, /run/lock is only writable by the superuser, which improves security. As a user I value reliability and security, and that legacy tools remain usable (sometimes by default, sometimes via a switch).


> It is a security issue

The "security issue" expressed is that someone creates 4 billion lock files. The entire reason an application would have a path to create these lock files is because it's dealing with a shared resource. It's pretty likely that lock files wouldn't be the only route for an application to kill a system. Which is a reason why this "security issue" isn't something anyone has taken seriously.

The reason is much more transparent if you read between the lines. Systemd wants to own the "/run" folder and they don't like the idea of user space applications being able to play in their pool. Notice they don't have the same security concerns for /var/tmp, for example.


> they don't like the idea of user space applications being able to play in their pool

i think that is somewhat reasonable. but then systemd should have its own space, independent of a shared space: /var/systemd/run or /run/systemd/ ?


> then systemd should have its own space, independent of a shared space

This would go contrary to an unstated goal: making everyone else dance to systemd's tune, for their own good.



> On Arch /run/lock is only writeable for the superusers, which improves security.

Does it? That means anyone who needs a lock gets superuser, which seems like overkill. Having a group with write permissions would seem to improve security more?
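The group-based variant suggested here could be sketched as a tmpfiles.d override; the `lock` group name is hypothetical:

```ini
# /etc/tmpfiles.d/legacy-lock.conf
# d = create directory at boot: mode 0775, owner root, group "lock",
# no age-based cleanup ("-"). Members of "lock" can then create lock
# files without superuser rights.
d /run/lock 0775 root lock -
```

Only the tools that genuinely need legacy lock files would be run under that group, instead of handing them root.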


no that isn't what it means at all

a global /run/lock dir is an outdated mechanism not needed anymore

When the standard was written (20 years ago), it standardized a common way programs used to work around not having something like flock. This is also reflected in the specific details of FHS 3.0, which requires lock files to be named `LCK..{device_name}` and to contain the process id in a specific encoding. Now the funny part: flock was added to Linux in ~1996, so even when the standard was written it was already on its way to being outdated, and it was just a matter of time until most programs started using flock.

This brings us to a few ways in which this being an issue makes, IMHO, little sense:

- a lot of use cases for /var/lock have been replaced with flock

- having a globally writable dir shared across users has a really bad history (including security vulnerabilities), so there have been ongoing efforts to create alternatives for anything like that. E.g. /run/user/{uid}, ~/.local/{bin,share,state,etc.}, systemd PrivateTmp=, etc.

- so any program running as a user and not wanting to use flock should place its lock file in `/run/user/{uid}`, like e.g. pipewire, wayland, docker and similar do (specifically $XDG_RUNTIME_DIR, which happens to be `/run/user/{uid}`)

So the only programs affected by it are programs which:

- don't run as root

- don't use flock

- and don't really follow best practices introduced with the XDG standard either

- ignore that it was quite predictable that /var/lock will get limited or outright removed due to long standing efforts to remove global writable dirs everywhere

i.e. software stuck in the last century, or in this case more like two decades ago, in the 2000s.

But that is a common theme with Debian Stable: you have to fight even to just remove something which we have known for 20 years to be a bad design. If it weren't for Debian's reputation, I think the systemd devs might have been more surprised by this being an issue than the Debian maintainers were about some niche tools using outdated mechanisms breaking.


> software stuck in the last century

OK, but suppose you have a piece of software you need to run, that's stuck in the last century, that you can't modify: maybe you lack the technical expertise, or maybe you don't even have access to the source code. Would you rather run it as root, or run it as a user that's a member of a group allowed to write to that directory?

The systemd maintainers (both upstream and Debian package maintainers) have a long history of wanting to ignore any use cases they find inconvenient.


most very old software will depend on many other parts, so you often have to run it in a VM with an old Linux kernel + distro release or similar anyway

and if not, you can always put it in a container in which the `/var/lock` permissions are changed to not be root-only. Which you probably should do for any abandonware anyway.


It is my opinion that three things are true:

1) A piece of software can be complete.

2) It is virtuous when a piece of software is complete. We're freed to go do something else with our time.

3) It's not virtuous to obligate modifications to any software just because one has made changes to the shape of "the bikeshed".


That's true, and compatibility is a grace. Software shouldn't need an update every month; that's a sign of its quality.

In this case, usage of /var/lock was clumsy for a long time. And not cleaning up APIs creates something horrible like Windows. API breaks should be limited to the absolute minimum. The nice part here is that on Linux we can usually adapt and patch code.

On the other hand, Linux (the kernel), glibc/libstdc++, systemd and Wayland need to be API-stable. Everybody dislikes API instability.


No, I didn't read the whole article. I follow debian-devel directly. Watched all of it unravel, step by step. I know the resolution since the day it posted to debian-devel.

This was a general question to begin with.

> There is an option for the old behavior.

The discussion never centered on an option for keeping the old behavior for any legitimate reason. The general tone was "systemd wants it this way, so Debian shall oblige". It was a borderline flame-war between more reasonable people and another party that yelled "we say so!"

> It is a security issue and modern solutions to replace exist.

I'm a Linux newbie. Using Linux for 23 years and managing it professionally for 20+ years. I have yet to see an attack involving the /var/lock folder being world-writable. /dev/shm is a much bigger attack surface in my experience.

Migration to flock(2) is not a bad idea, but acting like Nero and setting mailing lists ablaze is not the way to do this. People can cooperate, yet some people love to rain on others and make their life miserable because they think their demands require immediate obedience.

> FHS isn't maintained.

Isn't maintained, or just not improved fast enough to please the systemd devs? IDK. There are standards and RFCs which underpin a ton of things and which are not updated.

We tend to call them mature, not unmaintained/abandoned.

> On Arch /run/lock is only writable by the superuser. As a user I value reliability and that the legacy tools are usable.

I also value reliability and agree that legacy tools shall continue working. This is why I use Debian primarily, and have for those same 20+ years.


I mean, /var/lock was kind of on the way to being superseded when FHS 3 was written 20 years ago. We have known it is bad design for a similar amount of time.

If FHS hadn't been unmaintained for nearly two decades, I'm pretty sure the non-root /var/lock would most likely have been deprecated over a decade ago (or at least recommended against). We have known for decades that cross-user writable global dirs are a pretty bad idea; if we can't even fix that, I don't see a future for Linux tbh.(1)

Sure, systemd should have given them a heads up, and sure, it makes sense to temporarily revert this change to allow for a transition period. But this change has been on the horizon for over 20 years, and there isn't really any way around it long term.

(1): This might sound a bit ridiculous, but security requirements have been changing. In 2000, trusting most programs you run was fine. Today, not so much; you can't really trust anything you run anymore. And it's just a matter of time until it is negligent (as in legally liable) to trust anything but your core OS components, and even those not without constraints. As much as it sucks, if Linux doesn't adapt, it dies. And it does adapt, but mostly outside of the GPL/FSF space, and I think a bit too slowly on the desktop. I'm pretty worried about that.

> > FHS isn't maintained.

> Isn't maintained or not improved fast enough to please systemd devs? IDK.

more like not maintained at all for 20+ years, in a context where everything around it had major changes to its requirements/needs

They didn't even fix the definition of /var/lock. It says the dir can be used for various lock files, but also specifies a naming convention that must be used, which only works for devices, and only for ones not in a sub-dir structure. It also fails to specify whether you should (or at least are allowed to) clear the dir on reboot, something they do clarify for /tmp. It also says in a footnote that all locks should be world-readable, but that hasn't been true for a long time. There are certain lock grouping folders (also not in the spec) where you don't need or want them to be public, as it only leaks details which an attacker could maybe use in some obscure niche case.

A mature standard is one which gets fixes, improvements and clarifications, including wrt. changes in the environment it's used in. A standard which recognizes when there is some suboptimal design and adds a warning recommending against that suboptimal design, etc. Nothing of the sort happened with this standard.

What we see instead is a standard which not only hasn't gotten any relevant updates for ~20 years, but didn't even fix inconsistencies in itself.

For a standard to become mature it needs to mature, and that is a process of growing up: fixing inconsistencies, clarifying, and deprecating (which doesn't imply removal later on). And this hasn't happened for a long time. Just because something has been used for a long time doesn't mean it's mature.

And if you want to be nitpicky, even Debian doesn't "fully" comply with FHS 3, because there are points in it which just don't make sense anymore, and they haven't been fixed for 20 years.


> In 2000 trusting most programs you run was fine.

Yes. This is why Microsoft didn't decide to base Windows XP on the NT kernel and Windows 95 was nothing more than a (arguably very) pretty coat of paint on top of Windows 3.11.

It's also why multi-user systems with complicated permissions systems that ran processes in isolated virtual address spaces never got built in the decades prior to NT. All those OS researchers and sysadmins saw no reason to distrust the programs other users intended to run.


well the primary author does work for Microsoft

a company that considers "consent" to be a dirty word


Probably because everyone lets them keep getting away with it.

Any time there's systemd criticism there's always a quick rebuttal "But it was too hard writing anything in any other init system before so stop complaining".

So there's enough of a pass being given to systemd and its developers from the start, because it has always been forced upon us.


In this case I assume their "fear" is that unprivileged users can exhaust resources (inodes, filesystem space) in an important tmpfs, breaking the system. The proper backward-compatible solution would probably be something like making /run/lock its own mountpoint, but they fixed it in their system (Fedora), so now it's no longer their problem. Just be thankful their software is portable to such strange niche operating systems as Debian. /s
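That mountpoint idea could be sketched as an fstab entry (the sizes are arbitrary): a small dedicated tmpfs caps how many inodes and bytes an unprivileged user can burn without affecting the rest of /run.

```
# /etc/fstab (sketch): give /run/lock its own tmpfs so lock files
# can't exhaust the main /run filesystem.
tmpfs  /run/lock  tmpfs  nosuid,nodev,noexec,size=5M,nr_inodes=1024,mode=1777  0  0
```

With nr_inodes=1024, the "4 billion lock files" attack tops out at 1024 files, all confined to this mount.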


The funny part is that it used to be a separate filesystem, before Luca decided to kill it. https://lwn.net/Articles/1041948/


First, before venting, I'll say this: thanks to stuff like hypervisors, VMs, non-systemd distros, minimal immutable Linux distros like Talos (made to run Kubernetes with a minimum number of executables) and OCI containers where, by definition, PID 1 is not systemd, there's thankfully a real way out of systemd, even on a Linux stack.

And I think more people should look into being, once again, 100% systemd-free.

> Can anyone tell why systemd developers run fast and loose with what they believe and bully everyone with a stick made out of their ideas?

Because the goal is to take control of Linux. That's why systemd is PID1. That's why Poettering works for Microsoft.

The real question is: why did that ultra-convoluted xz backdoor attempt only work on Linux systems that did have systemd? People will try to wag the dog, saying "but it's because this and that made it so that xz was loaded by OpenSSH; it's got nothing to do with systemd". It's got everything to do with systemd.

And the other question is: how many backdoors are operational, today, on systems that have systemd?

Systemd is Microsoft-level bloat, running as PID 1, spreading its tentacles everywhere in Linux distros, definitely on purpose.

Poettering is moreover an insufferable bully, as can be seen once again.

From TFA:

> So what do you recommend how to go on from here? Change Debian policy (as asked in #1111839), revert the change in systemd, find a Debian wide solution or let every package maintainer implement their own solution?

I suggest Debian just drops systemd once and for all. Debian can still be made systemd-free but it's a hassle. Just make Debian systemd free once again.

Meanwhile you'll find me running systemd-less distros on VMs and running containers giving the PID 1 finger to systemd.

I can't wait to switch my Proxmox to FreeBSD's bhyve hypervisor (need to find the time to do it).

But most of all: I cannot wait for the day a systemd-less hypervisor Linux like Proxmox comes out.

It's coming and people who write stuff like: "Don't use Docker, use systemd this and systemd that" are misguided.

systemd is to me the antithesis of what Linux stands for.

I hope Debian gets pissed enough at some point to fully drop systemd.

P.S: one of my machines runs this: https://www.devuan.org/ and honestly it's totally fine. So yup: power to all those running systemd-less distros, BSDs, etc.


[flagged]


Oh, so can we hit people on the nose just because what we say is right?

That's a new one.

What if they are also right, and we start fighting?

Maybe people can discuss instead of bullying each other?

Being able to code doesn't give license to being rude.



