It's quite unfortunate that this article mixes up what's necessary for podman quadlets with coreOS concepts.
With quadlets, the only thing required is to drop a `.container` file in the right place and you end up with a container properly supervised by `systemd`. And this of course also supports per-user rootless containers as described in [1].
I agree, and I think the author was unfortunately using coreOS because it's uncommon for cloud providers to have coreOS images nowadays, and therefore a good opportunity for him to slip in a referral code for VULTR.
Is coreOS even maintained any more? I wouldn't expect it to be very secure if the most recent VM images were built in ~2020.
Would love another writeup just using Ubuntu or some other bog-standard Linux distro.
> With quadlets, the only thing required is to drop a `.container` file in the right place and you end up with a container properly supervised by `systemd`.
Is it? He defines a .network file in that butane config; without it, this won't work. Not really obvious. I'm sure this has a use case and it's nice to have, but personally I'm not convinced. You can switch on user namespaces in the docker daemon or even run docker itself rootless - I guess if you are in Redhat land and use podman anyway it's an alternative, but for instance: where is this thing logging to? journalctl --user? Can I use a log shipper like loki with this? Is there something like docker compose config that shows the fully rendered configuration? I personally don't see the point and it feels overly complicated.
.network is only required if you need a network, just like you define networks in docker compose for some containers to have one shared private network.
yeah, spend some time on the docs for this and it's pretty straightforward - the article and the repo kind of omit this, but it's also for a different use case. Was just irritated when I wrote that comment. It's really an OCI-container-to-systemd shim system that uses podman.
I started using quadlets for new system designs a month ago and I feel like I'm neck deep in it now.
My conclusion is that there is absolutely no reason to stop using docker-compose if your developers are comfortable running one command, on one file, in one git root.
Quadlets are basically docker compose, in systemd. They've finally done it, systemd has it all and now it even has docker compose. ;)
That's really all it is in practice. I'm going to continue using it because I'm a RHEL kinda guy, but don't make it up to be magic.
Yes but when is someone going to add logic on top of this to make it a full blown distributed container orchestrator?
Could it be done with systemd and dbus? Can dbus be distributed among several systems like mmc on Windows? I have no idea, just some questions that popped into my head lately.
This might be nice. I've seen a couple of teams that, instead of using a local `kind` or other local cluster to test their containers, make a docker compose file and then do a bunch of work turning that into kube manifests, then maintain them separately.
I'm in a similar situation and I've been looking at quadlets a lot.
The approach is quite different from docker-compose and not really a substitute. It makes your individual containers into systemd services in an easier way than creating a unit file that calls `docker run`. But you still have to manually define networks in .network files, and configure all your dependencies in unit file syntax.
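For example, a minimal sketch of what those pieces look like (paths, names, and image are illustrative, not from the article):

    # ~/.config/containers/systemd/app.network
    [Network]

    # ~/.config/containers/systemd/app.container
    [Container]
    Image=docker.io/library/nginx:alpine
    Network=app.network
    PublishPort=8080:80

    [Install]
    WantedBy=default.target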
If you're very familiar with writing systemd unit files, or really really want to use systemd to manage all container-related objects individually instead of having your container daemon do most of the work with a single compose file per group of related objects, you should consider switching. But in my experience there's little to be gained, a LOT to be lost, and a lot of work to do the switch.
One of podman's selling points is that it doesn't need a daemon, and it runs without root privileges.
I'm not familiar enough with the lower level details to know how it works, but it certainly feels less like you are making "a ton of changes to the system" compared to docker
This. Run podman without a daemon and with host networking and the only "weird" configuration you'll see is the overlay fs. Everything else is quite often an unnecessary overhead.
What's worse is, it screws up the firewall rules.
Podman avoids that, so quadlets should be fine?
Podman is supposed to be a drop-in replacement for docker, but my last try (4 months ago) at having podman build our development docker containers failed, so I think Podman is still far from being a docker replacement.
> Podman is supposed to be a drop-in replacement for docker, but my last try (4 months ago) at having podman build our development docker containers failed, so I think Podman is still far from being a docker replacement.
I'd be curious what failed to build under podman. I have been using podman as a replacement for docker for the last 3 years and haven't found any blocker. Sometimes you can't reuse a docker-compose file shared by a third party project straight away without adaptation, but if you know the differences between docker and podman you can build and run anything that also runs on docker.
1. Most of the issues I had that forced me to adapt a docker-compose file were because I was using podman rootless, and most people build docker-compose files with docker running as root in mind, mostly to have access to privileged ports (the usual workarounds are sketched after this list). I guess running podman under root would have solved this, but one of the reasons I switched to podman in the beginning was the rootless capability. In a way this wasn't much different than modifying a deployment to work with docker in rootless mode.
2. part of the appeal of using podman is its compatibility with kubernetes yaml file so you tend to quickly switch away from docker-compose anyway. Also for self hosting, the systemd approach was more elegant, even before the quadlets support.
3. One would argue that docker-compose != docker/moby engine.
4. docker-compose has introduced breaking changes in its history which meant adapting your compose file or add flags at runtime such as `docker-compose disable-v2`
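Regarding point 1, the usual workarounds look something like this (a sketch; the image is illustrative):

    # Option A: publish to an unprivileged host port instead of 80
    podman run -d -p 8080:80 docker.io/library/nginx:alpine

    # Option B: lower the privileged-port floor system-wide (affects everything!)
    sudo sysctl net.ipv4.ip_unprivileged_port_start=80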
> 4. docker-compose has introduced breaking changes in its history
...until the point when they abandoned any notion of versioning. You can't have a breaking change if you don't promise stable behavior, see? /s
Not many people noticed that the top-level

    version: "3.9"

has no effect anymore. "It is only informative", the current spec says. Your old docker-compose.yaml files spew errors as soon as they go out of sync with the master branch of the spec repo (classy!).
Tbh, with the changes in the last few versions, you can't reuse some compose files even between different versions of docker, so... they're actually pretty comparable there.
In recent versions of Ubuntu, Debian, and Arch /usr/bin/iptables is an iptables-compatible interface to nftables. That's what docker is using on those systems, and it works fine. You can manage those rules with /usr/bin/nft.
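e.g., on such a system docker's NAT rules show up in the nftables ruleset; a sketch of one way to look at them:

    nft list table ip nat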
That's not the case (anymore). I run a NixOS based router with nftables (no iptables installed at all), and podman works just fine. It simply adds its NAT rules to nftables (unless you tell it not to).
As far as I know, this was introduced with the new networking stack (netavark).
Here's a redacted version: https://gist.github.com/dbrgn/137da9e9ad342d536d1e452fba3e9d... Maybe it's useful as reference. It includes multiple network interfaces, a firewall, VLANs, DNS and ad blocking (plus two network services). (This version of the config does not yet make use of podman, I'm still in the process of setting everything up.)
I'm also using nix flakes, to keep the setup reproducible.
If you want to get started, I can recommend the following:
1. Install nixos. That will only take a few minutes, and you end up with a system in which you have a "/etc/nixos/configuration.nix" file. Now you can edit the config file, run "nixos-rebuild switch", and the changes have been applied. Every change results in a new entry in the bootloader menu, so you can always rollback.
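If you go the podman route, a minimal sketch of the relevant configuration.nix fragment might look like this (options as in current nixpkgs; treat it as a starting point, not a complete config):

    { config, pkgs, ... }:
    {
      # Enable podman; dockerCompat additionally provides a `docker` alias.
      virtualisation.podman = {
        enable = true;
        dockerCompat = true;
      };
    }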
I suggest you start by just getting plain podman running the docker.io/hello-world container to reduce the complexity and simplify debugging if anything goes wrong. It's been about a decade since I last touched windows, but if wsl2 is 1:1 with Linux, the official podman guide should be straightforward.
It's always easier to start with the bare minimum and build from there, and you will get a better understanding of the tools you're using.
I don't know what `vscode devcontainers` is but to run podman on wsl2 I simply installed a fedora wsl2 image (by importing the fedora container image if my memory is correct).
> Is there a reason for all that noise and complexity?
To make it simple at the point of use. If developers had to configure firewalls and bind mounts for Docker to work, it never would have taken off as much as it did.
A safe heuristic is that whenever you introduce an abstraction to any tech stack, you can assume that it makes shortcuts that you wouldn't have to if you implemented the underlying parts yourself (w/ zero guarantee that it makes those shortcuts well). The latter meaning: short-term harder, long-term easier. Invert for any abstraction.
Related to Docker, I finally bit down and tried to do a simple deployment stack myself using systemd and OS-level dependencies instead of containers. I'll never go back. The simplicity of the implementation (and maintenance of it) made Docker irrelevant—and something I look at as a liability—for me. There's something remarkably zen about being able to SSH into a box, patch any dependency issues, and whistle on down the road.
The syntax and examples in the article assume usage of SystemD as the service manager. Does it work on distros without SystemD too? Docker-compose does.
I also do not understand separation of services to different files. Is it supposed to be more convenient? With docker-compose, the whole application stack is described in one file, before your eyes, within single yaml hierarchy. With quadlets, it's not.
Lastly, I do not understand the author's emphasis on the AutoUpdate option. Is software supposed to update without administrator supervision? I guess not. And what are the rules for matching new versions: does it update to the next semver minor or patch version, does it skip release candidates, etc.?
Updating your custom registry with new upstream dep versions after testing in CI with all the services you care about is fine. But the OP seems to just blindly pull the newest wordpress images from upstream, or am I missing something? How is this meant to work reliably?
I guess given wordpress's security record, your site breaking from time to time is preferable to your site being broken into from time to time.
I think you're mixing up some things. If you run the image "docker.io/wordpress:6.3.1", then the container will be updated when the image with that tag (6.3.1) is being re-built (which is a best practice, because that's the only way how you get security updates for the libraries in the base image). The tag is just a pointer to the latest image hash.
Many Docker images also provide "semantic version tags". Wordpress does too, so if you run the image "docker.io/wordpress:6.3", you will get the latest 6.3.x version.
It's up to you (and the image publisher) to decide when to auto-update, and when manual intervention is necessary.
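In quadlet terms, opting in is one line in the .container file; a sketch (the tag is illustrative):

    [Container]
    Image=docker.io/wordpress:6.3
    AutoUpdate=registry

The periodic check itself is done by podman's separate podman-auto-update.service/.timer units, which you enable yourself.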
Of course this requires trusting the publisher of that image. But even if you build your own images, you still trust the base image. It's turtles all the way down.
But it's basically similar to running an "update" with your distro's package manager automatically on the fly. (Okay, it's better due to having a smaller surface and somewhat better per-package update schema controls.)
And some people argue that you must not do so, as it might unexpectedly and subtly break your system.
And other say you must because (especially security) updates must be done.
And the truth is probably in between. (Like auto updates with self test and rollback, which in complex systems isn't trivial at all.)
Anyway, especially for local user-space tooling on my computer, I will 100% enable it. I mean, if it stops working I can fix it, but if not (the normal case) it's low maintenance. Perfect.
> But it's basically similar to running a "update" of you distros package manager automatically on the fly.
Which is a thing now. My openSUSE MicroOS/Aeon machines default to running transactional-update every day, and updates take effect on the next reboot. Given that MicroOS is allegedly to SUSE as CoreOS is to Red Hat, I suspect the latter has similar defaults.
> Running containerized workloads in systemd is a simple yet powerful means for reliable and rock-solid deployments.
They say its reliable and rock-solid. Isn't that enough for you? /s
---
Honestly, the number of companies who don't understand the problems, forgot history, and think we're innovating into new territory because of hyped-up branding is utterly baffling in the whole container space.
I'm not saying they're all bad, and better common tools are a good thing. But I see so many companies operating at [required complexity level] + 1 in the hope of no longer being bothered by simpler problems.
If you run only the software you wrote, then yes, it is a useful feature. Otherwise I won't trust automated pulls of whatever other devs put into their public images, nor will I trust them to follow image versioning properly and not introduce some addition in a minor version that would automatically expose my files to the internet if not configured explicitly. That is more trust than I want to put in auto-updates.
As for the SystemD dependency: in this case the quadlets can not even be compared to docker-compose, nor be a replacement for it. Docker-compose was always independent of the init system, whereas quadlets are strictly tied to SystemD-based distros. E.g. users of Alpine or Gentoo won't be able to replace their compose stacks with quadlets.
Quadlet is specific to systemd. It's actually just a systemd generator that looks for related files in /etc/containers/systemd or the user's ~/.config/containers/systemd to generate units from that can then be started as services. When you create or edit a file here, you then do 'systemctl daemon-reload' which re-invokes all systemd generators, quadlet included.
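A minimal sketch of that flow (the image is illustrative):

    # /etc/containers/systemd/hello.container
    [Container]
    Image=docker.io/library/hello-world

    [Install]
    WantedBy=multi-user.target

    # pick up the new file and start the generated unit:
    systemctl daemon-reload
    systemctl start hello.service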
They're both declarative manifests describing how to run one or more images on your system.
You use .container for a single container, .kube for all-in-one pods, .network for networks, and .volume for volumes. It has all the stuff it's just broken down in a more (imho) sysadmin friendly way where all the pieces are independent and can be independently deployed.
For anything other than a hello world type project a compose file will fall over kinda quick. I would much prefer to (ahem) compose smaller things together and systemd is great for that.
That might work for whatever you are doing, but the truth is that root-ful containers are not appropriate for a lot of applications, and docker as a layer to your container runtime is rough sometimes. I don't think docker wants to continue to develop this anyway - they have had enough problems trying to be profitable so instead it is time to focus on docker desktop and charging for docker hub image hosting.
I feel like we are kind of in this weird limbo where I know I need to move to a podman focused stack, but the tooling just isn't there yet. I guess that is what makes Quadlets interesting, but idk if a single tool will really emerge. There is also podman-compose floating around. I still feel like I should hold off on some stuff until a winner emerges. So at home I'm still on docker with compose files. Although I will be moving to kubernetes for... reasons.
There's already a clear winner, docker compose. Podman is barely used in prod and podman compose is barely used in dev. Docker also has buildkit and rootless configs so I'm not sure they are just letting it decay.
It will depend on the complexity of the stack, the number of environments you are deploying into, the number of devs on the team etc. It can work, and if it is working for you in this manner don't change what isn't broken, but in my experience for sophisticated systems with many services, envs, and developers it becomes unmanageable pretty quickly.
In what way? Docker-compose files are composable. I can specify several compose files that layer functionality or have different behavior and tie it together with make. You can also set defaults and override with environment variables using bash syntax.
For my home server, I have a flat 2507-line docker-compose file that automatically configures and boots all of my 85 containers. I still have some complexity: .env files in /opt/<container>/, a systemd unit that automatically runs

    docker-compose -f /<dir>/docker-compose.yaml up -d

on boot, and it's only a little irritating to have to use absolute paths for everything instead of relative. But, after having to update all of my services manually for 3 years, I will never be able to go back.
Yeah, she's a big girl, but having one flat file and one systemd process is infinitely better than juggling 85 of each. I have the systemd process start after docker.service, and most of my containers have a "depends_on" argument so they don't all try to boot at once. All of the containers also push logging to a Splunk instance, which adds 9 lines per container, which increases the file by 765 lines.
I’ve got a similar probably-too-large docker-compose on my home server.
If those 9 lines are identical you can probably simplify quite a bit with extension fields and yaml anchors. [0]
You would put

    logging: *default-logging

as a single line under each container, and then define it elsewhere. The example on the docs page is for logging, but you can also simplify other fields too, like "depends_on".
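A sketch of the whole pattern (the driver and options are illustrative, not anyone's actual config):

    x-logging: &default-logging
      driver: splunk
      options:
        splunk-url: "https://splunk.example:8088"
        splunk-token: "${SPLUNK_TOKEN}"

    services:
      somservice:
        image: example/image
        logging: *default-logging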
Heh, Splunk is way overkill for what I need, but they have a free 10GB Dev license that you have to renew every 6 months, and building a custom COVID dashboard during the first few months of 2020 kept me sane. Here's my home/lab "prod":
Have you used splunk's free tier? It's easy to set up, 500MB/day ingest free, and it's pretty easy to use (easier than grafana imho.) If they're using it professionally then why not at home?
For fun, pretty much. 20 of them are Matrix related (synapse/dendrite, bridges, and bots), 16 for media services (Plex, audiobookshelf, etc), stuff for management/monitoring, stuff for Fediverse, etc.
I've been kinda partial to helm charts (on a k8s cluster). Standing up services is not awful. Have you used helm or similar? What do you think of these kind of tools?
I don't manage one large compose file, but helm charts are where my home stuff is headed. Mostly for ingress controller functionality - it's the reverse proxy configuration that will get you.
Quadlets are very much a welcomed integration, but last time I tried to create a user Quadlet in .config/containers/systemd/ with a linuxserver.io image, I ended up with all sorts of files owned by strange UIDs & GIDs.
So I had to add --userns keep-id to my container unit, which caused all sorts of problems, apparently because of podman.
So you always end up with the kind of investigation & fiddling that shouldn't be necessary after 10 years of docker & containers.
Thanks, it looks great! And yes, I believe it is the policy of linuxserver.io not to test or officially support podman.
I have been trusting the plan, but I notice that after 10 years of containers being the industry standard etc., we have to search for podman-friendly images to enjoy integration with the common Linux service manager...
Now if container-based Linux distributions are the future, I'm starting to wonder if we aren't soon going to see RedHat & co. packaging docker images in RPMs to guarantee things work together & people don't badly mess up the security...
Fun fact, OpenSUSE actually already does that for some common server software (LDAP, dovecot, etc); they're quadlet/systemd unit files packaged up as RPMs though, I don't think they actually include the container image.
Overall this looks really good, but it also obfuscates how & where this really integrates with systemd.
Maybe everything is this easy & good. Maybe this is an /etc/systemd/system/WordPress.quadlet file, part & parcel to everything else in the systemd-verse. But it doesn't say clearly whether it is or isn't. It's an acontextual example.
I think it's powerful tech either way, but so much of the explanation is missing here. It focuses on the strengths, on what is consistent, but isn't discussing the overall picture of how things slot together.
In many ways I think this is the most interesting frontier for systemd. It's supposedly not a monolith, supposedly modular, but so far that has largely meant that components are modular, optional. You don't need to run the perfectly fine systemd-resolved, for example. But what k8s has done is make handling resources modular, and that feels like the broad idea here. But it seems dubious that systemd really has that extensibility built in; it seems likely that podman quadlet is a secondary, entirely unrelated controller, ape-ing what systemd does without integrating at all. It seems likely that's not a podman-quadlet fault: it's likely a broad systemd inflexibility.
Could be wrong here. But the article seems to offer no support that there is any integration, no support that this systemd-alike integrates or extends at all. Quadlets seem to be a parallel and similar-looking tech, with deep parallels, but those parallels, from what I read here, are handcrafted. It's not quadlet that fails here to be great, but systemd not offering actual deep integration options.
Quadlet uses SystemD Generators [1] feature. Generators are a way to convert non-systemd-native configuration extensions (like containers, volumes and networks in case of quadlets) into regular systemd-native configuration like service unit-files. The quadlet generator converts the podman extension for systemd into regular service file that you can examine yourself. Non-root podman container services are in /run/user/<uid>/systemd/generator. Here is a blog post that describes the design in detail (slightly dated): [2]
After a `systemctl daemon-reload` an `oxidized` service springs into being.
[root@xoanon ~]# systemctl status oxidized
● oxidized.service - Oxidized
Loaded: loaded (/etc/containers/systemd/oxidized.container; generated)
Active: active (running) since Sat 2023-09-23 09:53:11 UTC; 2 days ago
Process: 221712 ExecStopPost=/usr/bin/rm -f /run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221711 ExecStopPost=/usr/bin/podman rm -f -i --cidfile=/run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221713 ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid (code=exited, status=0/SUCCESS)
Main PID: 221799 (conmon)
Tasks: 8 (limit: 98641)
Memory: 169.0M
CGroup: /system.slice/oxidized.service
├─libpod-payload-b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
│ ├─221801 /run/podman-init -- oxidized
│ └─221803 puma 3.11.4 (tcp://127.0.0.1:8888) [/]
└─runtime
└─221799 /usr/bin/conmon --api-version 1 -c b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -u b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata -p /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/pidfile -n systemd-oxidized --exit-dir /run/libpod/exits --full-attach -l passthrough --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/oci-log --conmon-pidfile /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
Sep 26 00:03:05 xoanon oxidized[221803]: I, [2023-09-26T00:03:05.807594 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 01:03:15 xoanon oxidized[221803]: I, [2023-09-26T01:03:15.083603 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 02:03:24 xoanon oxidized[221803]: I, [2023-09-26T02:03:24.414821 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 03:03:33 xoanon oxidized[221803]: I, [2023-09-26T03:03:33.677828 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 04:03:42 xoanon oxidized[221803]: I, [2023-09-26T04:03:42.983589 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 05:03:52 xoanon oxidized[221803]: I, [2023-09-26T05:03:52.297830 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 06:04:01 xoanon oxidized[221803]: I, [2023-09-26T06:04:01.637348 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 07:04:10 xoanon oxidized[221803]: I, [2023-09-26T07:04:10.935352 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 08:04:20 xoanon oxidized[221803]: I, [2023-09-26T08:04:20.199651 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 09:04:29 xoanon oxidized[221803]: I, [2023-09-26T09:04:29.553178 #2] INFO -- : Configuration updated for /192.168.89.5
During the daemon-reload, systemd invoked /usr/lib/systemd/system-generators/podman-system-generator, which read the files in /etc/containers/systemd and synthesized a systemd service for each of them, which it dropped into /run/systemd/generator, which is one of the directories from which systemd loads unit files.
Far from being a parallel service control mechanism (à la Docker), this is proper separation of concerns: the service is a first-class systemd service like any other; the payload of the service is the podman command that runs the container. We can introspect this a bit to examine the systemd unit that was generated:
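e.g., one way to do that:

    systemctl cat oxidized.service
    # prints the generated unit from /run/systemd/generator/oxidized.service,
    # including the synthesized ExecStart=/usr/bin/podman run ... line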
> you’ll see a WantedBy line. This is a great place to set up container dependencies. In this example, the container that runs caddy (a web server) can’t start until Wordpress is up and running.
Either this must be some systemd weirdness that I thankfully haven't had to deal with until now, or I'm misunderstanding something.
Did I understand correctly you don't specify which services you need but rather which ones depend on your service? So if your service doesn't start you'll need to check the configuration files of all other services to figure out which dependency is preventing it from starting?
This is the mechanism by which one unit can ask to be added to the Wants= of another when it is installed.
i.e., when you run 'systemctl enable whatever.service', it will be symlinked into '/etc/systemd/system/multi-user.target.wants'. And 'systemctl show multi-user.target' will show 'whatever.service' in its Wants property.
During bootup of a headless system, the 'default' target is usually multi-user.target, so what we've done here is ensure that whatever.service will be started before the machine finishes booting.
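Concretely, a sketch (the unit name is illustrative):

    systemctl enable whatever.service
    # Created symlink /etc/systemd/system/multi-user.target.wants/whatever.service
    #   -> /usr/lib/systemd/system/whatever.service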
I think it's a bit of a Podman quirk. From what I understand, podman used (and is probably still able) to generate systemd .service files. These files do have Requires and After directives, to state which other services they expect. However, Podman has since moved to using the .container file for systemd "units", which was meant to represent a transient, disposable instance but in practice reproduces a lot of what .service specifies. Probably because people didn't want to do the work twice, they tacked on an [Install] section to .container to make it behave like a service, which currently accepts only the keywords Alias, RequiredBy and WantedBy (I've not seen any documentation on how these differ) according to https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
These are standard systemd service file syntax and standard systemd directives. Quadlet forwards everything but the [Container] section directly to the generated service file.
Systemd scans all of the unit files initially, and topologically sorts them to find the best start ordering for all services. Unit files are rescanned only when you run systemctl daemon-reload.
One of its main design goals is a fast system startup; to do that, it needs to know the dependency ordering of all services.
It's not how you would normally specify it, but it is an option: the normal usage for it in systemd is enabling and disabling which services start on boot: an enabled service usually gets set up as a dependency of the multi-user target which is what systemd starts on boot. (And you can get a list of dependencies from systemd if you want to debug anything: the WantedBy stuff just turns into some symbolic links in the filesystem if you want to inspect things manually)
I don't know why it's being used in that way for these containers. It'd be easier to just add a Wants line on Caddy.
You can define Before, After, or WantedBy to define the dependency order. Systemd then makes sure to start services in the right order; it starts them by default, but you can also configure services not to start if nothing depends on them.
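For the article's example, the more conventional direction would be a sketch like this in caddy's own unit, rather than a WantedBy on wordpress:

    # in caddy's .container file
    [Unit]
    Wants=wordpress.service
    After=wordpress.service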
I have been having a blast using quadlets on my tiny home server. Feels like I'm learning how to use systemd which is a nice bonus. My workflow consists of just connecting vscode over ssh and editing files as needed, which works well since everything is owned by my user.
I'm happy with docker-compose, but it seems RedHat really doesn't want me to use such simple and easy to use thing that is not as tightly coupled with other RH-specific parts as the replacement they promote.
> My container deployments are often done at instance boot time and I don’t make too many changes afterwards. I found myself using docker-compose for the initial deployment and then I didn’t really use it again.
I used a very similar approach in the (now EOL'ed, gonna be replaced by full K8s) infra at $DAYJOB. My main reason to stick with docker-compose is because developers are familiar with it and can then easily patch/modify the thing themselves. Replacing with something systemd will add a dependency over the people that know systemd (which are not usually application developers in your average HTTP API shop)
I would add that this is the use case of a server running some services. But during development you may want to restart and/or rebuild a container multiple times, maybe this is still better done with docker-compose.
> The original Quadlet repository describes Quadlet this way:
> What do you get if you squash a Kubernetes kubelet?
> A quadlet
So it's based on reinterpreting the root "kuber-", which ultimately means something to do with "turn", as "cube", and then metaphorically reducing a cube to a square.
They squashed kubelet, not K8s. By that, they probably mean that they got rid of the kubelet and delegated its functionality to systemd (which is already there besides the kubelet if the node uses systemd based distro). Note that quadlet is also capable of creating services based on K8s manifest.
It seems like the coolest part of this isn't the systemd part necessarily (unless you're a systemd fan). It's that the combination of Quadlets + coreos gives:
- Node starts up with a container daemonized
- Automated updates of the registry image
- A nice single arg to pass to your cloud provider CLI userdata that launches the OS + container (vultr-cli in this case)
I guess you can do something similar with any linux userdata but the script won't be as clean. Has anyone built something lite to launch docker containers on boot in ubuntu (this is particularly helpful for cloud provider CLIs) without writing the whole script manually? Something nicer than `apt update && apt install -y docker && docker run --rm --restart-always ...` that includes the registry autoupdate.
Maybe I'm not seeing something here. You have docker compose run by systemd and your compose file has restart always in it. What problem is getting solved by quadlets, or is it just a ergonomics thing?
I never understood the appeal of docker-compose (you can accomplish roughly the same thing by having a Shell script that calls docker client, but skipping the Python clown fiesta with dependencies, environments etc.)
Quadlets seems not exactly a replacement for the function docker-compose was supposed to perform though, or am I wrong? It seems like its target audience is administrators who are supposed to run containers as systemd services (questionable choice, but probably there are people who want that)...
How would you suggest a bash script handle configuring all the different images, their ports, and ensuring services are spun up in the correct dependency graph (parallel where possible), and are exposed to each other as a reliable host name over a subnet without polluting the host network?
And then how is that bash script extendable so it's not a custom script every time?
docker-compose is slow and not quite parallel (because it's written in Python and uses requests internally). So your yearning for speed optimization is kind of misplaced. If you are using docker-compose, you probably don't care about speed anyways. And, the way I understand it's typically used is to create some slice of the system a developer is working on, so unless we are talking about many minutes difference, the speed gains are inconsequential. Also, because you are using it to deploy just the relevant part of the system, the setup won't be complicated -- it's counterproductive to do that in a completely local system and especially because you want to work with as few components as possible during such deployments.
So, how would I go about that in Shell? I don't see a problem. Can you point to a specific problem? All these settings in your example easily translate into docker commands.
> docker-compose is slow and not quite parallel (because it's written in Python and uses requests internally).
This hasn't been true for quite some time. Docker compose v2, written in Go, was released in 2020; v1 finally officially stopped receiving security updates this summer.
> I never understood the appeal of docker-compose (you can accomplish roughly the same thing by having a Shell script that calls docker client, but skipping the Python clown fiesta with dependencies, environments etc.)
Basically the appeal is "one command > everything running" when you have multiple services working together, which sure, you could do with imperative shellscripting, but for people who don't spend their daily time writing shellscripts, something declarative like YAML is usually easier to get started with, especially when you only revisit it sometimes.
Python is something people generally either have installed by default in their OS, or are already using because of some other system. I wonder how many developers don't have Python at all on their systems already?
> Basically the appeal is "one command > everything running"
But I can accomplish this with a Shell script... And no need to deal with Python, its broken dependency management, poor piping / I/O in general, bugs in docker-compose itself... What do I win by having to suffer all these problems?
> who don't spend their daily time writing shellscripts,
Do you write docker-compose scripts daily? Seriously? Why? My impression was that you write that once and edit very infrequently (like maybe once a month or less). So, in terms of time investment it doesn't seem to make much of a difference. Also, I see no value in imperative vs declarative approaches here. It's actually hard to understand what is going to happen when you use declarative style because you need to rely on and have a very deep knowledge of the imperative aspect of the system interpreting your declarations to be confident of the end result.
> Python people generally have either installed by default in their OS,
Being one of "Python people" I have Python 3.7 thru 3.12 built from respective heads of cPython project installed on my work laptop. I would hate to have to add more to support a tool with dubious (or as is my case extraneous) functionality.
Also, being one of those "Python people" who deals with infra, I had to import docker-compose code into my code and deal with it as a dependency, both with its CLI and its modules. And... it's not good code. Well, like the vast majority of Python, it's a piece of garbage. A particular feature of docker-compose that stands out is that it was written by "Go people" with poor command of Python, so it's a "Go written in Python" kind of program.
Also, people who write docker-compose don't care about how it interacts with other packages, and this shows in how they define their dependencies (very selective versions of ubiquitous libraries, e.g. requests) that will almost certainly not play well with other libraries you'd use in this context (e.g. boto3). I had plenty of headaches trying to use this tool and have so far sworn never to use it in my own infra projects because of its dependency issues. If I ever use it (as in, to deal with someone else's problems) I install it in its own environment. Which is yet another problem with its use, because then you have to switch environments just to call it, and then you forget to switch back and things start behaving weirdly.
Why? What do I stand to gain from using this? Marginal speed improvement in some cases and a lot of headache trying to debug it in other cases?
docker-compose offers an alternative. It doesn't offer any genuinely new functionality. It may work for you if you don't know how to use the other alternative, or the other alternative is inconvenient for you for some other reason, but if it's not, it's just an extra tool that you don't need.
When putting multiple containers together docker-compose will handle setting up the networking and hostname lookup so that the services can communicate with static names and ports.
But I can do that with Shell too. Convenience may depend on how well you know either tool, but I claim that if Python is unnecessary for this task (and you cannot escape having a Shell), then why bother with Python? The gain, if any, seems not worth the trouble.
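e.g., a sketch of the shell equivalent (names are illustrative):

    docker network create appnet
    docker run -d --name db  --network appnet postgres:16
    docker run -d --name web --network appnet -p 8080:80 example/web
    # on a user-defined network, "web" resolves the database simply as "db"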
nice to see more podman/systemd ties, but with --user I find it even harder to keep tabs on what containers are running, especially as you check on other users; remembering the socket mount is quite unpleasant. I'm really a fan of tilt and just using the k8s yaml I'll probably need to deploy anyway. But I don't see why you couldn't make that reconciliation loop drive podman instead. but then docker bought tilt so...
because ini files have no, or very limited, support for nesting
there are also some issues with keys all being strings by default, making things like linting harder and allowing more non-standard ways to get something done, e.g. in json true is true; in ini files any of true, 1, yes, y and others might be true depending on the application (but then yaml's `no` problem is worse)
also there is no clear single standard for ini files
though this is where toml comes from: it took the general layout ideas behind ini files but gave them a strict standard, strict string vs. bool vs. float types, a bit more nesting capability, and fixed some string escaping issues, etc
I'm not sure this is the criticism you think it is. Wow, so you basically have to add quotes to get strings in some ambiguous situations?
Yeah sure you could probably improve YAML by getting rid of these weird pitfalls, but that is a minor improvement. The alternative isn't something like TOML, because YAML is optimized for hierarchical configuration. It's every vendor implementing a different syntax such as Hashicorp with their HCL [0].
If that site lists all the complaints that nitpickers managed to put together, it sounds like YAML is virtually perfect.
Also, to underline how silly and futile this nitpicking is, some YAML parsers already explicitly address silly things like the Norway problem.
This has literally never been a problem for me despite writing considerable amounts of YAML.
Frankly, this is a phantom benefit: the first thing people do in brackety-languages is define an indent standard. And it's not like a brackety language magically saves you from mis-nesting things if a bracket ends up misplaced.
I don't know if it's yaml or jinja, but variations of https://mastodon.communick.com/@raphael/111059057995356737 keeps me wishing that we dropped all these formats and just adopted some type of lisp that could be embedded in all languages.
So is JSON, and all mainstream config formats actually.
However, that stopped being a problem with JSON schema, which, despite the name, works on yaml too.
Blink. I beg your pardon, JSON is strongly typed per RFC 8259 standard:
> JSON can represent four primitive types (strings, numbers, booleans, and null) and two structured types (objects and arrays).
The type system does not align well with any other type system out there (float/int ambiguity, no timestamps, etc.) but it's still better than any coercion.
I don't understand how people go through the effort of writing an article about containers but start out with a basic incorrect statement. They don't disconnect from the OS, and in fact are dependent on the OS.
Say what you want about docker compose, but when I see the amount of scaffolding necessary for this, with so many catch words like butane or ignition, I'm happy with my good ol' docker compose file where everything is neatly organized. In one glance I can see what is deployed, what depends on what, and what net / volume is used.
I was confused by this article in the beginning, it does a pretty bad job at drawing a distinction between the pure quadlet example at the start and the example of using CoreOS to build and launch a VM that starts containers.
The basic usage of podman quadlets is putting an `app.container` in `/etc/containers/systemd/` containing something like the first snippet and then starting the unit. For someone familiar with systemd, this seems very very nice to work with.
I'm still not clear whether quadlets are a feature of Podman or systemd...
The reliance on systemd is an issue on its own. Much has been said about its intrusion in all aspects of Linux, and I still prefer using distros without it. How can I use this on, say, Void Linux? Standalone Podman does work there, but I'm not familiar if there were some hacks needed to make it work with runit, and if more would be needed for this quadlet feature.
I mean the best part about open source and Linux is that you have choice. Do you want to run an OS devoid of SystemD? Fine. Will you be going against the tide and leaving a large part of the ecosystem behind? Yup.
I've chosen to embrace systemd and learn it, as it seems to be the de facto standard, rather than fight what I think is a futile war against it. That being said, I won't force you to use it if you don't want to. But I do not see quadlets using systemd as a failing.
This is such a bizarre comment. I run systemd on a number of Linux machines currently, but does that mean they are failing? Is taking advantage of systemd's features a failure? They run and do their function, so in what sense are they failing, and what does failure mean?
By my calculations, considering much of the world runs on RH/Ubuntu/Debian, all of which use systemd, things depending on systemd are far from being a failure, cos they'll run on the majority of systems.
"quadlets" are podman using systemd's extension mechanism [systemd.generator(7)] to create systemd services that invoke podman to run containers, based on the files you drop into /etc/containers/systemd.
Yes! Can we talk about projects consistently deciding to invent new nomenclature for their entities or products. Is it an effort to lock you in to their system?
It is not entirely obvious - can I use quadlets in an ad-hoc fashion to spin up a project? The example makes it seem like it is exclusively for long running services.
I also miss the simplicity of compose, but for me running rootless is worth the tradeoff. It also forced me to rethink how I used containers and I realized many times a simple podman_run.sh is enough. It also helped me understand containers better because there was less magic going on, sometimes limitations can be good.
> I also miss the simplicity of compose, but for me running rootless is worth the tradeoff.
To setup/tear down software dev environments deployed locally, the root/rootless discussion isn't really relevant. Ease of deployment and ease of use are critical though, and Docker is above all a development experience victory.
Relevant to who? The damage done with container escape is bigger on my machine than any production server I have access to. And there are a lot more packages running in my dev environment than on production servers. When it comes to security or convenience I always choose the first, but I know most people won't.
Podman-compose isn't as good as docker-compose, but it exists, and you can run docker-compose with podman as the runtime. I don't need it as my environments are simple and even wrapping the commands in shell scripts would be overkill. But the option is there.
> The damage done with container escape is bigger on my machine than any production server I have access to.
It's worth pointing out that if you're running on Fedora/RHEL then containers are confined to the container_t domain, with a unique per-container MCS label. SELinux policy will prevent a process that has broken out of its namespaces from being able to read/write files from the host or from other containers, or being able to kill or read the memory of or (I'm assuming, haven't checked) ptrace processes from the host or other containers.
> Relevant to who? The damage done with container escape is bigger on my machine than any production server I have access to.
If you are concerned that your dev machine is vulnerable but for any reason you decided to not do anything about it, then you might be happy to learn that it's possible to configure Docker to not run as root.
> If you are concerned that your dev machine is vulnerable but for any reason you decided to not do anything about it
Why would you assume I'm not doing anything about it? Podman is one piece in my hygiene, not allowing npm scripts is another. It does make some things harder and most devs I work with don't even know it's possible and should be done. Assuming you aren't vulnerable and waiting for a problem to appear before solving it is doing it backwards if you ask me. Your kind of self-confidence is what usually gets people.
I could also point docker-compose to the podman socket (it's the default for the podman compose command), if that was something I needed. Pods do it for me these days, which was my initial point. Even though compose is cool it's not really needed and wouldn't add that much for me these days. I've been using podman so long that I don't see the point in going back to docker, changing the default when what I'm using was built to fix that issue to begin with.
What point are you trying to make? I can live without docker.
FWIW podman 4's netavark has solved most of the pain points I encountered with podman's rootless networking. Containers can actually find each other now.
While you're technically correct, docker compose uses yet another process supervisor (the docker daemon) while systemd is already capable of doing that. This is probably what the author meant by 'external dependency' - not the need to install compose separately. Quadlet delegates the supervision to systemd daemon, eliminating this duplication of supervisor functionality.
Kubelets in K8s also have similar duplication of supervisor functionality. Perhaps this is a good place to mention the Aurae runtime [1], which is designed to replace systemd, docker daemon, kubelet, etc on dedicated worker nodes. Sadly, its chief designer Kris Nova passed away recently in an accident. I wish the rest of the team the strength to carry her legacy forward.
Not quite, docker-compose is shipped separately by Docker as docker-compose-plugin. The plugin can then be reached with the subcommand "docker compose".
However, I agree with your sentiment. It's basically a part of any modern Docker installation now. Calling it an external dependency "like watchtower" is not a fair comparison.
Yeah … this response is devoid of substance. What’s all this nonsense about “soy”? Why not critique the approach with what is better about your approach or docker-compose or whatever it is you use and talk about the merits.
It probably seems like I wrote the above as a one-off shit post to bag on OP for the lulz. But I did have a solid engineering basis for writing it.
The tools being better or worse isn't the issue I was getting at. Quadlets may well be an improvement on several fronts. But every tool / dependency / service / etc increases the cognitive burden of a project. That is one more component to have to learn for people who interact with it. One more expansion of the overall system to keep in mind (if such a feat is still possible.) One more increase in complexity - and that's quite a bad thing because complexity is harder to maintain.
When you always chase the latest and greatest things. You may end up with a collection of shiny tools that specialize in everything just the way you want it. But push it too far and you end up with designs that are unrecognizable to anyone with the skills you need. OSes, package managers, scripting languages, DBs, dev-ops, and cloud infrastructure approaches that no one recognizes. And when people want to be productive they're not going to be impressed by the dazzling number of obscure technologies being used. It will seem more like a red flag than anything.
There's a nice contrast you can make between web technology and TempleOS. Terry Davis built this operating system called TempleOS, and part of his goals for building the OS was to keep it within a certain number of lines - let's say 25k. That means the entire kernel, file system drivers, graphics, editor, terminal, and compiler -- all have to fit within 25k. In other words, every line had to count. Terry Davis hated bloated software and wanted to write an OS that had low resource usage. So TempleOS runs in 32 bits -- specifically using 32-bit register operations because they're faster. He built an entire language from scratch and all the tooling to run it. TempleOS can build its entire kernel inside itself using his tooling, and it does so within seconds. His tools don't even have a linker.
Now Terry knew something many programmers today don't: when you keep things simple, include only what you need, and design in the simplest way possible, you can actually achieve better results. Terry's code is fast, it builds instantly, it has a low memory footprint, and it's easy to maintain. Compare that to the web today: a multitude of ways to build, pack, combine, distribute, minimize, respond, push, pull... The web was never meant to be rocket science, but somehow trying to keep up with modern web development feels like getting teeth pulled. If Terry had built the web, it would look shitty like Windows 95, but pages would load instantly, use almost no data, wouldn't need 8 GB of RAM to run chrome, and would probably encourage regular users to write code.
What? If you use containers this simplifies the stack rather than complicating it. Instead of kubernetes and other popular tools it's just rootless podman and systemd which is already there.
I'm using a similar setup and this excites me. It's too new to be in my repository but as soon as I get an extra hour I'll compile it and take it for a spin.
Depends what you mean by simplifying. Does it have registries and the ecosystem of OCI containers? The killer feature of podman for me is that it works with docker which like or not most people are using.
Feel like I've fallen out the back of the wardrobe into Narnia: the way we want to run containers on a machine is part of systemd? A thing that nobody understands, that isn't present on most machines.
It's not present on most machines? All Linux boxes I've seen used systemd. It's the most used init system.
If you want to say that there are machines that don't have systemd which therefore cannot use quadlets, that's like arguing that something made for Linux is useless because "Linux is not present on most machines".
[1]: https://www.redhat.com/sysadmin/quadlet-podman