
Or we could just ditch Docker for one of the alternatives, like Podman, which needs neither root nor a daemon.


Comparing the shortcomings of rootless podman (https://github.com/containers/libpod/blob/master/rootless.md) and rootless docker, they seem almost the same. So this argument may not count; the daemon argument, however, does apply.


I wonder if we will ever get rid of the ludicrous limitation of the privileged ports. It's a mechanism that only provided some sense of security in the 80s.

The W3C[1] says "if you connect to a service on one of these ports you can be fairly sure that you have the real thing, and not a fake which some hacker has put up for you." Well, in 2019 computers aren't mainframes run by institutions, and hackers can be root on their own systems and run whatever they want on port 22.

It's such an inconvenience that I'm sure it has caused countless services to be run as root unnecessarily.

[1]: https://www.w3.org/Daemon/User/Installation/PrivilegedPorts....


Looks like starting with Linux 4.11 you can:

sysctl net.ipv4.ip_unprivileged_port_start=443

( https://stackoverflow.com/questions/413807/is-there-a-way-fo... )
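A sketch of how that sysctl would typically be applied and persisted, assuming a Linux 4.11+ host (the drop-in file is written to the current directory here for illustration; a real install would put it in /etc/sysctl.d/ and run `sysctl --system`):

```shell
# Allow unprivileged processes to bind ports >= 443 (Linux 4.11+).
# Needs root, so the live change is guarded; run as a normal user
# this sketch only writes out the persistent config.
if [ "$(id -u)" -eq 0 ]; then
    sysctl -w net.ipv4.ip_unprivileged_port_start=443 || echo "sysctl not permitted here"
fi

# Persist across reboots via a sysctl.d drop-in (written locally here;
# a real install would use /etc/sysctl.d/90-unprivileged-ports.conf).
conf_file="90-unprivileged-ports.conf"
echo "net.ipv4.ip_unprivileged_port_start=443" > "$conf_file"
cat "$conf_file"
```

A narrower alternative for a single binary is `setcap cap_net_bind_service=+ep /path/to/binary`, which grants only that executable the right to bind low ports.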


Yep. I only half care about rootless. I definitely care about the daemon.

It sucks.

It flies in the face of traditional Linux process management where child processes are child processes.

(Unless you want an init system, where you need a daemon. But docker is a sucky init system.)

Docker breaks even the most basic things.

    $ time docker run some heavy computation
Oh wait, that doesn't work.
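The reason it doesn't work: the container's main process is a child of the daemon's shim, not of your shell, so `time`, `wait`, and signal delivery from your shell don't reach it. A quick sketch contrasting a traditional child process (the docker half is guarded and only runs where a working daemon is present; the image name is just an example):

```shell
# A traditional fork-exec child: its parent PID is this shell.
sleep 1 &
child_ppid=$(ps -o ppid= -p $! | tr -d ' ')
[ "$child_ppid" = "$$" ] && echo "sleep is a child of this shell"

# A docker container's main process, by contrast, is parented by
# containerd-shim under the daemon, so your shell's job control and
# accounting don't apply to it.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run --rm alpine echo "hello from a daemon-parented process" || true
fi
wait
```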


For the trivial case a child process would work. But ultimately docker does try to be closer to an init system, or maybe screen, since you can detach from and attach to processes. Since reparenting to arbitrary processes is not possible in Linux, it's also not possible to retain the parent-child relationship for spawned containers.

If you want the fork-exec model then docker is indeed the wrong tool for the job.


The thing is, I don't even really like Docker as an init daemon. I have my gripes about systemd, but I see no downside to not having a long-running daemon for a container engine. Really, whether you need root or not isn't even the most important issue; you can do sudo or suid or whatever with any container engine; Docker just makes it an implicit, unintuitive behavior.

I used to use systemd+rkt for simple container setups when I didn't need all of Kubernetes. Never noticed any downsides versus using Docker, but on the flip side, I had far fewer issues with my containers not properly starting at boot.
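For context, a systemd+rkt setup typically meant a plain unit file per container, with systemd supervising the pod process directly thanks to rkt's fork-exec model. A hypothetical sketch (image name and options are placeholders, not from this thread):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My app, run as a rkt pod supervised directly by systemd
After=network-online.target

[Service]
ExecStart=/usr/bin/rkt run --insecure-options=image docker://nginx:alpine
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target
```

Because the pod is a direct child of systemd, `systemctl status`, restarts, and boot ordering all behave exactly like any other service, with no intermediate daemon.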


I read an article (can't find it now) saying that, based on a previous project, the Docker authors concluded they wanted a daemon so they wouldn't have to deal with things like file locks around image management.

Don't know if that accounts for the whole reason or not.


Sounds like it might've been this article?

https://jpetazzo.github.io/2017/02/24/from-dotcloud-to-docke...?


Oh yeah, I read that. It was about the original authors' experiences at dotCloud trying to do it with multiple processes, IIRC. That said, it seems the problem may be somewhat solved at this point.


The reason boiled down to "it's easier to write a daemon to handle it all". But it's definitely not better than the alternative, nor is the alternative impossible.


In that case you can use LXC or even runc directly. For a long time I wanted to decouple the systemd dependency from rkt, because they had the perfect model for it. Unfortunately, we've all migrated to arguing about containerd vs. cri-o/podman.


I don't like their idea of what a docker-compose replacement should be. And reading issues and limitations about podman pod commands is very discouraging. I would love to hear what others are using and their experiences though. I avoid anything Kubernetes because of a personal bias.


Would you be willing to elaborate on the reason why you avoid kubernetes?


I'm not the OP, but...

Here's the scene: Most of the web projects I work on will never have a billion users. They might have 5, or 10. One or two have thousands. Several of them have 1 (me).

Docker-compose works for me. I set up a container for my backend, a container for whatever's serving the static resources for the frontend, and a container for whatever databases are needed (Postgres, Redis, whatever). The databases get a filesystem volume mount that I can snapshot off the disk with a nightly cron job.
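That setup might look something like this as a compose file; this is a hypothetical sketch, with service names, images, and paths as placeholders:

```yaml
# docker-compose.yml -- hypothetical sketch of the stack described above
version: "3"
services:
  backend:
    image: myuser/myapp-backend:latest
    restart: unless-stopped
  frontend:
    image: myuser/myapp-frontend:latest
    restart: unless-stopped
  db:
    image: postgres:11
    restart: unless-stopped
    volumes:
      # Host directory that the nightly cron job snapshots.
      - ./data/postgres:/var/lib/postgresql/data
```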

I have a script that will transform a brand shiny new $5/mo DigitalOcean Ubuntu image into a machine with nginx+LetsEncrypt for SSL termination, and with Docker and docker-compose installed (and the Docker port firewalled off, natch). From there, I run "docker-compose up -d" and my project fires up and goes. Maybe I have to edit a line or two in the nginx.conf that my script put in place.
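A sketch of what such a provisioning script could contain, assuming an Ubuntu host; package names are real Ubuntu packages, everything else is a placeholder. The script is written to disk and syntax-checked here rather than executed, since it needs root on a fresh box:

```shell
# provision.sh -- hypothetical one-shot setup script of the kind
# described above. Generated and parse-checked, not run.
cat > provision.sh <<'EOF'
#!/bin/bash
set -euo pipefail

# Docker, docker-compose, nginx, and certbot for Let's Encrypt.
apt-get update
apt-get install -y docker.io docker-compose nginx certbot python3-certbot-nginx

# Firewall: allow SSH and HTTP(S) only; the Docker API port stays closed.
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw --force enable

systemctl enable --now docker
EOF
chmod +x provision.sh
bash -n provision.sh && echo "provision.sh parses OK"
```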

To deploy, I do a local build on my laptop (or Jenkins for a few projects where it makes sense) via a script that pushes the built containers to Docker Hub, and runs docker-compose pull on the host.

This has served me beautifully.
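The deploy flow above can be sketched as a small script; image name and host are placeholders, and a DRY_RUN guard (on by default) prints the commands instead of running them:

```shell
# Hypothetical sketch of the build -> push -> pull deploy flow.
IMAGE="myuser/myapp-backend:latest"
HOST="deploy@example.com"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run docker build -t "$IMAGE" .
run docker push "$IMAGE"
run ssh "$HOST" "docker-compose pull && docker-compose up -d"
```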

I've looked at Kube more than once. It looks cool for things dramatically bigger than what I'm working on. For something that isn't massive scale, it's bloody complicated. If one of these projects ever gets to the point where a $40/mo DigitalOcean box can't handle the load, I'll probably look at it again. Until then, though, it feels like a very expensive (time-wise) premature optimization.


I like the sound of how you have that set up, do you know any good open source repositories that are designed the way you describe that I could look at for learning purposes?

(I mean, projects that set up containers for backend, database, and front-end servers and push them to digitalocean etc.. I can imagine how each piece works, but I'd love to see how a coherent and manageable project in that style is organized as a whole.)


Hmmmm... I can't say I've ever looked a whole lot. Based on the replies in this thread though, I should probably just take the scripts I've got, make sure there's nothing sensitive in there, and throw them up on Github. Maybe I'll strip my SSH pubkey out of them too, so that we don't end up with a bunch of servers that I can log into :D


That would be cool! (And that's a yes for stripping your SSH key ;)


As a tangent... super curious about your username. I've been in the local Radarsat ground terminal and worked on some barely-related projects...


Haha awesome, it's not often people comment on it. A previous employer did some work on Radarsat-1, so the name was floating around in my circles a couple of decades ago when I was starting to make music and post on forums. I just started using it without much thought and it stuck.


Seconded, I work on a lot of little toy projects, and host them all on a super cheap VPS. The main difference is that I have a docker-compose project consisting of jwilder/nginx-proxy, a BIND server, and a log-processing server. Each new project just has to join that compose project's virtual network, and then it's all handled.
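For anyone unfamiliar with that pattern: jwilder/nginx-proxy routes requests based on each container's VIRTUAL_HOST environment variable, and each project's compose file joins the proxy's network declared as external. A hypothetical per-project fragment (names and domain are placeholders):

```yaml
# Per-project docker-compose.yml fragment -- hypothetical names
version: "3.5"
services:
  web:
    image: myuser/toy-project:latest
    environment:
      # nginx-proxy picks this up and routes the hostname here.
      - VIRTUAL_HOST=toy.example.com
    networks:
      - proxy
networks:
  proxy:
    external: true
    name: nginx-proxy
```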


Out of curiosity, have you ever used Traefik? At work we're using Docker Swarm (because we needed more power than a single server could give us but Kubernetes seemed excessive) with Traefik and it works beautifully.


Hey Unicornfinder. Please join us at the community forum if you haven't already. And, thanks for the mention! We hope to get to know you more, on the forum. https://community.containo.us


Like I said, it's a personal bias, mainly about Google. The only time I had to use it was with RH's cloud, and if it wasn't for their good documentation I would have dropped the client. Every time I looked under the hood it reminded me why I hate being a developer around 35% of the time.


That’s a rather bizarre reason to avoid a pretty solid system.


But it's also a pretty solid testament to Red Hat's documentation, which I'll echo myself. My OpenShift experience is limited to 2017, but I've never heard anything but positive things about OpenShift's documentation, and I hear it has moved a lot closer to mainline Kubernetes since.


As a huge proponent of kubernetes for the enterprise world (a big part of my day job), I won’t touch it for small to medium sized projects.

The cost of running a managed kubernetes is too expensive to justify for the benefits in these cases. And if you choose to self-manage to cut the money cost, it ends up being significantly more expensive from a time and sanity perspective.


What are the costs of managed Kubernetes? I'm not sure I follow, because most managed Kubernetes offerings I know of are free or only nominally more expensive in terms of resource cost. With GKE or AKS you don't pay for the control-plane nodes, so compared to running your own cluster with kubespray (or another method that leaves managing the control plane up to you), managed Kubernetes is actually cheaper if you're building for high availability.

Are you referring to the cost of migration (since most Kubernetes adopters are probably learning K8s for the first time as well)?


What are my options to replace Docker Compose? I don't want to introduce a chaotic mess by using Kubernetes, or to dedicate brain power to learning what they changed every week. Their readme really confuses me with podman play, kompose, k8s.


There is an implementation of docker compose for podman in development: https://github.com/muayyad-alsadi/podman-compose


Except that then you lose macOS and Windows compatibility, which is somewhat important to Docker.


Now that systemd-nspawn also has oci support, I wonder if podman/cri-o are going to switch to nspawn rather than runc.
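For reference, the OCI support here is nspawn's `--oci-bundle=` option (added around systemd 242), which boots a container from an OCI runtime bundle (a directory with a config.json and rootfs). A guarded sketch; the bundle path is a placeholder and the real invocation needs root:

```shell
# Hypothetical: boot an OCI runtime bundle with systemd-nspawn.
bundle="/var/lib/machines/mybundle"   # dir containing config.json + rootfs
cmd="systemd-nspawn --oci-bundle=$bundle"

if command -v systemd-nspawn >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ] && [ -d "$bundle" ]; then
    $cmd
else
    echo "would run: $cmd"
fi
```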



