Hacker News

I do the same as you using Caddy.

To avoid downtime, try using:

    health_uri /health
    lb_try_duration 30s
Full example:

    api.xxx.se {
      encode gzip
      reverse_proxy api:8089 {
        health_uri /health
        lb_try_duration 30s
      }
    }
This way, Caddy will hold the request and give your new service up to 30 seconds to come online while you're deploying a new version.

Ideally, during deployment the new version should come up and pass health checks before Caddy starts routing to it (and the old container is killed). I've looked at https://github.com/Wowu/docker-rollout and https://github.com/lucaslorentz/caddy-docker-proxy but haven't had time to prioritize it yet.



That's neat, I wonder if there's a way to do that with nginx?

edit: closest I found is this manual way, using Lua: https://serverfault.com/questions/259665/nginx-proxy-retry-w...
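From what I can tell, stock nginx can't wait for an upstream to come up the way Caddy's lb_try_duration does, but it can retry a second instance while the first is restarting. A rough sketch, assuming two app instances you alternate between during deploys (the ports here are made up):

```nginx
upstream api {
    # Blue and green instances; during a deploy one of them is down.
    server 127.0.0.1:8089 max_fails=0;
    server 127.0.0.1:8090 max_fails=0;
}

server {
    listen 80;
    server_name api.xxx.se;

    location / {
        proxy_pass http://api;
        # If one instance refuses connections mid-deploy, retry the other.
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 2;
    }
}
```

Note this retries immediately rather than buffering the request while a single instance restarts, so it only avoids downtime if the two instances are deployed one at a time.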


If I understand you correctly, you do a sort of blue-green deploy? Load balancing between two versions while deploying, but only one most of the time?

How do you orchestrate the spinning up and down? Just a script to start service B, wait until service B is healthy, wait 10 seconds, stop service A, and caddy just smooths out the deployment?
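Something like this, I imagine — a rough, untested sketch where the service names, port, and health path are all made up:

```shell
#!/bin/sh
# Hypothetical blue-green rollout: start the new container, wait for
# its health endpoint, give in-flight requests a grace period, then
# stop the old container. Caddy's lb_try_duration smooths the gap.
set -eu

# Retry a command once per second until it succeeds or tries run out.
wait_for() {
  _cmd=$1
  _tries=$2
  _i=0
  while [ "$_i" -lt "$_tries" ]; do
    if sh -c "$_cmd"; then
      return 0
    fi
    _i=$((_i + 1))
    sleep 1
  done
  return 1
}

deploy() {
  docker compose up -d api_green
  wait_for "curl -fsS http://localhost:8089/health >/dev/null" 30
  sleep 10                      # grace period for in-flight requests
  docker compose stop api_blue
}

# Only run the rollout when invoked as: ./rollout.sh deploy
if [ "${1:-}" = "deploy" ]; then
  deploy
fi
```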


Thanks for that. I didn't know this was a thing in Caddy. Seems low effort, so I'll probably do that for now. I omitted it, but I'm actually using caddy-docker-proxy. It's awesome; it nicely makes the Caddy config part of each project. I hadn't seen docker-rollout though. Seems like it could be promising.
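For context, with caddy-docker-proxy the Caddyfile above becomes labels on the service in its compose file — a sketch from memory of the project's README (double-check the exact keys; the image name is made up):

```yaml
services:
  api:
    image: my-api:latest          # hypothetical image name
    labels:
      caddy: api.xxx.se
      caddy.encode: gzip
      caddy.reverse_proxy: "{{upstreams 8089}}"
      caddy.reverse_proxy.health_uri: /health
      caddy.reverse_proxy.lb_try_duration: 30s
```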



