This is great. We use haproxy at work and I like it; it does its job. But quirks like DNS resolution only at startup, having to reload on config changes, and no seamless reloading stop me from loving it.
It still requires explicit action. However, the old way had a little dance between the old process and the new process: the new process tells the old process to start shutting down, the old process stops listening for new connections, then the new process starts listening for new connections. That left a gap where connections got rejected.
The new technique is for the old process to use a Unix socket to seamlessly transfer ownership of the listening sockets to the new process. At no point are the listening sockets closed, so no connections are rejected.
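For anyone curious what that transfer looks like at the OS level, here's a toy sketch in Go (not HAProxy's actual code, which is C) of handing a listening socket to another process over a Unix socket with SCM_RIGHTS. The in-process socketpair just stands in for the old and new process, and the names are mine:

    // Toy sketch of the fd handoff: pass a listening socket over a Unix
    // socket with SCM_RIGHTS so the socket itself is never closed.
    // Unix-only; the socketpair stands in for the old/new process pair.
    package main

    import (
        "fmt"
        "log"
        "net"
        "os"
        "syscall"
    )

    // Old-process side: ship the listener's fd as ancillary data.
    func sendListener(conn *net.UnixConn, ln *net.TCPListener) error {
        f, err := ln.File() // dup of the listening fd
        if err != nil {
            return err
        }
        defer f.Close()
        rights := syscall.UnixRights(int(f.Fd()))
        _, _, err = conn.WriteMsgUnix([]byte("fd"), rights, nil)
        return err
    }

    // New-process side: pull the fd out and rebuild a net.Listener around it.
    func recvListener(conn *net.UnixConn) (net.Listener, error) {
        buf := make([]byte, 16)
        oob := make([]byte, syscall.CmsgSpace(4)) // room for one fd
        _, oobn, _, _, err := conn.ReadMsgUnix(buf, oob)
        if err != nil {
            return nil, err
        }
        msgs, err := syscall.ParseSocketControlMessage(oob[:oobn])
        if err != nil {
            return nil, err
        }
        fds, err := syscall.ParseUnixRights(&msgs[0])
        if err != nil {
            return nil, err
        }
        f := os.NewFile(uintptr(fds[0]), "inherited-listener")
        defer f.Close() // FileListener dups it again, so this is safe
        return net.FileListener(f)
    }

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        pair, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
        if err != nil {
            log.Fatal(err)
        }
        oldEnd, err := net.FileConn(os.NewFile(uintptr(pair[0]), "old"))
        if err != nil {
            log.Fatal(err)
        }
        newEnd, err := net.FileConn(os.NewFile(uintptr(pair[1]), "new"))
        if err != nil {
            log.Fatal(err)
        }
        go sendListener(oldEnd.(*net.UnixConn), ln.(*net.TCPListener))
        inherited, err := recvListener(newEnd.(*net.UnixConn))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("same socket, now owned by the new side:", inherited.Addr())
    }

The kernel duplicates the fd into the receiver, so the accept queue is preserved and there is no window where nothing is listening.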
It's still a (potentially) new haproxy binary starting up and parsing the (potentially) changed haproxy config because the user requested a graceful restart.
The new process starts listening for connections before the old process stops listening. The problem is that the old process can still have new connections queued up in its accept queue, and those are lost when its sockets are closed.
I, too, am wondering about that. The only alternative I can see to reloading is doing it automatically on every file change, which means everything would break if I saved the config before it was ready. I am perplexed.
It certainly does not automatically reload on configuration file change.
This simply means you can have hitless reloads: change your configuration, reload HAProxy, and you drop zero incoming connections during the reload. Other methods previously existed to do this without having to first drain traffic, but they were both unwieldy and still tended to have a performance impact.
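In case it saves anyone a trip to the docs, the 1.8 way as I understand it is to expose the listening FDs on the stats socket and point the reloading process at it with -x. The paths below are just illustrative:

    # haproxy.cfg: let a reloading process fetch the listening sockets
    global
        stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners

    # reload: the new process grabs the FDs over the socket, then asks the
    # old processes (-sf) to finish their connections and exit
    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
            -x /var/run/haproxy.sock -sf $(cat /var/run/haproxy.pid)

I believe master-worker mode and the stock systemd unit wire this up for you, but it boils down to the same handoff.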