
> However, we are reaching the end of life for the libraries and code that support HTTP/1.1

What libraries are ending support for HTTP/1.1? That seems like an extremely bad move and somewhat contrived.



HTTP versions below 2 have serious, unresolvable security issues related to HTTP request/response smuggling and stream desynchronization.

https://http1mustdie.com/


I have an alternative...

Rather than throwing HTTP/1.1 into the garbage can, why don't we throw Postel's Law [0] into the garbage where it belongs.

Every method of performing request smuggling relies on making an HTTP request that violates the spec. A request that sends both Content-Length and Transfer-Encoding is invalid. Sending two Content-Length headers is invalid. Two Transfer-Encoding headers are allowed -- they should be treated as a comma-separated list -- so allow them and treat them as such, or canonicalize them into a single header if you're forwarding the request downstream.

But for fuck's sake, there's literally no reason to accept requests that use the malformed framing that smuggling relies on. Return a 400 Bad Request and move on. No legit client sends these invalid requests unless it has a bug, and it's not your job as a server to work around that bug.
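Rejecting this takes only a handful of lines at the server or proxy edge. A rough sketch in Python (the header-list shape and the function name are made up for illustration, not any particular framework's API):

    # Reject requests with ambiguous framing instead of guessing what was meant.
    def validate_framing(headers: list[tuple[str, str]]) -> str | None:
        """Return a reason string if the request framing is ambiguous, else None."""
        cl_values = [v.strip() for n, v in headers if n.lower() == "content-length"]
        te_present = any(n.lower() == "transfer-encoding" for n, _ in headers)

        # Both Transfer-Encoding and Content-Length present: the classic smuggling setup.
        if te_present and cl_values:
            return "both Transfer-Encoding and Content-Length"

        # More than one Content-Length header is invalid, full stop.
        if len(cl_values) > 1:
            return "multiple Content-Length headers"

        return None

    # Usage: if validate_framing(request_headers) returns a reason,
    # respond 400 Bad Request and close the connection.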

[0] Aka, The Robustness Principle, "Be conservative in what you send, liberal in what you accept."


If you're using a reverse proxy, maybe. I don't think it's sufficient to kill a whole version of HTTP because of that.


There is an argument HTTP/2 was created by CDNs for CDNs (reverse proxies)


For sure it was not created for things like the Web of Things. Around 2015 I had so much hope for it being usable on embedded devices (like using header compression with preshared dictionaries), but at least at the time the complexity of HTTP/2 was overwhelming, while the actual improvements were underwhelming.


I wonder too: for a DNS query, do you ever need keep-alive or chunked encoding? HTTP/1.0 seems appropriate, and HTTP/2 seems like overkill.


DNS seems like exactly the scenario where you would want HTTP/2 (or HTTP/1.1 pipelining, but nobody supports that). You need to make a bunch of DNS requests at once and don't want to wait a round trip before making the next one.


OK, multiple requests make sense for keep-alive (or just support a "batch" query; it's HTTP already, why adhere so tightly to the UDP protocol?).

HTTP/1.0 with keep-alive is common (Amazon S3, for example) and is a perfectly suitable, simple protocol for this.


Keepalive is not really what you want here.

For this use case you want to be able to send off multiple requests before receiving their responses (you want to prevent head-of-line blocking).

If anything, keep-alive is probably counterproductive. If that is your only option, it's better to just make separate connections.
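A rough sketch of the separate-connections approach (Python; Google's public JSON resolver endpoint is used purely as an example, not an endorsement):

    # Fire several DoH lookups concurrently over separate connections
    # instead of serializing them on one keep-alive connection.
    import asyncio, json, urllib.request

    def lookup(name: str) -> list[str]:
        url = f"https://dns.google/resolve?name={name}&type=A"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return [a["data"] for a in data.get("Answer", [])]

    async def main(names):
        # Each lookup runs in its own thread/connection; no head-of-line blocking.
        results = await asyncio.gather(*(asyncio.to_thread(lookup, n) for n in names))
        for name, addrs in zip(names, results):
            print(name, addrs)

    asyncio.run(main(["example.com", "example.org", "example.net"]))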


Makes sense, but I would still prefer to solve that problem with "batch" semantics at a higher level rather than depend on the wire protocol to bend over backwards.


The problem with batch semantics is that you have to know everything up front. You can't just do one request and then another 20 ms later.

For DNS this might come up during document parsing. E.g. in HTML: first you see a <script> tag, fire off the DNS request for its domain, and go back to parsing. Before you get the DNS result, you see an <img> tag for a different domain and want to fire off a DNS request for that one too. With a batch method you would have to wait until you have all the domain names before sending the request (this matters even more if you are receiving the file you are parsing over the network and don't know whether the packet containing the next part of the file is 1 ms away or 2000 ms away).
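A toy sketch of that timing argument (Python asyncio; the hostnames, delays, and resolve() stub are made up): each lookup fires the moment its name is discovered, which a batch API can't do without waiting for the slowest discovery.

    import asyncio

    async def discover_names():
        # Pretend the HTML arrives slowly; each yield is a newly seen hostname.
        for name, delay in [("cdn.example", 0.0), ("img.example", 0.02), ("ads.example", 2.0)]:
            await asyncio.sleep(delay)
            yield name

    async def resolve(name: str) -> str:
        await asyncio.sleep(0.05)          # stand-in for a real DoH round trip
        return f"{name} -> 192.0.2.1"

    async def main():
        tasks = []
        async for name in discover_names():
            tasks.append(asyncio.create_task(resolve(name)))   # fire immediately
        for result in await asyncio.gather(*tasks):
            print(result)

    asyncio.run(main())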


Clearly DNS requests ought to be batched in this scenario, but we can imagine a smarter mechanism than HTTP/2 multiplexing to do it.

The problem with relying on the wire protocol to streamline requests that should have been batched is that it lacks the context to do it well.



