Strong disagree. Requiring TLS has made the web less private, not more, because now every request has to hit origin servers and can't be proxied at network edges like I used to do with Squid. Every single connection now gives away metadata about my browsing habits. It doesn't matter that the contents of those connections are encrypted: https://kieranhealy.org/blog/archives/2013/06/09/using-metad...

I run TLS on all of my sites, but it's one more thing making hosting harder for an average person and one more thing driving people into walled gardens. Even "professional" sites now fall apart if not maintained, as evidenced by the submission we're commenting on. Even automatic LetsEncrypt is still a moving part that can fail in <x> ways that plain-files-in-a-directory-behind-httpd never could. Snowden showed us that the Internet is a giant spy machine, and our response was to get Google Analytics running over TLS and to make sure every independent site now comes with a built-in expiration date. Cosmic irony.


> now every request has to hit origin servers and can't be proxied at network edges like I used to do with Squid

And presumably you ran your own DNS servers as well? And ensured your proxy wasn't in turn being monitored?

And what's stopping you from middleboxing your own traffic and re-encrypting it with a root certificate? Your technically complex privacy solution is still technically possible.

> making hosting harder for an average person

The average person isn't running Squid. And the average person uses a service to host their website, which will take care of the certificate for them.

> Even automatic LetsEncrypt is still a moving part that can fail in <x> ways that plain-files-in-a-directory-behind-httpd never could.

In all my life, I've never had downtime due to an automatic certificate renewal failing. And I get an email when my certificates are almost ready for renewal. But I have had downtime because the httpd logs filled up the disk. And because my box got popped while I was running an insecure, outdated kernel. And once because I was FTPing plain-files-in-a-directory at the same time as someone else was FTPing those same plain-files-in-a-directory.

The web has never been some impervious, concrete thing that HTTPS made complicated and brittle. We live in an era where HTTPS is fewer than five commands that you run a single time. And if you truly need anonymity, to the point where you don't want to reveal the origins you're connecting to, use Tor.
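
For a typical setup it really is that short. A rough sketch, assuming a Debian-ish box running nginx with certbot from the distro packages (domain names are placeholders):

    # install the client and its nginx plugin
    sudo apt install certbot python3-certbot-nginx
    # obtain a certificate and wire it into the nginx config
    sudo certbot --nginx -d example.com -d www.example.com
    # the package ships a systemd timer / cron job for renewal;
    # this just confirms a renewal would succeed
    sudo certbot renew --dry-run

After that, renewal is supposed to be hands-off.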


> And presumably you ran your own DNS servers as well? And ensured your proxy wasn't in turn being monitored?

I'm not defending anything else in the GP's comment (I think they're 100% wrong), but they meant running Squid as a transparent caching proxy. In the dial-up days I ran Squid as a transparent proxy on a Linux box that shared the dial-up connection for the network. You'd set Squid up with a large cache and long expiries. That way all the styles, scripts, and images for pages would get cached on the proxy. Subsequent page loads, even days later, would be super fast since many of the resources would be served from Squid's cache.
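
From memory, the interesting parts of squid.conf looked something like this (directive names have shifted across Squid versions, so treat it as a sketch):

    # large on-disk cache
    cache_dir ufs /var/spool/squid 4096 16 256
    maximum_object_size 50 MB

    # hold on to static assets well past their nominal expiry
    refresh_pattern -i \.(gif|png|jpe?g|css|js)$ 10080 90% 43200 override-expire

    # listen as an intercepting ("transparent") proxy;
    # the router/iptables redirects port 80 traffic here
    http_port 3128 intercept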

I even had some scripts that I'd run every week in the middle of the night (modem speaker disabled, obviously) to load the homepages of a bunch of sites I visited regularly and freshen the cache. Unlike a browser cache, the Squid cache worked across machines, and it was much easier to set long expiries and cache way more content.
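
The "scripts" were nothing clever, basically a weekly cron job running something along these lines (hostnames here are just placeholders):

    #!/bin/sh
    # warm-cache.sh -- fetch each homepage plus its images/CSS/JS through
    # Squid, then discard the local copies; the point is only to populate
    # the proxy cache
    export http_proxy=http://127.0.0.1:3128
    for site in http://example.com/ http://example.org/; do
        wget -q -p -r -l 1 --delete-after "$site"
    done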

It improved browsing significantly, especially since even with a 56k modem I rarely got better than a 28.8k connection. While this kind of thing is no longer necessary for me personally, it's not the worst thing to have if you've got a slow or unreliable connection. With TLS everywhere, setting up such a transparent caching proxy is much more difficult, since you need a literal MITM proxy to make it work right.


I miss the days of browsing offline. That stuff is completely broken these days, right?


Unless you want to install a cert on every device you own so you can decode HTTPS in transit, yes.


> In all my life, I've never had downtime due to an automatic certificate renewal failing.

Do you have a WAF? Can I register a throwaway domain, point it at your IP, and start requesting certificates for it, so that Let's Encrypt's validation servers hit your box with challenge requests? All the resulting 404s from LE's validators can wind up getting them blocked by your WAF.

Then when your server wants to renew its certs, it can't because LE got blocked. This exact scenario happened to one of my personal domains a few months ago. I hope you check the email you registered with LE.
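
If you do run a WAF or fail2ban-style blocker in front of a host that renews its own certs, it's worth exempting the ACME challenge path from whatever counts 404s. A sketch for nginx, assuming the blocker works off the access log and your ACME client writes challenge tokens under /var/www/letsencrypt:

    # HTTP-01 challenges land here; keep them out of the 404 tally
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
        access_log off;
        try_files $uri =404;
    }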


It's easier for random individuals to do HTTPS, thanks to software like Caddy, than it is for companies that have to worry about the entire certificate lifecycle.
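
For the individual case it really is close to zero configuration. A complete Caddyfile for a static site is roughly this (domain and path are placeholders; Caddy obtains and renews the certificate on its own):

    example.com {
        root * /var/www/example
        file_server
    }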

You can still MITM yourself and cache data. Literally nothing is stopping you from re-signing your own traffic. Companies do this all the time within their own networks.
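
In Squid terms that's ssl-bump with your own CA. Very roughly, with the caveats that the exact directives differ between Squid versions, the file paths are placeholders, and every client has to trust the CA certificate you generate:

    # re-sign intercepted HTTPS with a locally generated CA
    https_port 3129 intercept ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
    sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
    ssl_bump bump all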


>because now every request has to hit origin servers

That's not even close to true.


Yeah, no. Not all traffic matters.



