Gfycat has been down for two days due to an expired SSL certificate (reddit.com)
96 points by thorum on May 20, 2023 | 36 comments


If you're looking to archive/download/whatever the images on gfycat.com, you can do the following:

0. This process only works in Chrome(-based) browsers

1. Open each of the following domains:

- gfycat.com

- api.gfycat.com

- weblogin.gfycat.com

- thumbs.gfycat.com

2. You will see a scary HTTPS warning. This is fine.

3. Type "thisisunsafe" into the page (no need to select anything or click an input field). This overrides any and all HTTPS certificate errors that aren't technical issues.

4. After doing this for every domain, the HTTPS errors will be ignored for the rest of the session and the site will work again. If anything is still broken, hit F12 for the dev tools and check the network tab to see what domains failed and maybe try the above again.

Two days of downtime starting on a weekday is not a good sign. Back up any images you want to keep!
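If you'd rather script the backup than click around in the browser, the same idea works from the command line by skipping certificate verification. The URL below is only a placeholder for whatever media links you already have, so treat this as a sketch:

    # -k/--insecure skips certificate validation; only reasonable here because the
    # failure is a known expired cert, not a suspected MITM
    curl -kLO "https://thumbs.gfycat.com/SomeGfyName-mobile.mp4"   # placeholder URL

    # wget equivalent
    wget --no-check-certificate "https://thumbs.gfycat.com/SomeGfyName-mobile.mp4"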


Here's where this is defined in the Chromium source code: https://chromium.googlesource.com/chromium/src/+log/refs/hea... It's a way to bypass SSL errors even on sites whose HSTS headers would otherwise make the error non-bypassable.


> Type "thisisunsafe" into the page

great tip


Does anyone know of a similar workaround on Firefox for HSTS?


I'm a little surprised by this. They're using an AWS (ACM) certificate, which means the entire certificate lifecycle should be fully automated[1]. Assuming they use DNS validation, I speculate that somebody deleted the validation CNAME record and then the doom-and-gloom renewal emails went to an unmonitored mailbox. Then they ignored it so long it ended up on HN.

Gfycat publishes an HSTS header, so they're under _hard_ downtime as well: browsers won't even offer the usual click-through past the certificate error.

1: https://docs.aws.amazon.com/acm/latest/userguide/managed-ren...
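If anyone with access to the account wants to check this theory, the renewal state and the expected DNS validation record are visible from the AWS CLI. The ARN and record name below are placeholders, so treat this as a sketch rather than the exact incantation:

    # List ACM certificates in the account/region
    aws acm list-certificates

    # Inspect one: status, renewal summary, and the validation CNAME that ACM expects to find
    aws acm describe-certificate --certificate-arn "arn:aws:acm:us-east-1:123456789012:certificate/PLACEHOLDER"

    # Confirm the validation CNAME is still published in DNS (record name is a placeholder)
    dig +short _exampletoken.gfycat.com CNAME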


Most likely someone stopped paying the AWS bill. If you `curl -k` to get around the certificate/HSTS problem, you can see that CloudFront and/or Lambda aren't working either.
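Concretely, something like this is all it takes to see it (the second URL is just an illustrative endpoint; the browser dev tools show the real ones):

    # -k ignores the expired certificate so you can see what the servers actually return
    curl -kI https://gfycat.com/
    curl -kI https://api.gfycat.com/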


Works just fine for me when I bypass HSTS...


An RFC draft I’ve been working on tries to address this somewhat:

https://datatracker.ietf.org/doc/draft-todo-chariton-dns-acc...



"Gfycat is currently owned by Snapchat." (Wikipedia.)

Edit: That appears to have been incorrectly removed by someone who was probably reading my comment.

News of Snap Inc acquiring Gfycat can be seen here:

https://www.linkedin.com/company/gfycat/about/

> Gfycat (acquired by Snap Inc)

https://finance.yahoo.com/news/meta-fights-overturn-uk-order... (Reuters, April 2022)

> Snap later acquired Gfycat, a competitor to Giphy.

https://www.adweek.com/social-marketing/meta-loses-tribunal-... (October 2022)

> and Snap, which owns Gfycat.


This has been removed. It seems that whoever added it confused Gfycat with Giphy (AFAIK they have no relation).

https://en.m.wikipedia.org/wiki/Special:MobileDiff/115601663...


Incorrectly removed.


Oh you’re right, I missed this line in the cited article:

> Snap later acquired Gfycat, a competitor to Giphy.


It was restored


Huh, I thought it was Facebook's. The source on Wikipedia has a surprisingly helpful title:

> Meta fights to overturn UK order to sell Giphy


I tried disabling certificate validation so I could browse the site again. It looks like there's still text and some generic images there, but I wasn't able to load a couple of user-uploaded content images that I already had the URLs for. The site appears to be hollowed out from the inside, regardless of the cert problems.


From other discussions on Reddit, it seems that the upload function has been broken since March.


Upload worked for me, I think sometime in April, but my GIF never made it into search; I can only direct-link its page. I thought it was some untrusted-new-account issue.


I picked a few images randomly from their API and I can view them just fine.


Do we really need full encryption and signing for cat videos?


Do you want foreign code executing in your browser instead of a cat video? If the answer is no, then you need encryption.

EDIT: Going beyond security, people have forgotten that this crap happened: https://arstechnica.com/tech-policy/2014/09/why-comcasts-jav...


Encryption won't help here. They're just hosting content created by others.



Strong disagree. Requiring TLS has made the web less private, not more, because now every request has to hit origin servers and can't be proxied at network edges like I used to do with Squid. Every single connection now gives away metadata about my browsing habits. It doesn't matter that the contents of those connections are encrypted: https://kieranhealy.org/blog/archives/2013/06/09/using-metad...

I run TLS on all of my sites, but it's one more thing making hosting harder for an average person and one more thing driving people into walled gardens. Even "professional" sites now fall apart if not maintained as evidenced by the submission we're commenting on. Even automatic LetsEncrypt is still a moving part that has the possibility to fail in <x> number of ways that plain-files-in-a-directory-behind-httpd never could. Snowden showed us that the Internet is a giant spy machine, and our response was to get Google Analytics running over TLS and to make sure every independent site now comes with a built-in expiration date. Cosmic irony.


> now every request has to hit origin servers and can't be proxied at network edges like I used to do with Squid

And presumably you ran your own DNS servers as well? And ensured your proxy wasn't in turn being monitored?

And what's stopping you from middleboxing your own traffic and re-encrypting it with a root certificate? Your technically complex privacy solution is still technically possible.

> making hosting harder for an average person

The average person isn't running squid. And the average person uses a service to host their website which will take care of the certificate for them.

> Even automatic LetsEncrypt is still a moving part that has the possibility to fail in <x> number of ways that plain-files-in-a-directory-behind-httpd never could.

In all my life, I've never had downtime due to an automatic certificate renewal failing. And I get an email when my certificates are almost ready for renewal. But I have had downtime because the httpd logs filled up the disk. And because my box had gotten popped because I was on an insecure, outdated kernel. And once because I was FTPing plain-files-in-a-directory at the same time as someone else was FTPing those same plain-files-in-a-directory.

The web never has been some impervious concrete thing that has been made complicated and brittle with HTTPS. We live in an era where HTTPS is fewer than five commands that you run a single time. And if you truly need anonymity to the point where you don't want to reveal the origins you're connecting to, use Tor.
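For what it's worth, "fewer than five commands" is literal for a typical small site. A sketch with certbot on a Debian/Ubuntu box running nginx (package names and the domain are illustrative):

    # Install certbot and its nginx plugin
    sudo apt install certbot python3-certbot-nginx

    # Obtain a certificate, let certbot edit the nginx config, and set up the renewal timer
    sudo certbot --nginx -d example.com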


> And presumably you ran your own DNS servers as well? And ensured your proxy wasn't in turn being monitored?

I'm not defending anything else in the GP's comment (I think they're 100% wrong) but they meant running Squid as a transparent caching proxy. In the dial-up days I ran Squid as a transparent proxy on a Linux box that shared the dial-up connection for the network. You'd set Squid to have a large cache with a long expiry. That way all the styles, scripts, and images for pages would get cached on the proxy. Subsequent page loads, even days later, would be super fast since many of the resources would be loading from Squid's cache.

I even had some scripts that I'd run every week in the middle of the night (modem speaker disabled obviously) to load the homepage to a bunch of sites I visited regularly to freshen the cache. Unlike a browser cache the Squid cache worked across machines and was much easier to set long expiries and just cache way more content.

It improved browsing significantly, especially since even with a 56k modem I rarely got better than a 28.8k connection. While for me personally this kind of thing is no longer necessary, it's not the worst thing if you've got a slow or unreliable connection. With TLS everywhere, setting up such a transparent caching proxy is much more difficult, since you need a literal MITM proxy to make it work right.
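For the curious, the old setup looked roughly like this. The directive names are real Squid/iptables syntax, but interface names and cache sizes are illustrative, and modern Squid spells the old "transparent" option "intercept":

    # On the gateway box: silently redirect LAN port-80 traffic into Squid
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128

    # And in /etc/squid/squid.conf -- intercept mode, a big on-disk cache,
    # and week-long expiries for static assets:
    #   http_port 3128 intercept
    #   cache_dir ufs /var/spool/squid 10000 16 256
    #   refresh_pattern -i \.(gif|jpg|jpeg|png|css|js)$ 10080 90% 43200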


I miss the days of browsing offline. That stuff is completely broken these days, right?


Unless you want to install a cert on every device you own so you can decode HTTPS in transit, yes.


> In all my life, I've never had downtime due to an automatic certificate renewal failing.

Do you have a WAF? Can I register a fake domain, point it at your IP and send it ACME requests? Getting a bunch of 404s from LE will wind up getting them blocked by your WAF.

Then when your server wants to renew its certs, it can't because LE got blocked. This exact scenario happened to one of my personal domains a few months ago. I hope you check the email you registered with LE.
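One cheap mitigation (the domain and the alerting are placeholders): exercise the renewal path from cron, so a silently blocked ACME exchange shows up as a failed dry run instead of an expired cert weeks later.

    # Walks through the full ACME exchange without issuing anything; exits non-zero on failure
    certbot renew --dry-run || echo "renewal check failed -- look at WAF/firewall rules"

    # Spot-check that the HTTP-01 challenge path isn't blocked outright (a 404 is fine, a 403 is suspicious)
    curl -sI http://example.com/.well-known/acme-challenge/probe | head -n 1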


It’s easier for random individuals to do HTTPS, thanks to software like Caddy, than for companies that have to worry about the entire certificate lifecycle.

You can still MITM yourself and cache data. Literally nothing is stopping you from re-signing your own traffic. Companies do this all the time within their own networks.
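As a sketch of what "MITM yourself" looks like in practice, using mitmproxy as one example of a TLS-terminating proxy with its own locally trusted CA (the transparent-mode traffic redirection rules are omitted):

    # Run a transparent TLS-terminating proxy; it generates its own CA on first run
    mitmproxy --mode transparent

    # Trust that CA on client machines (Debian/Ubuntu example; path is mitmproxy's default)
    sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy.crt
    sudo update-ca-certificates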


>because now every request has to hit origin servers

That's not even close to true.


Yeah, no. Not all traffic matters.


Is there any actual benefit to auto-expiring SSL certs? Presumably if your SSL cert is leaked, you would like to have the ability to revoke it without waiting a full year or whatever. On the flip side, if your SSL cert isn’t leaked it benefits quite precisely nobody to have it become invalid just because a caesium-133 atom has gone through however many transitions.

The only benefit I’ve heard voiced is that it forces organizations to develop processes for cycling said certs - but that’s absolutely bogus. Imagine if S3 auto-deleted all buckets after a number of days and said “This is a good thing! Us auto-deleting these buckets forces you to set up processes to deal with us auto-deleting these buckets! You’re welcome.”


Scenario: you register foo.com and generate a certificate. You get bored and let the domain lapse. A decade later, a bank re-registers the domain. If certificates never expired, your old one would still be valid; at best, they can only revoke your cert after it's used in a MITM attack, and a single fraudulent bank transfer from a rich person can net you a million bucks.


Is it impossible to query for (and then revoke) existing certs for a domain you have just acquired? If it's possible, then failing to do so seems like the real system failure here. Otherwise you have roughly a year of MITM to worry about for any domain you obtain, regardless of expirations.
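Querying is possible for publicly trusted certs: Certificate Transparency logs record issuance, and (as one example, not something mentioned above) the public crt.sh search exposes them over HTTP. Revocation still has to go through whichever CA issued each certificate. A rough sketch:

    # List certificates logged for a domain you've just acquired (output truncated for readability)
    curl -s 'https://crt.sh/?q=example.com&output=json' | head -c 2000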


The custom probably comes from the era when people had to pay for SSL certificates, but it's still good security hygiene.

It's easy to revoke a certificate that you know was compromised, but what if you don't know it was? A temporary security gap could expose a long-lived private key.

A short shelf-life doesn't give attackers a lot of room to maneuver.

Otherwise, they could wait for an opportunity to chain it with another exploit and really ruin your day.



