Hacker News
Cisco subdomain private key found in embedded executable (groups.google.com)
141 points by janvdberg on June 20, 2017 | 62 comments


The domain in question (drmlocal.cisco.com) resolves to 127.0.0.1. This is a handy and above all cross-platform/cross browser way for a website to talk to a locally installed application (that's running a web server, bound to the loopback interface) by using CORS and/or jsonp.
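A minimal sketch of that pattern (the origin, endpoint, and payload are hypothetical): the locally installed app binds a tiny HTTP server to the loopback interface only, and a CORS header restricts which remote origin may read its responses.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocalAPI(BaseHTTPRequestHandler):
    """Stand-in for the locally installed app's web API."""

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        # Only the vendor's own (hypothetical) site may read responses
        # cross-origin; the browser blocks other origins.
        self.send_header("Access-Control-Allow-Origin", "https://www.example.com")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Bind to the loopback interface only; port 0 lets the OS pick one.
server = HTTPServer(("127.0.0.1", 0), LocalAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
data = resp.read()
server.shutdown()
```

In a real deployment the remote page would issue the request with `fetch()` from the browser; the point is that the server never listens on anything but the loopback interface.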

If the site that wants to make use of this is using SSL, then the locally running web browser also needs to be using SSL and it needs to have a publicly trusted cert.

The other alternative (installing a local CA) seems to be worse.

An option would be for the locally installed app to ask a server for a cert to use while it's running, but that means that the app needs to phone home which it otherwise would not have to.

So I guess we're back to proprietary browser extensions if this technique is frowned upon.


Back in 2011, the HBO Go application on iOS contained a similar certificate for 127.0.0.1; I believe in that case it was a certificate for the bare IP address or just "localhost" that they got from VeriSign. It was noticed by jan0 from the iOS jailbreak community, who had written isslfix with the goal of hacking in a fix for a bug in Apple's SSL certificate verification. He had some logging in place and noticed this extremely weird certificate getting checked.

https://github.com/jan0/isslfix

The file was embedded in the binary as a base64-encoded PKCS12, but was encrypted with a password. They thought they were clever and used a format string to cobble together multiple parts that they then ran through a base64 decode to get the password. When jan0 told me about it, I spent a couple of hours reversing the logic until I worked out the password... only for jan0 to tell me afterwards that he had simply used one of my tools to hook the decryption itself and it only took him a few minutes ;P.

(I scavenged my stuff and found a copy!) The password to this file--which is the one directly taken from the binary, and so this is the original password (!!)--is "AmsterdamIsC0ld". (I have given talks about "the fallacy of trying to secure content inside of a binary you give to your attacker", and this password with that 0 for the O always manages to get some laughs ;P.)
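As an illustration of why this kind of obfuscation buys nothing once you can read the binary, here is a sketch of the scheme described above (the fragment split points are hypothetical; only the password itself is the real one quoted above):

```python
import base64

# Hypothetical reconstruction: the password is split into base64
# fragments that the binary pieces together with a format string.
fragments = ("QW1zdGVyZGFt", "SXND", "MGxk")
password = base64.b64decode("%s%s%s" % fragments).decode()
print(password)  # AmsterdamIsC0ld
```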

http://test.saurik.com/hackernews/localhost.der

As for why this absolutely should be frowned upon: this doesn't provide any actual security... if everyone has the private key then anyone on your computer can man in the middle attack this connection. This is no better than just using HTTP to connect to this local service: the only reason you are even using SSL here is to bypass a security check in the browser (that an SSL website can't access a non-SSL website).

And if you want to argue "it is a local computer, why is a man in the middle attack relevant", you need only see the work people in jailbreak communities for random devices do to escalate privileges through various otherwise protected and sandboxed applications... made all the more obvious in your own description as it is the security of this website that is being undermined now (as maybe returning weird stuff from its API will mess with the user's site account or network settings or just break the website and give you arbitrary JavaScript execution).

I mean, in this case, you can't even guarantee the connection is to localhost! You start by swapping out the DNS to redirect drmlocal.cisco.com to some other IP address, and now you can get that website running on one computer--which you might not even have physical access to!--to talk to your forged copy, and trick the user into interacting with it, possibly letting you steal authorization keys or do any of the other aforementioned "bad stuff".

I will argue a local CA is absolutely "better", even if the optics are worse: you can generate the key for the CA, use that key to generate a key for just the one host, destroy the private key for the CA, and then install the public key. You now have a CA installed that can effectively only ever be used for the one host.
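A sketch of that scheme using the third-party `cryptography` package (the hostname and validity period are made up for illustration): generate a CA key, use it exactly once to sign a single host certificate, then discard it so the installed CA can never sign anything else.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.utcnow()
validity = datetime.timedelta(days=3650)

# 1. Generate a throwaway CA key pair and a self-signed CA certificate.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Throwaway Local CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + validity)
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. Generate the one host key and sign its certificate with the CA key.
host_key = ec.generate_private_key(ec.SECP256R1())
host_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "drmlocal.example.com")]))
    .issuer_name(ca_name)
    .public_key(host_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + validity)
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("drmlocal.example.com")]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# 3. Destroy the CA private key; only ca_cert (which holds no private
#    material) gets installed into the trust store, so nothing further
#    can ever be signed under this CA.
del ca_key
```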

Of course, that causes a problem that you are training users to install CAs. If you dislike that, you can either use a proprietary browser extension or lobby browser and OS developers to provide support for installing a CA limited to a hostname pattern: the fact that this does not exist is the real problem here and is downright criminal, as we know people want to do stuff like this, and it isn't even a ludicrous concept.


> As for why this absolutely should be frowned upon: this doesn't provide any actual security...

I was under the impression that some of the newer APIs only work under SSL (e.g. geolocation, https://developers.google.com/web/updates/2016/04/geolocatio... ). This would mean they'd need this even if they were not requiring security. The specific instance is probably to tick a checklist for the content provider though.


...and the reason that those APIs require SSL is so you can't access them via man-in-the-middle attacks on other websites that you have authorized to use those APIs. The attack here is that I can use those APIs on your computer if you try to access this service, as I can forge the insecure DNS responses to get you to connect to me instead of localhost and then pass the SSL verification, making the hostname look valid. I don't even need you to try to use the service, as I can hijack an existing HTTP connection you make and give you an iframe or a redirect. If I access those APIs, I am then working off of Cisco's authorizations; if they have used any of these APIs and the user authorized them, I control them, and if the user hasn't, they will be asked to authorize Cisco, not me, which might seem legitimate.


If that is true, VeriSign should be blacklisted for signing a certificate for localhost or an IP address. Neither of those is allowed.


FWIW, my post provides the certificate and private key for anyone to verify themselves; the certificate is definitely made out to "localhost" ;P.

subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at www.verisign.com/rpa (c)05/CN=localhost

issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3


This (issuing/signing certs for these or similar hostnames) was permitted in the past. It is no longer permitted.


Plex does this by generating a private key at install time and then having a real SSL cert issued for it. That way the private key is never seen by Plex or anyone else. Really cool technique.

https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


The company I used to work for wanted to basically replicate what Plex does with their certs for our IoT devices. Unfortunately, the price quoted by DigiCert was far too high for a startup.

The other solutions proposed, like abusing Let's Encrypt, etc., never really seemed right. I am still not sure what the best solution is for IoT devices that doesn't carry an astronomical cost.


I can see three solutions.

1. Use HTTP. It's not forbidden yet. I think it's the most compatible solution.

2. Use http://127.0.0.1. It seems that this is a standards-compatible solution, but browser support is not good enough, so if you can dictate browser choice for your users, it's the best future-proof solution.

3. Use your server as a proxy. When your localhost application wants to talk to your site, it talks to the server and the server retransmits the data to the browser. Quite a cumbersome solution IMO, but it should work, unless you must be able to work offline.


Yes. But it requires them to have a CA certificate which requires special contracts with a CA.

Now, Cisco could certainly do that, but for smaller companies, that's probably not feasible.


Not necessarily. Plex does not hold the private key for their intermediate certificate authority ('Plex Devices High Assurance CA2'). They use a DigiCert API to sign certificates under the 'plex.direct' domain.


Which means: special deal with digicert. Or can you link me to a product page where anybody can sign up?


A company i used to work for tried to get something similar to what Digicert has with Plex, and the initial cost was very high.


Let's Encrypt.

There's no reason not to use LE. You need to make sure that your device is accessible under the given subdomain for a certain period, and it may not always be trivial. But there are ways to do that.


> You need to make sure that your device is accessible under the given subdomain for a certain period, and it may not always be trivial.

I tried to set up Let's Encrypt for this purpose for a personal project (to be given away as a gift), and concluded it was basically impossible as things are.

You don't want your users to have to figure out how to open their firewall enough for an http server to be publicly reachable just to use your device, especially when it has no other need for it. It would need to be reachable every 60-90 days to renew the certificates, which is frequently enough to mean "always".

The ACME protocol does have a DNS-based challenge mechanism, but it requires putting a token in a TXT record at an administrative sub-domain of your actual domain (e.g., _acme-challenge.example.com). As far as I can tell, none of the free DNS providers support or enable this (nor does certbot, without writing your own hooks). I don't want to eat a domain registration fee in perpetuity for a gift.
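For reference, the value that goes into that TXT record can be computed locally; here is a sketch following RFC 8555 (the token and account-key thumbprint below are hypothetical stand-ins, not real values):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url throughout.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical values: the real token comes from the ACME server's
# challenge, and the thumbprint is derived from your account's public
# key per RFC 7638.
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
jwk_thumbprint = "NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs"

key_authorization = f"{token}.{jwk_thumbprint}"
txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())
# txt_value is what goes into the _acme-challenge TXT record
```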

Sadly, there's no ACME challenge that can be satisfied solely by the actual thing you're trying to assert, which is that you control which IP address a given domain name points to.


Let's Encrypt has tight limits on the number of certificates it issues per domain, presumably to stop this exact scheme.


You might qualify for the Public Suffix List (https://publicsuffix.org/list/), which also helps browsers isolate your subdomains, and Let's Encrypt has a trial program for organizations that need more than 2000 subdomains: https://docs.google.com/forms/d/e/1FAIpQLSfg56b_wLmUN7n-WWhb...


If it is local and communication never leaves the machine, why use TLS?

1Password got heat for the opposite in the past (use of ws:// between the browser extension and the Mac Client). 1Password correctly pointed out that there isn't really a point to "secure communications" if the webserver and the web client are on the same machine.

If you can't trust your own machine not to intercept your connection, then TLS is not going to save you. What I bet happened was some project manager said "There needs to be a green check" or some security person said "You NEED TLS" and nobody spoke up that it didn't make sense.

edit: I'm not a deleter, but I misunderstood your post and I realized it right after I pressed enter.

Did not realize you were talking about being on a remote webpage and being connected locally.


Yeah. The webpage is remote and using ssl and now wants to talk to the locally installed app.

I agree that SSL doesn’t make sense there and honestly, in my opinion it would make sense for browsers to treat localhost as a secure origin.

But they don’t and so we’re back to square one which is that a secured origin can’t make requests to a non secured origin.

Update: newer browsers do in fact do this. It's also part of the WHATWG spec: https://github.com/w3c/webappsec-mixed-content/commit/349501...

Of course there’s still IE and Edge that are of different minds, but it’s nice to see movement


Interesting; intuitively, I would say that this should be fine for mobile browsers, but desktop browsers shouldn't be doing this—PC operating systems are still built on a fundamentally "mutually-untrustworthy multitenant time-sharing system" paradigm.

Any random port on localhost (or, indeed, on any other IP assigned to the loopback interface—that's 127/8 in IPv4, you know!) can be a socket opened by a process running on your PC that was started by someone who 1. has limited shell access to your machine, but 2. who isn't you. Privileging such ports would give such attackers an easy escalation path from this limited shell access to administrative access, assuming that you yourself are a [local or domain] administrator.
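That point is easy to demonstrate: any unprivileged process can claim a port on an alternate loopback address, not just 127.0.0.1 (a sketch; this works out of the box on Linux, while macOS requires aliasing the address onto lo0 first).

```python
import socket

# Bind a listening socket to a non-default loopback address.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.2", 0))  # port 0 = let the OS pick a free port
host, port = srv.getsockname()
srv.close()
```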

Mobile devices can ignore this because they're (so far) inherently single-user devices. But anything allowing multiple parallel user sessions—even a Chromebook, or an iPad in Classroom mode—shouldn't make the assumption that local ports are safe. That's how a teacher gets their password phished by their students.


This technique is already the equivalent of a proprietary browser plugin, except that it's running completely outside of the browser sandbox.

The proper fix here is to stop with the DRM bullshit and just use HTML5 <video> like everyone else.


In my case I’m using this to allow people to read data from a locally connected (via USB) barcode scanner.

Remind me again how <video> would be of help there?


If you already need a native driver then you might as well make the whole thing native.


Can you elaborate on using HTML5 <video> like everyone else? I am not familiar with this technique.


It looks like browsers will allow you to talk to localhost from a secure page without SSL: https://bugzilla.mozilla.org/show_bug.cgi?id=903966. Alternatively, assuming you don't need to transfer much, you can reverse proxy it out to your own server and back to your application (where the SSL decoding is done by the remote server).


Chrome doesn’t.


https://chromium.googlesource.com/chromium/src.git/+/130ee68... ?

I just saw that linked from the first ticket, but I guess it hasn't actually been merged; there's a bug tracking it though.

https://bugs.chromium.org/p/chromium/issues/detail?id=607878


JOSM, an editor for OpenStreetMap, installs its own certificate, which is limited to https://localhost:8112. This seems pretty safe to me - if someone can MITM that, you already have bigger problems.


Presumably that certificate isn't actually signed by any relevant CA, right? If so it seems rather irrelevant in the context.


> The domain in question (drmlocal.cisco.com) resolves to 127.0.0.1.

So if someone just MITMs your clear text DNS to make drmlocal.cisco.com resolve to something else, they get your cisco.com cookies and plausible chance to impersonate Cisco?

Think targeted attacks.


>So I guess we're back to proprietary browser extensions if this technique is frowned upon.

It's not. You just need to not fuck it up.

drmlocal.cisco.com isn't really an acceptable way of doing that because it's in the cisco.com scope; why not just use drmlocalcisco.com instead?


If a private key is accessible to the wide public, it's considered compromised and the CA must revoke the certificate. The domain doesn't matter.


I actually need to do this. Is there a best practice for handling the certificate? It needs to be installed on the machine for the browser to trust it. All I can think of is to make a new certificate for every machine at install time and bind an obscure domain name to the loopback interface, so it would be highly unlikely to overlap with a real site.


The Plex approach, pointed out by diafygi above, might suit you.

  https://news.ycombinator.com/item?id=14595269
You need a wildcard cert (not hard), and a DNS resolver (daemon) to respond back with custom IP address. Sounds pretty simple in concept. :)
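A sketch of that resolver trick, loosely modelled on Plex's *.plex.direct scheme (the name layout here is hypothetical): the device's LAN IP is encoded into the leftmost DNS label, so the custom resolver can answer with that address without keeping any per-device state.

```python
# Hypothetical hostname layout: "<ip-with-dashes>.<device-hash>.devices.example.com"
def resolve(hostname: str) -> str:
    ip_label = hostname.split(".", 1)[0]  # e.g. "192-168-1-10"
    return ip_label.replace("-", ".")

print(resolve("192-168-1-10.abcdef.devices.example.com"))  # 192.168.1.10
```

Since the wildcard cert covers every such name, the browser accepts the connection while the traffic never leaves the LAN.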


A wildcard won’t help much. If you want to do this correctly, then each client needs to generate a unique certificate and private key.

The custom resolver helps with the name in the certificate, but now you're stuck with that uniquely generated certificate, which you need signed by a CA.

And this is the step that's problematic: that host is not connectable from the outside, so there's no way for, say, Let's Encrypt to do HTTP validation (even if their usage quota allowed doing this at scale, which it doesn't). You'd have to do DNS validation instead, which would work, but would require your clients to ship their private keys to the server, which is a big no-no again.

You could also do some crazy polling approach in order to have your server relay between the DNS validation by Let's Encrypt and your client (which, again, usually is behind a NAT and firewall, so it has to poll the server to have the possibility to send the response to the ACME challenge), but this is a huge amount of work and, again, doesn't work because of per-domain quotas.

So now you’re stuck with a custom solution with a CA which involves you having a trusted signed CA certificate (good luck getting that one) and providing these clients with an API or having a CA providing a custom API for these clients to use.

None of these things are widely available and as they are custom solutions, I’m sure CAs will require a lot of money for, provided they even talk to you at all if you are a small company.


On Windows, I would just put the certificate in the trusted root at install time. No need to use a CA. That's what Fiddler does. It seems safe, since the certificate is one-off per machine, and never leaves the machine.


As it's a local installation, it will be blocked by the user's limited permissions, by locally installed security products, and by paranoid users regularly culling their certificates (believe me, been there, done that).

It also won’t work if the user is using Firefox or any other solution that works outside of the OS certificate store. Messing with a Firefox profile is frowned upon by Mozilla and even if you did that, it still doesn’t solve the problem for other things that bypass the OS certificate store.


You could use the DNS validation method for Let's Encrypt. But Let's Encrypt has pretty strict limits, so it won't work for a lot of users without special permission.


You can do what Cisco did with the domain name. Just don't create a CA-trusted cert for it. Instead, as you suppose, generate a unique cert on each machine and install it to the appropriate/needed trust stores on that machine.


Another option that comes to mind is a keyless SSL configuration, similar to what Cloudflare does:

https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...


Microsoft was proposing to prohibit localhost connections in a new version of Edge. They, rightly, were shouted down from that position.


If somebody MitMs your connection to 127.0.0.1, you're hopelessly lost anyway. I don't really see a reason to protect requests that do not leave your machine.


The issue is remote sites that are using SSL and want to do an Ajax request to a local site.


For whatever it's worth, the team behind this stupidity has been let go a while back...

Throwaway account for obvious reasons.


Used to work for CSCO myself. Was amazed they (internally) didn't have centralised PKI, so we rolled our own as an acquisition.

This is the Sky UK middleware project isn't it? I know a couple of people who work there who said it's a bit of a clusterfuck. There were some good things at CSCO when I was there, but loads of WTF moments.


I certainly hope damage control does a better job than throwing another team under the bus whether or not they were let go. This speaks to systemic issues in my opinion.


What impact does this sort of compromise cause? In context, it seems to be a poorly architected loopback to make an appliance gizmo work. So on the face of it, it sorta seems a bit harmless (well, as much as any internet appliance is harmless...)

I'd imagine that could allow an adversary to compromise DRM for Sky, perhaps? (Based on the domain name.) But there seemed to also be concern that improperly set up cookies for other cisco.com domains may allow this to compromise them; do Cisco devices put sensitive things in cookies where that could happen?

EDIT: I am not in any way, shape, or form a network or security 'guy'. I just read the thread and wasn't horribly alarmed by the discussion; seems like a reasonable but bad exposure on the device.


At a glance, cisco.com has an SSO cookie set for .cisco.com, so given an attacker on your LAN, they could have used this cert to MITM your connection to drmlocal.cisco.com and insert a script to steal your Cisco SSO cookie. That would give the attacker access to your Cisco account (I have no idea what a Cisco account actually entails).


I'm curious, what is the current best practice for shipping LAN-only type embedded devices that need to be configurable from a browser, now that everyone is pushing for HTTPS only and marking HTTP not safe?

Some may not even have a local DNS name but only use an RFC 1918 IP address. I don't think the current CA system can accommodate that? Obviously you don't want to hand anyone who asks for it a cert and a private key for "10.0.0.1".


I wonder how many people had to approve the decision to send publicly trusted private keys to users. It seems like such an obvious security mistake that I didn't think people would make it.


How do security procedures for handling, distributing, and revoking keys work in practice, especially at big companies that release a lot of software? For some companies getting this right would have to be critical, but I would be nervous if everything hinged on keeping a small 4K private key file secure.


Well, then you should be very nervous; most of the internet's security is centered around keeping 4 KB files secure.


IIRC, with many CAs all you have to do to generate a trusted certificate is show you control a subdomain. Once you control the subdomain, you get your cert signed by a CA, and then you can send keys anywhere you want.

In terms of who was _supposed_ to approve the decision, there is a policy in place for that. Did they get that approval? Who knows, but I think in theory there isn't a mechanism to enforce it once you control the subdomain.

Never underestimate the potential to make security mistakes.


If you read that thread or do a DNS lookup, it turns out drmlocal.cisco.com points to 127.0.0.1.

I wonder what they're actually using it for...


Probably communication with the browser over a kind of web API.


I am more interested in how an embedded app gets the cert in the first place. That cert should only be accessible via automation and restricted to a small set of people.


I can easily see it as being a "stop gap" solution to get a deadlined proof of concept working, and then some other team getting assigned to take the POC to production and not being smart enough to remedy a very obvious security hole.


It's automatically embedded by the build script ;)


lol, yes, except the acl is wrong :)



Odd that they requested a new cert after theirs was revoked... As if to say "Eh, we'll hide it better next time."



