This is one of those false open-source internet tenets designed to get people to publish their stuff.
Obscurity is a fine strategy: if you don't post your source, good. If you post your source, that's a risk.
The fact that you can't rely on that security measure is just a basic security tenet that applies to everything: don't rely on a single security measure; use redundant barriers.
Truth is, we don't know how the subdomain got leaked. Subdomains can act as passwords, and a well-crafted subdomain should not leak; if it leaks, there is a reason.
I once worked for a company which was using a subdomain of an internal development domain to do some completely internal security research on our own products. The entire domain got flagged in Safe Browsing despite never being exposed to the outside world. We think Chrome's telemetry flagged it, and since it was technically routable as a public IP (all public traffic on that IP was blackholed), Chrome thought it was a public website.
But who said that all passwords or shibboleths should be encrypted in transit?
It can serve as a canary for someone snooping your traffic. Even if you encrypt it, you don't want people snooping.
To date, for the subdomains I never publish, I haven't seen anyone attempt to connect to them.
It's one of those redundant measures.
And it's also one of those risks that you take: you can maximize security by staying at home all day, but going out to take out the trash is a calculated risk you must accept, or you risk overfocusing on security.
It's similar to port knocking: not a replacement for encryption, but a low-effort finishing touch, like a nice knot.
Truth is we don't know that the subdomain got leaked. The example user agent they give says that the methodology they're using is to scan the IPv4 space, which is a great example of why security through obscurity doesn't work here: The IPv4 space is tiny and trivial to scan. If your server has an IPv4 address it's not obscure, you should assume it's publicly reachable and plan accordingly.
> Subdomains can be passwords, and a well-crafted subdomain should not leak; if it leaks, there is a reason.
The problem with this theory is that DNS was never designed to be secret and private and even after DNS over HTTPS it's still not designed to be private for the servers. This means that getting to "well crafted" is an incredibly difficult task with hundreds of possible failure modes which need constant maintenance and attention—not only is it complicated to get right the first time, you have to reconfigure away the failure modes on every device or even on every use of the "password".
Here are just a few failure modes I can think of off the top of my head. Yes, these have mitigations, but it's a game of whack-a-mole and you really don't want to try it:
* Certificate transparency logs, as mentioned.
* A user of your "password" forgets that they didn't configure DNS over HTTPS on a new device and leaves a trail of logs through a dozen recursive DNS servers and ISPs.
* A user has DNS over HTTPS but doesn't point it at a server within your control. One foreign server having the password is better than dozens and their ISPs, but you don't have any control over that default DNS server nor how many different servers your clients will attempt to use.
* Browser history.
Just don't. Work with the grain, assume the subdomain is public and secure your site accordingly.
Something many people don't expect is that the IPv6 space is also tiny and trivial to scan, if you follow certain patterns.
For example, many server hosts give you a /48 or /64 subnet, and your server sits at prefix::1 by default. If the host has a /24 allocation and hands out /48s, someone only has to scan 2^24 candidate prefixes at that host to find all the servers using prefix::1.
Assuming everyone uses a /48 and binds to prefix::1, scanning all 2^48 possible /48 prefixes is 2^16 times the work of scanning the IPv4 address space. Assuming a specific host with a single IPv6 /24 block delegating /64s, that's 2^40 probes, 2^8 times the IPv4 space. Scanning for every /64 across the entire IPv6 space is definitely not as tiny.
AWS only allows routing a /80 to EC2 instances, which makes a huge difference.
It doesn't mean we should rely on obscurity, but the entire space is not as tiny as IPv4's.
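The arithmetic above can be sketched in a few lines. This is a rough model: it counts only the prefix::1 probes inside one hosting provider's allocation and ignores rate limits and routing reality.

```python
# Rough model of the scan sizes discussed above: how many prefix::1
# probes are needed inside one hosting provider's allocation.
def probes_in_block(block_bits: int, delegated_bits: int) -> int:
    """Count of delegated prefixes (one probe each) in the provider's block."""
    return 2 ** (delegated_bits - block_bits)

IPV4_SPACE = 2 ** 32

# A /24 provider block delegating /48s: 2^24 probes, 1/256 of the IPv4 space.
print(probes_in_block(24, 48) / IPV4_SPACE)

# The same block delegating /64s: 2^40 probes, 256x the IPv4 space.
print(probes_in_block(24, 64) / IPV4_SPACE)
```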
IPv6 address space may be trivial from this perspective, but imagine trying to establish two-way contact with a user on a smartphone on a mobile network. Or a user whose Interface ID (64 bits) is regenerated randomly every few hours.
Just try leaving a User Talk page message on Wikipedia, and good luck getting the editor to even notice, or anyone finding that talk page again, before MediaWiki's privacy measures are implemented.
> This reliance on "security through obscurity" can produce resultant weaknesses if an attacker is able to reverse engineer the inner workings of the mechanism. Note that obscurity can be one small part of defense in depth, since it can create more work for an attacker; however, it is a significant risk if used as the primary means of protection.
"The product uses a protection mechanism whose strength depends heavily on its obscurity, such that knowledge of its algorithms or key data is sufficient to defeat the mechanism."
If you can defeat the mechanism, that's not very impactful when it's only one stage of a multi-stage mechanism. Especially if breaching or crossing that perimeter alerts the admin!
People consistently misuse the Swiss cheese security metaphor to justify putting multiple ineffective security barriers in place.
The holes in the cheese are supposed to represent unknown or very difficult to exploit flaws in your security layers, and that's why you ideally want multiple layers.
You can't just stack up multiple known-to-be-broken layers and call something secure. The extra layers are inconvenient for users and readily bypassed by attackers simply tackling them one at a time.
So according to you, a picket fence or a wire fence is just a useless thing that makes life harder for users?
Security does not consist only of 100% or 99.99% effective mechanisms. There needs to be a flow of information and an inherent risk; if you only design absolute barriers, you are rarely considering the actual surface of relevant user interactions. A life form consisting only of skin might be very secure, but it's practically useless.
The saying is "security by obscurity is not security" which is absolutely true.
If your security relies on the attacker not finding it or not knowing how it works, it's not actually secure.
Obscurity has its own value, of course. I strongly recommend running any service that's likely to be scanned regularly on non-standard ports wherever practical, simply to reduce the number of connection logs you need to sort through. Obscurity works for what it actually offers. That has nothing to do with security, though, and unfortunately it's hard in cases where a human is likely to type in your service address, because most user-facing services have little to no support for SRV records.
Two of the few services that do have widespread SRV support are SIP VoIP and Minecraft, and coincidentally the former is my day job while I've also run a personal Minecraft server for over a decade. I can say that the couple of systems I still have running public-facing SIP on port 5060 get scanned tens of thousands of times per hour while the ones running on non-standard ports get maybe one or two activations of fail2ban a month. Likewise my Minecraft server has never seen a single probe from anyone other than an actual player.
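For concreteness, here's what the Minecraft SRV setup mentioned above looks like in a zone file. Names, TTLs, and the non-standard port are placeholder values:

```
; clients look up _minecraft._tcp.<name> and connect to the target host/port
_minecraft._tcp.play.example.com. 3600 IN SRV 0 5 25599 mc.example.com.
mc.example.com.                   3600 IN A         203.0.113.7
```

SIP uses the same mechanism via _sip._udp and _sip._tcp (and _sips._tcp for TLS), which is why both services can hide on arbitrary ports without users ever typing a port number.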
> If your security relies on the attacker not finding it or not knowing how it works, it's not actually secure.
Every branch of the military would like to talk to you and inform you that sometimes, the enemy not finding the target, or not knowing how the target works, can be extremely, actually secure. Like, still alive secure. I'd argue that's a rather effective security measure in certain situations.
Then there's compartmentalization, need to know, and then all of the security clearance levels...
Leaking classified documents can be considered treason, which is one of very few non-violent crimes you can commit that could result in the death penalty.
The Fed seems to think security through obscurity is a pretty fucking alright thing, seeing as how they use it everywhere.
It's become an anti-cliche. Security via obscure technique is a valid security layer in the exact same way a physical lock tumbler will not unlock when any random key is inserted and twisted. It's not great but it's not terrible and it does a fine job until someone picks or breaks it open.
I don’t think that analogy works well, a subdomain that is not published is more like hiding the key to the front door in the garden somewhere… does a fine job of keeping the house secure until someone finds it…
Why not use letters and packages, which are the literal metaphor these services were built on?
It's like relying on public header information to determine whether an incoming letter or package is legitimate.
If it says: To "Name LastName" or "Company", then it's probably legitimate. Of course it's no guarantee, but it filters the bulk of Nigerian Prince spam.
It gets you past the junk box, but you don't have to trust it with your life.
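The header check described above amounts to a trivial allowlist filter. A toy sketch, where the recipient names are placeholders:

```python
# Toy version of the "is the To: line addressed to a real recipient?" check.
# It filters bulk spam but, as noted above, is no guarantee of legitimacy.
def looks_legitimate(to_header: str, known_recipients: set) -> bool:
    """Pass mail only when the To: line names a recipient we recognize."""
    return to_header.strip() in known_recipients

known = {"Name LastName", "Company"}
print(looks_legitimate("Name LastName", known))      # True
print(looks_legitimate("Dear Beneficiary", known))   # False
```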
So many thoughts on that, but from my perspective - obscurity is ok, but you cannot depend on it at all.
Great example is port knocking - it hides your open port from random nmap, but would you leave it as the only mechanism preventing people getting to your server? No. So does it make sense to have it? Well maybe, it's a layer.
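A minimal client-side sketch of the port-knocking idea: the knock is just a sequence of connection attempts, and a knockd-style daemon on the server would open the real port only after seeing them in order. The ports here are made up.

```python
import socket
import time

# Hypothetical knock sequence; a knockd-style daemon on the server would
# open the real service port only after seeing connection attempts (SYNs)
# to these ports, in this order, from the same source address.
KNOCK_PORTS = [7000, 8000, 9000]

def knock(host: str, ports=KNOCK_PORTS, delay: float = 0.1) -> list:
    """Fire one connection attempt per knock port; failures are expected."""
    sent = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            try:
                s.connect_ex((host, port))  # the SYN alone is the knock
            except OSError:
                pass  # closed/filtered ports are the normal case
        sent.append(port)
        time.sleep(delay)
    return sent
```

Against a random nmap, the service port simply looks closed; the knock sequence is the obscure layer, and the real authentication still happens on the service itself.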
Kerckhoffs' principle comes to my mind as well here.
So while I agree with you that obscurity is a fine strategy, you can never depend on it.
> obscurity is a fine strategy, you can never depend on it.
Right, I'm arguing that this is a property of all security mechanisms. You can never depend on a single security mechanism. Obscurity is no different. You cannot depend only on encryption, you cannot depend only on air gaps, you cannot depend only on obscurity, you cannot depend only on firewalls, you cannot depend only on user permissions, you cannot depend only on legal deterrents, etc.
As long as you don't go into "nah, I have another protection barrier, I don't need the best possible security for my main barrier" mode...
Or in other words: if you place absolutely zero trust in it, consider it as good as broken by every single script kiddie, and publicly known, then yeah, it's fine.
But then, why are you investing time into it? Almost everybody who builds low-security barriers is relying on them.
Depends on the context and exposure. Sometimes a key under a rock is perfectly fine.
I used to work for a security company that REALLY oversold security risks to sell products.
The idea that someone was going to wardrive through your suburban neighborhood with a networked cluster of GPUs to crack your AES keys and run a MITM attack for web traffic is honestly pretty far fetched unless they are a nation-state actor.
They could also just cut and tip both ends of the Ethernet cable I have running between my house and my outbuilding too. I probably wouldn't notice if I'm asleep.
Metaphor aside, this is a very standard attack surface. You don't need to imagine such a close tap: just imagine that, at any point along the multi-node internet path, an attacker controls a node and snoops the traffic in its role as a relaying router.
One of my favorite patterns for sending large files around is to drop them in a public blob storage bucket with a version 4 GUID as the name. No consumer needs to authenticate or sign in; they just need to know the resource name. After a period of time the files can be automatically expired to minimize the impact of URL sharing/stealing.
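A sketch of this pattern, with `client` standing in for whatever blob SDK you use (its method names and the URL scheme here are assumptions, not a real API):

```python
import uuid

# Drop a file in a public bucket under an unguessable name, with an expiry.
# `client` is a stand-in for any blob SDK (S3, Azure, GCS); `put` and
# `expires_in_days` are hypothetical names for illustration only.
def share_file(client, bucket: str, data: bytes, ttl_days: int = 7) -> str:
    blob_name = str(uuid.uuid4())  # version 4: 122 random bits, unguessable
    client.put(bucket, blob_name, data, expires_in_days=ttl_days)
    return "https://" + bucket + ".blob.example.net/" + blob_name
```

The security of the scheme is exactly the entropy of the name plus the expiry window, so it only holds up if the URL itself is treated like a secret in transit (HTTPS, no referrer leakage, no public listing on the bucket).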
Wouldn't the blob storage host be able to see your obscure file?
I suppose if it's encrypted, no. Like the pastebin service I run, it's encrypted at rest. It doesn't even touch disks, so I mean, that's a decent answer to mine own question.
No, it's a very sensible slogan to keep people from doing a common, bad thing.
Obscurity helps cut down on noise and low effort attacks and scans. It only helps as a security mechanism in that the remaining access/error logs are both fewer and more interesting.
I definitely see its value as a very naive recommendation, to stop someone from literally relying on an algorithmic or low-entropy secret. It's something you may learn in your first class on security.
However, at more advanced levels, a more common error is to ignore the risks of open source and of being public. If you don't publish your source code, you are massively safer, period.
I guess your view on the subject depends on whether you think you are ahead of the curve by taking the naive interpretation. It's like investing in the stock market based on your knowledge of supply and demand.
making things obscure and hard to find is indeed a sound choice, as long as it's not the single measure taken. i think people tout this sentence because it's popular to say, without thinking further.
you don't put an unauthenticated thing on a difficult-to-find subdomain and call it secure. but your nicely secured page is more secure if it's also very tedious to find. it's less of a low-hanging fruit.
as you state, there always has to be a leak. but the dns system is quite leaky, and often sources won't fix it, or won't even admit it's broken by design.
strong passwords are also insecure if they leak, so you obscure them from prying eyes, securing them by obscurity.
A lot of the pushback I'm seeing is that people are assuming that you always want to make things more secure. That security is a number that needs to go up, like income or profit, as opposed to numbers that need to go down, like cost and taxes.
The possibility that I'm adding this feature to something that would otherwise have been published on a public domain does not cross people's minds, so it is not thought of as an additional security measure, but as the removal of a security feature.
Similarly, it is assumed that there's an unauthenticated service or an authentication mechanism behind the subdomain. There may just be a simple idempotent server running, such that there is no concern for abuse, but it may still be desirable to reduce the code executed by random scanners that only have an IP.
This brings me again to the competitive economic take on the subject: people believe that this wisdom nugget they hold, "security by obscurity is bad", is a valuable tenet, and they bet on it and desperately try to find someone to use it on. You can tell when a meme is overvalued because people try to use it on you even when it doesn't fit; it means they are dying to actually apply it.
My bet is that "Security through obscurity" is undervalued, not as a rule or law, or a definite thing, but as a basic correlation: keep a low profile, and you'll be safer. If you want to get more sales, you will need to be a bit more open and transparent and that will expose you to more risk, same if you want transparency for ethical or regulation reasons. You will be less obscure and you will need to compensate with additional security mechanisms.
But it seems evident to me that if you don't publish your shit, you are going to have much less risk, and will need fewer security mechanisms for the same risk, compared to broadcasting your infrastructure and your business, duh.
So we're all agreeing here. It's ok to hide stuff from sight, but hiding stuff from sight isn't actually security and can't replace, at the very least, having password protection.
Depending on one's threat model, any technique can be a secure strategy.
Is my threat model a network of dumb nodes doing automatic port scanning? Tucking a system on an obscure IPv6 address and never sharing the address may work OK. Running some bespoke, unauthenticated SSH-over-Carrier-Pigeon (SoCP) tunnel may be fine. The adversaries in the model are pretty dumb, so intrusion detection is also easy.
But if the threat model includes any well-motivated, intelligent adversary (disgruntled peer, NSA, evil ex-boyfriend), it will probably just annoy them. And as a bonus, for my trouble, it will be harder to maintain going forward.
It's a bit more complex than that as well. You might have attackers of both types and different datapoints that have different security requirements. And these are not necessarily scalars, you may need integrity for one, privacy for the other.
Even when considering high-sophistication attackers, and perhaps especially with regard to them, you may want to leave some breadcrumbs for them to access your info.
If the deep state wants my company's info, they can safely get it by subpoenaing my provider's info, I don't need to worry about them as an attacker for privacy, as they have the access to the information if needed.
If your approach to security is to add cryptography everywhere, make everything as secure as possible, and imagine that you are up against a nation-state adversary (or conversely, that you add security until you satisfy a requirement commensurate with your adversary), then you are literally reducing one of the most important design requirements of your system to a single scalar that you attempt to maximize while not compromising other tradeoffs.
That's a straightforward lack of nuance. It's like having a tax strategy of "number go down", a pricing strategy of "price go up", a cost strategy of "cost go down", or a risk strategy of "no risk for me", etc.
Obscurity as a single control does not work. That's what the phrase hints at. In combination with other controls, it could be part of an effective defense. Context matters though.