You can base64-encode an image, split it into TXT records, and send it over the Internet. Useful in certain circumstances, like when one of the communicating parties is under severe censorship.
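A sketch of the splitting step (the function name and chunk format are my own invention). The constraint doing the work here is that a single TXT character-string holds at most 255 bytes, which forces the chunking:

```python
import base64

def to_txt_strings(payload: bytes, limit: int = 255) -> list[str]:
    """Split a payload into TXT-sized character-strings.

    Each chunk gets a "NNNN:" sequence prefix, because DNS makes no
    promise about the order in which the strings come back.
    """
    text = base64.b64encode(payload).decode("ascii")
    body = limit - 5                       # leave room for the 5-char prefix
    return [f"{i:04d}:{text[o:o + body]}"
            for i, o in enumerate(range(0, len(text), body))]

print(to_txt_strings(b"hello world"))
```

Each resulting string fits in one TXT character-string; a larger image just produces more of them.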
You could also securely hash a username and password together with something like Argon2id, and then authenticate users by checking whether the base64'ed TXT record exists. No need to hit an overloaded database: just dig, and you even get the benefits of local caching with per-record TTLs!
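A minimal sketch of that idea, with two loudly-labeled assumptions: the zone `auth.example.com` is made up, and `hashlib.scrypt` stands in for Argon2id since the stdlib has no Argon2 (a package like argon2-cffi would provide the real thing):

```python
import base64
import hashlib

def credential_record(username: str, password: str,
                      zone: str = "auth.example.com") -> str:
    """Derive the DNS name whose mere existence means "credentials valid".

    scrypt is a stdlib stand-in for Argon2id; the zone is hypothetical.
    """
    digest = hashlib.scrypt(password.encode(), salt=username.encode(),
                            n=2**14, r=8, p=1, dklen=32)
    # base32 rather than base64: DNS names are case-insensitive
    label = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return f"{label}.{zone}"

# Authentication is then just an existence check, e.g.
#   dig +short TXT <record>
# where NXDOMAIN vs. NOERROR is the verdict, cached per-record by TTL.
print(credential_record("alice", "hunter2"))
```

Using the username as the salt keeps the lookup deterministic, which is exactly what makes the record findable by the server and, of course, enumerable by anyone else.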
But should you do crazy things like this? Absolutely not!
DNS is notoriously prone to MITM, injection, cache poisoning, DoS, etc. DANE and DNSSEC are horrible bodges that don't actually do anything useful, or do it securely.
Even though it's the foundation of almost everything we try to do securely, including TLS DV certificate validation (certificates are totally fungible across the hundred or so certificate authorities, including many located in authoritarian regimes!):
DNS is absolutely and irredeemably broken forever, from a security perspective, and can never be fixed. As tempting (and easy) as it is to hack on it or treat it as an ultra-fast and extensible UDP remotely-accessible lookup database, just don't. (It just needs to die in a fire, probably along with SMTP.)
Unfortunately, even if someone came up with some system that could credibly replace it, that system would inevitably have a LOT of privacy and censorship trade-offs, so DNS is what we're stuck with.
Just stay very aware of the risks of encoding anything security-related inside DNS and try to minimize your reliance on it as best you can.
I am just stopping by to say that this is actually a thing. It is called hesiod and works great in small, maybe air-gapped networks.
As a side note, anything security related exists in the reality of uncertainty. It is expected that sharing properly secured secrets is reasonably safe, but day after day we discover "we didn't know". Sometimes simplicity for a particular application is worth certain amount of risk.
Sometimes, you need to take the server out of its box, out of the bunker, and plug it to both the power distribution network, and of course... a LAN...
Hesiod was actually a thing. I'm not aware of anyone who has run it in production in the last 20 years, but perhaps you are? In any event, password technology was immature and insecure then (e.g., RMS got upset when passwords were introduced, and remember how easy it was to crack the DES-based crypt hashes in /etc/passwd, which is why /etc/shadow came to be).
It's even worse now, even on an air-gapped network, because the underlying insecurity is still within DNS. DNS would make a great highly scalable authn DB for non-UNIX accounts (not in the Hesiod sense, but more in the modern web-app sense), were it not for all of the other highly scalable and secure authn DBs that aren't built on top of woefully insecure tech.
> It is expected that sharing properly secured secrets is reasonably safe
As you know, though, this is why Diffie and Hellman invented public key exchange -- because sharing secrets, even properly secured, is actually not reasonably safe at all in most circumstances. Even if you secure the secret, it's the communication of those secrets during the sharing where everything breaks down. (https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...)
This is of course because a secret is not a secret as soon as you share it with someone else. Instead, DH designed their key exchange to have a private key that only you know, and a public key that you can share. Each party derives the same shared key using the other's public key and its own private key, which is never transmitted.
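The exchange described above can be sketched in a few lines. These parameters are illustration-sized toys; real deployments use 2048-bit MODP groups (RFC 3526) or elliptic-curve variants like X25519:

```python
import secrets

# Toy finite-field Diffie-Hellman. p is a small prime (2**32 - 5);
# far too small for real use, but enough to show the mechanics.
p = 0xFFFFFFFB
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's private key: never shared
b = secrets.randbelow(p - 2) + 1   # Bob's private key: never shared
A = pow(g, a, p)                   # public keys: safe to send in the clear
B = pow(g, b, p)

# Each side combines its own private key with the other's public key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # same secret, never put on the wire
```

The point the comment makes is visible in the last two lines: the shared secret is derived independently on each side, so it never has to survive the risky "sharing" step at all.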
> Sometimes, you need to take the server out of its box, out of the bunker, and plug it to both the power distribution network, and of course... a LAN...
Even though perfect security is obviously impossible, it's still worth striving for. Relying on DNS for more than absolutely necessary is choosing to rely on technology that began without any thought to security and ended up with a history of massive, Internet-wide vulnerabilities. (See elsewhere in the thread for a great Wired article on Dan Kaminsky's successful attack on all of DNS.)
> DANE and DNSSEC are horrible bodges that don't actually do anything useful or in a secure way.
Adoption is extremely poor, usability is horrible, and the approach used is quite dated, but I'm not sure DANE and DNSSEC are insecure. Did you have a reference on the latter?
I agree with the idea that one should not trust DNS with any information one does not want public. I'm not totally convinced DNS is irreparably broken though. What are your thoughts on DNS over HTTPS?
DNS over HTTPS secures the connection between the client and their resolver. It doesn't improve anything else. It's still vulnerable to tampering at the still insecure connection between the resolver and the authoritative DNS server.
I think a lot of offensive tools that tunnel IP over DNS actually overcame these limitations in real time, at the expense of throughput [1]. It obviously does require agreeing on some sort of protocol on both sides though.
> the current fleet of IT workers don’t really grok anything beyond A and PTR.
Part of this, though, is also who is "in control" of the server.
Most of the time, DNS is on the other side of the bastion, managed by Network Ops and out of reach of Joe Developer. Perhaps a reasonable arrangement: fat-finger DNS and Bad Things can happen. Joe Developer, however, has carte blanche over things like HTTP servers, and there he was allowed to go hog wild.
So innovation in the HTTP space exploded, as it was a safer place to dabble, to the point that every solution came to be viewed through the lens of HTTP.
In the end, devs don't know DNS because they don't need to know DNS, and even if they did, the Powers in NetOps weren't going to let them have their grubby fingers on it anyway.
Yep. I was speaking from a netops/sysadmin standpoint.
My belief is that TXT and HINFO saw declining use within an org as Microsoft Windows DNS Server usage grew[1][2][3].
1. Windows DNS Server hides those records behind a sub-menu item.
2. Windows DNS Server attracted noobs (a good thing, I suppose). Heck, these days, we give low/middle-tier IT workers DNS server access (via DnsAdmins group), which is crazy in my mind, but nonetheless common.
3. Crusty, old admins were better at typing yy in vi and changing the record type when creating new records in BIND (old DNS software).
This is pretty good for bypassing captive portals on public Wi-Fi access points. Sometimes you can use it to get Internet access without paying. These days most are more clever and will block everything other than the default gateway until you sign on.
Four years ago I managed to use it on a plane to read mail over SSH in a terminal. It was so slow it felt like I was in a spaceship to another planet...
That's not that typical anymore. For one thing, it poisons the client's DNS cache, which can make it hard to reach the actual destination later. For another, if DNS is the access control mechanism, that's pretty weak; many people can figure out how to get around a DNS block.
It's more typical now to return the A records, and then route all IPs to a portal server until you login. Logged in sessions get to go forth to the internet.
It's always amusing to see DNS "hackery"[1] like this, and always makes me go back to DNS Toys (https://www.dns.toys/), which generated a huge discussion on HN a year ago [2]
---
[1] well, it's not really hackery if you're being pedantic, since it's doing what the spec allows it to do
It's always amusing when someone discovers DNS TXT records. ClamAV has been using them to announce the latest versions for more years than I care to remember.
clamav.net, like most domains, doesn't enable DNSSEC. Further, as designed, local resolvers don't validate DNSSEC, they just ask the recursive resolver to; a MITM between the local and the recursive can lie.
So when Wikipedia says DNSSEC "can" protect, that's the permissive can: things can happen. But don't rely on it.
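The trust gap described above is visible right in the DNS header: the AD ("authenticated data") bit only records that the recursive resolver claims to have validated, so a stub that trusts it is trusting everything on the path to that resolver. A stdlib-only sketch of decoding those flag bits from a raw DNS message:

```python
import struct

def dns_header_flags(message: bytes) -> dict:
    """Decode the flag bits from the first 4 bytes of a DNS message.

    AD is set by a validating *recursive* resolver; a stub that merely
    reads it is taking the resolver's (and the path's) word for it.
    """
    _msg_id, flags = struct.unpack("!HH", message[:4])
    return {
        "qr": bool(flags & 0x8000),  # is this a response?
        "aa": bool(flags & 0x0400),  # authoritative answer
        "ad": bool(flags & 0x0020),  # "DNSSEC-validated", says the resolver
        "cd": bool(flags & 0x0010),  # checking disabled
    }

# A response header with QR and AD set (ID 0x1234, flags 0x8020):
print(dns_header_flags(bytes.fromhex("12348020")))
```

A MITM between stub and resolver can strip records and still set AD, which is why validation only helps where it is actually performed.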
No, stub resolvers are supposed to, and often do validate DNSSEC signatures. DNSSEC is designed so that validation should happen whenever any DNS data is received over the network.
Wrong (EDIT: oops, I am wrong): current.cvd.clamav.net is NOT currently DNSSEC-signed.
It's just that their dnsquery() in the freshclam daemon is not using val_res_query() when pulling in the version number, so the version lookup over there is plain, unverified DNS.
The best-resourced, most widely respected security teams on the Internet tend strongly not to enable DNSSEC or advocate for its adoption, mostly because it doesn't solve meaningful problems.
It would still be prone to DoS though. The request is unencrypted so a MITM could just not respond to those requests. This would effectively block clients from being able to update/get new definitions.
You don't even need an intentionally evil man in the middle: I can't imagine wanting to block something critical like AV updates on ordinary DNS TTLs, much less the long tail of DNS resolvers that have subtly broken caching strategies of one kind or another and sometimes get the TTLs wrong.
An hour or two may be a huge difference in preventing a viral spread, but at least in my experience it is tough to rely on DNS propagation below the hour mark. Seems like an odd technical choice to me.
"So a man in the middle could prevent updates from happening, and freshclam wouldn't even throw a warning?"
And yet it "works", and as the OP mentioned, it has for a long time. We often get so conditioned to a security-first response that we forget basic security often relies on a "simple" and inexpensive solution. Using DNS this way is a best-effort approach that offloads work to servers designed for the purpose; for an open source project, you use what you have.
Oh, and there is a failover to https if the record is over three hours old.
Well, that would be the fault of ClamAV, if they did not do proper DNSSEC verification and validation of their 'current.cvd.clamav.net' hostname.
Digging into the freshclam code (libfreshclam.c, the dnsquery() function), it is painfully evident that the freshclam daemon does not do basic DNSSEC validation when performing res_query().
Instead, freshclam should be calling `val_res_query()`.
Timely. I've been noticing on flights that the in-flight wifi uses a squid proxy to block you until you pay - but most of the time, you'll get whatever data from the DNS Forwarder even if you haven't paid yet.
I've been noodling on how to build a simple proxy off DNS to test on my next flight.
A word of caution: don't try this in a corporate environment. Many corporate firewalls will generate security alerts on high DNS request rates; Palo Alto PANs specifically have alerts for this, and I believe Fortigate may as well. Most DLP appliances will detect this too, and Splunk has a module to detect it from query logs. I don't have a horse in the race, just trying to save a few here from a paddlin'.
Edit: seems like others have recommended it already. I got it working in a hotel room once after giving up on the utterly broken ToS acceptance page for the WiFi.
If they do DPI on port 53 traffic to only allow DNS, then it's necessary.
The neat thing about iodine is that it even works when you don't even have access to your home/whatever destination IP address due to the firewall redirecting packets to their server. It uses their resolver as a proxy to access a nameserver that you control, allowing you to exfiltrate data and get an uncensored connection.
This reminds me of a curious way of distributing small programs/scripts in the past: using finger, with the payload base64-encoded in the .plan.
% finger xabi
Login: xabi Name:
Directory: /home/xabi Shell: /usr/bin/zsh
On since Mon Jul 17 11:20 (CEST) on pts/0 from xxx.xxx.xxx.xxx
1 second idle
No mail.
Plan:
Latest version of my code (base64 encoded):
-------- 8< ----------- 8< -----------------
SGVsbG8gd29ybGQK
-------- 8< ----------- 8< -----------------
I have a vague memory of a security talk where they used TXT records to deliver a payload to a machine, and they had to write the code so that the rows returned in the TXT records could be run in any order, because the order in which TXT records are returned is not deterministic.
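That ordering constraint is easy to handle with a sequence prefix on each chunk. A sketch (the "NNNN:" chunk format here is my own invention, not whatever the talk used):

```python
import random

def reassemble(txt_strings: list[str]) -> str:
    """Rebuild a payload from TXT strings returned in any order.

    DNS gives no ordering guarantee for the strings in a TXT RRset,
    so each chunk carries a numeric "NNNN:" prefix to restore sequence.
    """
    ordered = sorted(txt_strings, key=lambda s: int(s.split(":", 1)[0]))
    return "".join(s.split(":", 1)[1] for s in ordered)

chunks = [f"{i:04d}:{part}"
          for i, part in enumerate(["SGVs", "bG8g", "d29y", "bGQK"])]
random.shuffle(chunks)             # simulate nondeterministic return order
assert reassemble(chunks) == "SGVsbG8gd29ybGQK"
```

The alternative the talk apparently chose, making every row independently executable, avoids the prefix but is much harder to pull off.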
OP, this is trivial to detect. DNS command and control is a thing malware/attackers use, which among other things includes TXT records, DoH, long A or AAAA records, and many other creative ways; my favorite right now is using a CNAME chain to encode information (no single request is too large or suspicious).
In my experience, bypassing censorship does not mean doing unusual things like this, but rather things like browser extensions that stego your message into legitimate requests.
And encryption wouldn't help much either if this approach became popular enough. It's pretty rare to request TXT records in "normal" end user traffic so it's reasonable to either fully block TXT lookups or flag them as suspicious.
Hmm, isn't this how GitHub and other services check whether you own a domain? What are the advantages of this over other ways of sharing information, like a TXT file or a database?
If only we could put TCP port numbers in DNS to avert an IPv4 availability crunch and effectively expand the address space to 48 bits… one can dream. Apparently impossible.
If TXT records are proof enough when ownership has to be demonstrated for TLS certs, then why not just put the TLS data into the "trusted" TXT records and skip the multi-billion-dollar BS CA biz altogether?
If you're asking "why do we need CAs when they already control the DNS record for that domain", the answer is that DNS doesn't natively have any cryptography involved: your DNS server can serve any information it wants, and that is common practice in IT environments.
Effectively speaking, MITM'ing DNS is relatively easy and common; it's the equivalent of plain HTTP.
So you don't know that the answer you are receiving is actually from the owner of that domain. If they sent you a certificate, you don't know whether it's an attacker's certificate or the owner's.
The CA system is a (very imperfect) method of verifying ownership by having a trusted third party do the ownership verification. This way the certificate the owner gives you is effectively "notarized" so to speak.
tl;dr - DNS has no built-in signing or encryption, and is "MITM'd" by design. It's common practice for your DNS server to be set to your company's server, your ISP's, etc., and those can send any response they want, with no way for you to authenticate whether it's been modified.
The way that we deal with CAs now developed so much after these issues were disclosed.
It is actually adding to my argument. The NSA and other government entities REALLY WANT to control these certificates. However, our interaction with CAs became much more secure because we learned and developed things like CT logs, and major browsers remove entire CAs from their trust stores ASAP if shady stuff happens. You can't do the same with TLDs. This argument is made frequently on here: why would you even want to propose regressing into stuff like DANE? DNS servers are such a bad trust anchor, if you could even call them a trust anchor at all.
If you want to discuss further, I ask you to stay on topic instead of name calling.