Show HN: Use DNS TXT to share information
143 points by danradunchev on July 17, 2023 | hide | past | favorite | 91 comments
dig +short TXT youpay.govorenefekt.com @1.1.1.1 | fold -s

You can base64-encode an image, split it into TXT records, and send it over the Internet. Useful in certain circumstances. Like when one of the communicating parties is under severe censorship.



You could also securely hash a username and password together with something like argon2ID, and then authenticate users by seeing if the base64'ed TXT record exists. No need to hit an overloaded database, just dig and you'll even have the benefits of local caching with per-record TTL's!
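A minimal sketch of that idea, with two loud caveats: the auth zone (auth.example.com) is hypothetical, and argon2id needs a third-party library (e.g. argon2-cffi), so stdlib scrypt stands in for it here:

```python
import base64
import hashlib

def credential_label(username: str, password: str, salt: bytes) -> str:
    """Derive a DNS-safe label from a username/password pair.

    The comment above suggests argon2id; that requires a third-party
    library, so this sketch substitutes hashlib.scrypt from the
    standard library as the KDF.
    """
    digest = hashlib.scrypt(
        f"{username}:{password}".encode(),
        salt=salt, n=2**14, r=8, p=1, dklen=32,
    )
    # base32 (not base64) keeps the label inside DNS's allowed charset;
    # 32 bytes encode to 52 characters, under the 63-octet label limit.
    return base64.b32encode(digest).decode().rstrip("=").lower()

label = credential_label("alice", "hunter2", salt=b"per-deployment-salt")
# Authentication would then be a single (cached) lookup against the
# hypothetical zone:  dig +short TXT <label>.auth.example.com
```

Provisioning would publish a TXT record at each label; the per-record TTL gives you the caching mentioned above, and it also means revoking a credential takes until the TTL expires.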

But should you do crazy things like this? Absolutely not!

DNS is notoriously prone to MITM, injection, cache poisoning, DoS, etc. DANE and DNSSEC are horrible bodges that don't actually do anything useful or in a secure way.

Even though it's the foundation of almost everything we try to do securely, including TLS DV certificate validation (totally fungible across the hundred or so certificate authorities, many of them located in authoritarian regimes!):

DNS is absolutely and irredeemably broken forever, from a security perspective, and can never be fixed. As tempting (and easy) as it is to hack on it or treat it as an ultra-fast and extensible UDP remotely-accessible lookup database, just don't. (It just needs to die in a fire, probably along with SMTP.)

Unfortunately, even if someone came up with some system that could credibly replace it, that system would inevitably have a LOT of privacy and censorship trade-offs, so DNS is what we're stuck with.

Just stay very aware of the risks of encoding anything security-related inside DNS and try to minimize your reliance on it as best you can.


I am just stopping by to say that this is actually a thing. It is called Hesiod and works great in small, perhaps air-gapped, networks.

As a side note, anything security-related exists in a reality of uncertainty. It is expected that sharing properly secured secrets is reasonably safe, but day after day we discover "we didn't know". Sometimes simplicity for a particular application is worth a certain amount of risk.

Sometimes, you need to take the server out of its box, out of the bunker, and plug it to both the power distribution network, and of course... a LAN...

For quick reference:

- https://en.m.wikipedia.org/wiki/Hesiod_(name_service)

- https://jpmens.net/2012/06/28/hesiod-a-lightweight-directory...


Hesiod was actually a thing. I'm not aware of anyone who has run it in production in the last 20 years; perhaps you are? But, in any event, password technology was immature and insecure then (e.g. RMS got upset when passwords were introduced, and remember how easy it was to crack the DES crypt hashes in /etc/passwd, which is why /etc/shadow came to be).

It's even worse now, even on an air-gapped network, because the underlying insecurity is still within DNS. DNS would make a great highly scalable authn DB for non-UNIX accounts (not in the Hesiod sense, but more in the modern web-app sense), except for all of the other highly scalable and secure authn DBs that aren't built on top of woefully insecure tech.

> It is expected that sharing properly secured secrets is reasonably safe

As you know, though, this is why Diffie and Hellman invented public key exchange -- because sharing secrets, even properly secured, is actually not reasonably safe at all in most circumstances. Even if you secure the secret, it's the communication of those secrets during the sharing where everything breaks down. (https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...)

This is of course because a secret is not a secret as soon as you share it with someone else. Instead, DH designed their key exchange around a private key that only you know and a public key that you can share. Each party can derive a shared key using the other's public key, but each one's private key stays known only to them.
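The mechanics of that exchange fit in a few lines of modular arithmetic. This is a toy sketch only: the Mersenne prime below is an illustrative parameter, not a secure choice, and real systems use vetted groups or elliptic curves in hardened libraries:

```python
import secrets

# Toy parameters, for illustration only: P is the Mersenne prime 2^127 - 1.
# Real deployments use standardized groups (e.g. RFC 3526) or curves.
P = 2**127 - 1
G = 3

a = secrets.randbelow(P - 2) + 2   # Alice's private key, never shared
b = secrets.randbelow(P - 2) + 2   # Bob's private key, never shared
A = pow(G, a, P)                   # Alice's public key, safe to publish
B = pow(G, b, P)                   # Bob's public key, safe to publish

# Each side combines its own private key with the other's public key;
# both arrive at G^(a*b) mod P without the secret ever crossing the wire.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
assert shared_alice == shared_bob
```

Only A and B ever travel over the network; an eavesdropper who sees both still can't compute the shared key without solving a discrete log.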

> Sometimes, you need to take the server out of its box, out of the bunker, and plug it to both the power distribution network, and of course... a LAN...

Even though perfect security is obviously impossible, it's still worth striving for. Relying on DNS for more than absolutely necessary is choosing to rely on technology that began without any thought to security and ended up with a history of massive, Internet-wide vulnerabilities. (See elsewhere in the thread for a great Wired article on Dan Kaminsky's successful attack on all of DNS.)


> DANE and DNSSEC are horrible bodges that don't actually do anything useful or in a secure way.

Adoption is extremely poor, usability is horrible, and the approach is quite dated, but I'm not sure DANE and DNSSEC are insecure. Do you have a reference for the latter claim?



This is quite dated, unfortunately.


How so? Has DNSSEC appreciably changed since 2015?


No. The case that post makes has gotten stronger since 2015, not weaker.


I agree with the idea that one should not trust DNS with any information one does not want public. I'm not totally convinced DNS is irreparably broken though. What are your thoughts on DNS over HTTPS?


DNS over HTTPS secures the connection between the client and their resolver. It doesn't improve anything else. It's still vulnerable to tampering at the still insecure connection between the resolver and the authoritative DNS server.


There are a bunch of awkward limitations, tho:

The order of records associated with a name is undefined, so if you spread your data across multiple records, you need to add ordering metadata.

The total size of the records must be less than 64 KiB - they have to fit within a DNS message, which has a limited size.

You can put all of your ~64 K of data into one TXT record, but it has to be split into strings of at most 255 bytes.

You can invent your own record type to contain raw binary data without the sequence-of-strings requirement.

You can use multiple names (eg, numbers) to get past the size limit and to be explicit about the correct order.
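Putting those constraints together, here is a sketch of the split/reassemble logic; the 200-byte chunk size and the "i/n:" prefix format are arbitrary choices for illustration, not any standard:

```python
import base64

def to_txt_chunks(data: bytes, chunk_size: int = 200) -> list[str]:
    """Split a payload into TXT-sized strings with ordering metadata.

    Each chunk carries an "i/n:" prefix so the receiver can reorder,
    since DNS makes no promises about record order. 200 payload bytes
    plus the prefix stays under the 255-byte string limit.
    """
    b64 = base64.b64encode(data).decode()
    parts = [b64[i:i + chunk_size] for i in range(0, len(b64), chunk_size)]
    return [f"{i}/{len(parts)}:{p}" for i, p in enumerate(parts)]

def from_txt_chunks(records: list[str]) -> bytes:
    """Reassemble a payload from TXT strings received in any order."""
    indexed, total = {}, None
    for rec in records:
        head, payload = rec.split(":", 1)   # ":" never occurs in base64
        i, total = (int(x) for x in head.split("/"))
        indexed[i] = payload
    if total is None or len(indexed) != total:
        raise ValueError("missing chunks")
    return base64.b64decode("".join(indexed[i] for i in range(total)))
```

The same prefix scheme works whether the chunks live in one multi-string TXT record or are spread across numbered names.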


I think that a lot of offensive tools that tunnel IP over DNS actually overcame these limitations in real time, at the expense of throughput [1]. It obviously does require agreeing on some sort of protocol on both sides though.

[1] https://github.com/yarrick/iodine


Thanks for teaching me about `fold`. I've been a GNU/Linux user for 10 years and most days I can still learn something new.


Heh, I have been using `fmt` to do this job since the 1990s, I didn’t know POSIX specifies a different program for the same purpose

https://man.freebsd.org/cgi/man.cgi?query=fmt

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/f...


It was pretty common for orgs to use TXT and HINFO (“host info”) records up through the 1990s.

I still use them at work to provide hints and more information but the current fleet of IT workers don’t really grok anything beyond A and PTR.

You’re just using DNS as intended. :-P


> the current fleet of IT workers don’t really grok anything beyond A and PTR.

Part of this, though, is also who is "in control" of the server.

Most of the time, DNS is on the other side of the bastion, managed by Network Ops and out of reach of Joe Developer. Perhaps a reasonable situation: fat-finger DNS and Bad Things can happen. However, Joe Developer has carte blanche access to things like HTTP servers, and with that they were allowed to go hog wild.

So, the innovation in the HTTP space exploded as it was a safer place to dabble to the point that every solution was viewed through the lens of HTTP.

In the end, devs don't know DNS because they don't need to know DNS, and even if they did, the Powers in NetOps weren't going to let them have their grubby fingers on it anyway.


Yep. I was speaking from a netops/sysadmin standpoint.

My belief is that TXT and HINFO saw declining use within an org as Microsoft Windows DNS Server usage grew[1][2][3].

1. Windows DNS Server hides those records behind a sub-menu item.

2. Windows DNS Server attracted noobs (a good thing, I suppose). Heck, these days, we give low/middle-tier IT workers DNS server access (via DnsAdmins group), which is crazy in my mind, but nonetheless common.

3. Crusty, old admins were better at typing yy in vi and changing the record type when creating new records in BIND (old DNS software).


> when creating new records in BIND (old DNS software).

Takes me back to rendering zone files using perl. The times they have changed.


What are some neat things to place in TXT and HINFO records that time seems to have forgotten about?


The iodine protocol allows bidirectional IPv4 traffic over DNS.

https://github.com/yarrick/iodine


This is pretty good for bypassing captive portals on public Wi-Fi access points. Sometimes you can use it to get Internet access without paying. These days most are more clever and will block everything other than the default gateway until you sign on.


Four years ago I managed to use it on a plane to read mail over SSH in a terminal. It was so slow I had the impression of being in a spaceship to another planet...


:) To be fair, SSH doesn't cope well with high-latency, high-packet-loss connections.


mosh[0] helps a lot in these scenarios. It doesn't actually make your connection any faster, but it does make things feel a lot more responsive.

[0] https://mosh.org/


Yes at that point I just wished I had the mosh client installed before boarding the plane ;-)


Don't most of them hijack any DNS traffic to send it to their own portal?


That's not that typical anymore. For one thing, it poisons the client's DNS cache, and it can be hard to reach the actual destination later. For another, if DNS is the access control mechanism, that's pretty weak; many people can figure out how to get around a DNS block.

It's more typical now to return the A records, and then route all IPs to a portal server until you login. Logged in sessions get to go forth to the internet.


Ah right, that probably makes more sense. I just noticed they keep messing up HTTPS, but both of those would do that.


It's always amusing to see DNS "hackery"[1] like this, and always makes me go back to DNS Toys (https://www.dns.toys/), which generated a huge discussion on HN a year ago [2]

---

[1] well, it's not really hackery if you're being pedantic, since it's doing what the spec allows it to do

[2] DNS Toys (946 points): https://news.ycombinator.com/item?id=31704789


It's always amusing when someone discovers DNS TXT records. ClamAV has been using them to announce the latest versions for more years than I care to remember.

  $ dig +short -t txt current.cvd.clamav.net
  "0.103.8:62:26972:1689593340:1:90:49192:334"

For anyone interested, Freshclam interprets this as:

Latest ClamAV version: 0.103.8

Latest Main DB version: 62

Latest Daily DB version: 26972

UNIX Timestamp: 1689593340

...and then some other version numbers and things I don't remember, one is probably a bytecode DB version 334, f-level 90 maybe.
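Based on that interpretation, a parsing sketch; only the first four field meanings are stated above, so the remaining fields are kept as an opaque tail rather than guessed at:

```python
def parse_clamav_txt(record: str) -> dict:
    """Parse the colon-delimited TXT record served at current.cvd.clamav.net.

    Field meanings beyond the first four are not documented in this
    thread, so they are left unlabeled.
    """
    fields = record.strip('"').split(":")
    return {
        "clamav_version": fields[0],
        "main_db_version": int(fields[1]),
        "daily_db_version": int(fields[2]),
        "unix_timestamp": int(fields[3]),
        "other_fields": fields[4:],
    }

info = parse_clamav_txt('"0.103.8:62:26972:1689593340:1:90:49192:334"')
```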

Anyway, nothing new, works as designed. You can do all kinds of neat tricks with it. DNS has a lot going on that most people don't (ab)use.


That is... interesting that they do not even use HTTPS or any type of signature for that info.

So a man in the middle could prevent updates from happening, and freshclam wouldn't even throw a warning?


Won't DNSSEC prevent MITM attacks in this case?

From https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex... - "DNSSEC can protect any data published in the DNS, including text records (TXT) and mail exchange records (MX)"


clamav.net, like most domains, doesn't enable DNSSEC. Further, as designed, local resolvers don't validate DNSSEC, they just ask the recursive resolver to; a MITM between the local and the recursive can lie.

So when wikipedia says DNSSEC can protect, that's the permissive can. Like things can happen. But don't rely on it.


No, stub resolvers are supposed to, and often do, validate DNSSEC signatures. DNSSEC is designed so that validation should happen whenever any DNS data is received over the network.


That's the opposite of how DNSSEC works in practice.


Wrong (EDIT: oops, I am wrong) current.cvd.clamav.net is (EDIT:) NOT currently DNSSEC-signed.

It's just that the freshclam daemon's dnsquery() is not using val_res_query() when pulling in the version number, so the DNS querying going on over there is unverified.


But there's no chain from the root, or at least that's what I'm getting from this tool [1].

[1] https://dnssec-analyzer.verisignlabs.com/clamav.net


NASTY! You’d be right. I too did not get the ‘ad’ flag in my own dig response.

This means any TXT record can easily be spoofed via simple transparent MitM packet munging.

https://dnssec-analyzer.verisignlabs.com/current.cvd.clamav....


> Wrong (EDIT: oops, I am wrong) current.cvd.clamav.net is (EDIT:) NOT currently DNSSEC-signed.

When it's better to just delete and replace a comment.


There's no DS record for clamav.net at all. They're not signed.


Related:

https://news.ycombinator.com/item?id=36171696 - "Calling time on DNSSEC: The costs exceed the benefits"


Silly me for expecting an anti-virus company to care about security. The point remains: DNSSEC COULD make this safe to do.


The best-resourced, most widely respected security teams on the Internet tend strongly not to enable DNSSEC or advocate for its adoption, mostly because it doesn't solve meaningful problems.


DNSSEC failures result in denial of service. Turning it on in no way makes the experience safer for an end user.


It would still be prone to DoS though. The request is unencrypted so a MITM could just not respond to those requests. This would effectively block clients from being able to update/get new definitions.


You don't even need an intentionally evil man in the middle: I can't imagine wanting to gate something critical like AV updates on ordinary DNS TTLs, much less on the long tail of DNS resolvers that have subtly broken caching strategies of one kind or another and sometimes get the TTLs wrong.

An hour or two may be a huge difference in preventing a viral spread, but at least in my experience it is tough to rely on DNS propagation below the hour mark. Seems like an odd technical choice to me.


"So a man in the middle could prevent updates from happening, and freshclam wouldn't even throw a warning?"

And yet it "works", and as the OP mentioned, it has for a long time. Often we get so conditioned to a security-first response that we forget basic security often relies upon a "simple" and inexpensive solution. Using DNS in this way is a best-effort scenario that offloads work to servers designed for the purpose, and for an open source project you use what you have.

Oh, and there is a failover to https if the record is over three hours old.

https://docs.clamav.net/faq/faq-troubleshoot.html


Well, that would be the fault of ClamAV if they did not do proper DNSSEC verification and validation of their ‘current.cvd.clamav.net’ hostname.

Digging into the code of freshclam (libfreshclam.c, the dnsquery() function), it is painfully evident that the daemon does no DNSSEC validation when pulling the record: it calls plain res_query().

Instead, freshclam should be calling `val_res_query()`.


Yep, it's another "security" solution that is dead on arrival


Seems like the remaining solutions boil down to a private PGP, IPsec, or mutual-TLS data connection over a direct IPv4 or IPv6 address.


Signal API is also another solution.


Timely. I've been noticing on flights that the in-flight wifi uses a Squid proxy to block you until you pay, but most of the time you'll get whatever data you like from the DNS forwarder even if you haven't paid yet.

I've been noodling on how to build a simple proxy off DNS to test on my next flight.


You might be interested in iodine.

https://github.com/yarrick/iodine


A word of caution: don't try this in a corporate environment. Many corporate firewalls will generate security alerts on high DNS request rates. Palo Alto PANs specifically have alerts for this; I believe Fortigate may as well. Most of the DLP appliances will detect this too, and Splunk has a module to detect it based on query logs. I don't have a horse in the race, just trying to save a few here from a paddlin'.


A regular proxy on port 53 might work? Is it necessary to actually use DNS?

Otherwise there's https://github.com/yarrick/iodine

Edit: seems like others have recommended it already. I got it working in a hotel room once after giving up on the utterly broken ToS acceptance page for the WiFi.


If they do DPI on port 53 traffic to only allow DNS, then it's necessary.

The neat thing about iodine is that it works even when you can't reach your home/destination IP address directly because the firewall redirects packets to its own server. It uses their resolver as a proxy to reach a nameserver that you control, allowing you to exfiltrate data and get an uncensored connection.


Not a proxy, but SoftEther VPN supports connections over DNS and/or ICMP. This is meant for circumventing firewalls.

https://www.softether.org/1-features/1._Ultimate_Powerful_VP...!)


udp2raw also supports ICMP tunneling, as a simpler/leaner option that you could run WireGuard over. It's quite performant compared to DNS.


This sort of escape is how Kaminsky ended up as the guy who broke DNS.

(From the opener to https://www.wired.com/2008/11/ff-kaminsky/ )


RIP Dan Kaminsky


Search for iodine


Actually I turned DNS into a database ;)

https://dyna53.io/


For an overview of interesting uses of DNS, see the following FOSDEM talk [0] and its HN discussion [1].

[0] Bizarre and Unusual Uses of DNS. https://fosdem.org/2023/schedule/event/dns_bizarre_and_unusu...

[1] https://news.ycombinator.com/item?id=34939809


This reminds me of a very curious way of distributing small programs/scripts in the past: using finger, with the code attached base64-encoded in the .plan

  % finger xabi
  Login: xabi              Name:
  Directory: /home/xabi                Shell: /usr/bin/zsh
  On since Mon Jul 17 11:20 (CEST) on pts/0 from xxx.xxx.xxx.xxx
   1 second idle
  No mail.
  Plan:
  Latest version of my code (base64 encoded):

  -------- 8< ----------- 8< -----------------
  SGVsbG8gd29ybGQK
  -------- 8< ----------- 8< -----------------


I have a vague memory of a security talk where they used TXT records to deliver a payload to a machine, and they had to write the code such that the rows returned in the TXT records could be run in any order, because the order in which TXT records are returned is not deterministic.

I couldn't find the talk, but I found this nice article: https://unit42.paloaltonetworks.com/dns-tunneling-how-dns-ca...


OP, this is trivial to detect. DNS command and control is a thing malware/attackers use, which among other things includes TXT records, DoH, long A or AAAA records, and many other creative approaches, my favourite right now being a CNAME chain to encode information (no single request is too large or suspicious).

In my experience, bypassing censorship does not mean doing unusual things like this, but rather things like browser extensions that stego your message into legitimate requests.


what do you mean by CNAME chain? sounds interesting


Lookup <request>.site.com gets you A.com -> B.com -> C.com -> IP. ABC is the response message and the subdomain is the request.
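A rough sketch of how the response side of such a chain could be encoded and decoded (the domain, fragment size, and label layout here are all made up for illustration; a real tool would serve these CNAMEs from an authoritative server it controls):

```python
import base64

def encode_cname_chain(message: bytes, suffix: str = "c2.example.com") -> list[str]:
    """Encode a response message as an ordered list of CNAME targets.

    Each hop carries one small base32 fragment as its leftmost label,
    so no single record looks unusually long or high-entropy.
    """
    b32 = base64.b32encode(message).decode().rstrip("=").lower()
    frags = [b32[i:i + 20] for i in range(0, len(b32), 20)]
    # the hop index lives in the second label, keeping the chain parseable
    return [f"{frag}.h{i}.{suffix}" for i, frag in enumerate(frags)]

def decode_cname_chain(chain: list[str]) -> bytes:
    """Recover the message by concatenating fragments in hop order."""
    b32 = "".join(name.split(".")[0] for name in chain).upper()
    b32 += "=" * (-len(b32) % 8)    # restore base32 padding
    return base64.b32decode(b32)
```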


"Like when one of the parties is under severe censorship."

What if the censor hijacks DNS queries? This is also done outside the realm of censorship, e.g., on hotel wifi networks.


Relying on DNS is not recommended when under severe censorship


Why, if it is encrypted?


It’s impossibly easy to block resolution of txt records, for instance.


> It’s impossibly easy

So, at minimum, it's of medium difficulty?


Why involve more systems than necessary? DNS is only needed if you don't have an IP address for the destination.


DPI will make short work of your unencrypted DNS records.


And encryption wouldn't help much either if this approach became popular enough. It's pretty rare to request TXT records in "normal" end user traffic so it's reasonable to either fully block TXT lookups or flag them as suspicious.


The quote used is from an excellent article here: https://bgr.com/tech/the-internet-isnt-free-it-never-was-and...


Hmm, isn't this how GitHub and other services check whether you own a domain? What are the advantages of this over other ways of sharing information, like a TXT file or a database?


If only we could put TCP port numbers in DNS to avert an IPv4 availability crunch and effectively expand the address space to 48 bits… one can dream. Apparently impossible.


You can; they are called SRV records, but browsers don’t really support lookups that way.
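For reference, an SRV record encodes a priority, weight, port, and target; in zone-file form it looks like this (the names and port are made up):

```zone
; _service._proto.name     TTL  class type priority weight port  target
_https._tcp.example.com.   3600 IN    SRV  10       5      8443  backend.example.com.
```

A client that understood SRV would resolve the name, pick a target by priority and weight, and connect to the given port; browsers simply never adopted that lookup for HTTP(S).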



It doesn’t expand the IPv4 space in any way, shape or form.


Get ready for iodine

https://github.com/yarrick/iodine

Or IP-over-DNS


If TXT records are proof enough when ownership has to be proven for TLS certs, then why not just put the TLS data into the "trusted" TXT records and skip the multi-billion-dollar-BS-CA-biz altogether?


I'm not sure I 100% understand your question.

If you're asking "Why do we need CAs when the domain owner already controls the DNS record for that domain?", then the answer is that DNS doesn't natively have any cryptography involved; your DNS server can serve any information it wants, and this is common practice in IT environments.

Effectively speaking, MITM'ing DNS is relatively easy and common, as it's the equivalent of plain HTTP.

So you don't know that the answer you are receiving is actually from the owner of that domain. If they sent you a certificate, you don't know whether it's an attacker's certificate or the owner's.

The CA system is a (very imperfect) method of verifying ownership by having a trusted third party do the ownership verification. This way the certificate the owner gives you is effectively "notarized" so to speak.

tl;dr - DNS has no built-in signing or encryption, and is "MITM'd" by design. It's common practice for your DNS server to be set to your company's DNS server, your ISP's, etc. And those can send any response they want, with no way for you to authenticate whether it's been modified or not.


First, Let's Encrypt exists and is free. Second, DNS-01 uses multi-perspective validation, which is fairly complex.

https://letsencrypt.org/2020/02/19/multi-perspective-validat...


Makes sense indeed. It exists and it's called DANE. https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


Yes and then your government controls your “trusted” connection.


Like the NSA did not control CAs? Or are you one of those conspiracy nuts who think the NSA cracked it?


The way we deal with CAs now developed largely after these issues were disclosed.

It is actually adding to my argument. The NSA and any other government entities REALLY WANT to control these certificates. However, our interaction with CAs became much more secure because we learned and developed things like CT logs. Major browsers remove entire CAs from their trust stores ASAP if shady stuff happens. You can't do the same with TLDs. This argument is made frequently on here; why would you even want to propose regressing to stuff like DANE…? DNS servers are such a bad trust anchor, if you can even call them a trust anchor at all.

If you want to discuss further, I ask you to stay on topic instead of name-calling.


Can you make a version that's HTTP over DNS (rather than the more common DNS over HTTPS) for use on gated internet that allows DNS but no web traffic?




