Chaos Lever Podcast

Ned and Chris discuss DNS’s importance, illuminating its role in networking and the need to enhance its security.


The Internet's Phonebook
In this episode of Chaos Lever, Ned and Chris dive into the world of DNS—the system that acts like the internet's phonebook by translating website names into IP addresses that computers understand. They explore the origins of DNS, its role in networking, and its evolution over the years. The duo also discusses the latest advancements in DNS security and how these efforts aim to protect users from cyber threats, making the internet safer for everyone. 



What is Chaos Lever Podcast?

Chaos Lever examines emerging trends and new technology for the enterprise and beyond. Hosts Ned Bellavance and Chris Hayner examine the tech landscape through a skeptical lens based on over 40 combined years in the industry. Are we all doomed? Yes. Will the apocalypse be streamed on TikTok? Probably. Does Joanie still love Chachi? Decidedly not.

Ned: I spent the extra money to get a nice chair. I don’t know if it actually helped at all. I feel the same.

Chris: It doesn’t even look like you’re in a chair. You could just be squatting awkwardly.

Ned: I’m floating. On a ball.

Ned: Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I’m definitely not a robot. I’m a real human person and not at all a human sitting on a balance ball awkwardly, while I listen to overly long Neal Stephenson books. That’s not a thing I would do, constantly and all the time. With me is Chris, who is also here. Hey, Chris.

Chris: I do have a chair, in case anybody was curious.

Ned: I bet you do. Have you thrown the chair recently?

Chris: This is a heavy chair.

Ned: [laugh]. Well, I guess it depends on how angry you are?

Chris: Well, it’s more that it depends on how fast I want to be able to get to the emergency room.

Ned: Because you’ve thrown your back out again?

Chris: Yeah. You know, just in case, I’ve done the research. Like, I’ve Googled traction plus drive plus manual transmission.

Ned: Not great.

Chris: No.

Ned: Yeah. I remember when I was, in my early 20s, also driving a manual transmission, and I sprained my ankle.

Chris: Well, that’s the end of your life, effectively.

Ned: [laugh]. And it was like—and I was with someone who didn’t know how to drive a manual, so—

Chris: Ohh [laugh].

Ned: I had to drive us home. And of course, I didn’t have good health insurance because I was working retail, so I pretty much just limped it out [laugh] for the next, like, eight weeks, until I could walk normally. And I don’t think it ever fully healed right. But that’s fine. That’s what your 20s are for: irreparable harm.

Chris: Which you totally think is going to just take care of itself. And then fast-forward, and it doesn’t.

Ned: But you don’t really realize it until you hit your 40s, and by then it’s too late. Yay. Not that this is a metaphor for anything that we’re going to talk about. Hey, it’s DNS [laugh]. Decisions that were made many years ago absolutely come back to haunt us.

Chris: Constantly.

Ned: And always. This is going to be a two-parter, not to spoil things. But I was 3600 words in and not done yet. Hadn’t even arrived at the thing that caused me to write about this in the first place, and I was like, mmm, I should stop [laugh]. But it’s all gold, so I ain’t cutting shit.

Chris: Oh, you’re half right.

Ned: Yes, yes, I am. So, there are few technologies as foundational to networking as DNS. In general, it forms the foundation for name resolution on basically every network, big and small. Even the little network that runs internally on your loopback interface uses DNS. Fun.

Last week, Microsoft announced a preview feature for Windows they’re calling Zero Trust DNS, or ZTDNS, and I thought this presented a great opportunity to dive into what DNS is, the trouble with DNS from a security perspective, and what ZTDNS technology is attempting to do. I should also say that while I think the idea behind ZTDNS is interesting, because it’s Microsoft, it’s also proprietary, fragile, and Windows-only, so I don’t think it’s going to be the silver bullet for DNS security issues across the internet. Don’t get too excited. But anyway, DNS. Let’s talk about it.

Chris: Weee.

Ned: What do you know about DNS, Chris?

Chris: A Domain Name… System?

Ned: Sure, we’ll go with that [laugh]. To understand DNS, it’s useful to trace things back to where it all began: ARPANET.

Chris: Oh, I thought you were going to say FORTRAN?

Ned: [laugh]. Well… yes, that too. But we’re not going to talk about FORTRAN in this episode, I don’t think, unless you bring it up, which is fine. That’s your prerogative. So, in 1969—we’re going way back here—two hosts were connected at UCLA and the Stanford Research Institute. The hosts at each site were connected using Interface Message Processors, or IMPs. Yep, that’s right. I learned last week that the gateway device at each site in ARPANET was called an IMP. Isn’t that delightful?

Chris: It’s certainly something.

Ned: Nerd culture has deep roots, and the jokes have never been that good. The very early implementation of those IMPs was really only meant to handle a single host at each site, but the standard used to interface between the hosts and the IMP did allow for multiple hosts. But honestly, who could imagine having multiple hosts at one site? The luxury. My goodness, what are we even doing here?

The very first hosts were an SDS Sigma 7 and an SDS 940 at UCLA and SRI, respectively. We could do a whole show on early computing with SDS, IBM, and other competitors in the field, but that show is not today, and I’m putting you in charge, Chris, of doing that.

Chris: [sigh].

Ned: Don’t pretend you’re not excited.

Chris: Copy-paste.

Ned: [laugh]. ARPANET kept adding more IMPs over the next 15 years while also developing new technologies to deal with the challenges of multi-host, multi-hop networking, transmission control between nodes, electronic communications, and more. And that’s where we got standards like TCP/IP, Ethernet, SMTP, and Telnet. They were all created during this time period.

Chris: And none of them have changed since.

Ned: That is… remarkably true in the [laugh] saddest way possible. At the same time, there were new networks created for different purposes. There’s one called CSNET, one called NSFNET, and the scale and number of nodes on those networks continued to increase. What had not been created yet was a distributed system to translate machine names to something human readable. Up until that point, every system had a local file, called hosts or HOSTS.TXT, that simply had a list of computer names and their network addresses. That’s it. That’s how the whole thing worked.

Early versions of ARPANET used a transport layer protocol that was called—well, it wasn’t actually called this at the time, but it was later backronymed to be NCP. NCP included numeric identifiers for the destination host, and that was it. So, this works great when you only have a network with four hosts on it. Remembering the correct number and routing is pretty straightforward. Those IMPs, they didn’t have to do a whole lot initially.

As the number of hosts on ARPANET grew, the destination host address and NCP overall were replaced by TCP, for transmission control, and IP, for addressing and host identification. ARPANET officially adopted TCP/IP and deprecated NCP in 1983. And this is what most people point to as the beginning of the modern internet.

Chris: Are you sure it wasn’t Pokémon?

Ned: Well, I mean, we will be catching them all later. From this point on, the number of nodes in your local network and the inter-network—aka the internet—absolutely exploded. Computer systems are only too happy to use numbers to identify this explosion of new hosts, but humans were not so great with remembering a few hundred host numbers, and which host each number identifies. Think about how many people you know, and the telephone number for each of those people. How many can you actually keep in your head realistically?

Chris: Well, I’m not really the best example because I actually only know five people.

Ned: That’s a valid point, and I think it’s telling, you know, in this day and age, that most people don’t have any mobile numbers memorized besides their own, and that’s just so they can give it to somebody else.

Chris: Right, and even there… not always, uh, top of mind.

Ned: [laugh]. So, like I said, the simplest hack was to add a hosts file to your system that had the identifier-to-hostname translation for all the other hosts on the network. And that worked fine. When there were relatively few hosts, and new hosts were being added maybe a few times a year, someone would update the hosts file and pass it around to everybody else, and now everybody was up-to-date and could find all the new hosts. By June of 1983, CSNET, one of ARPANET’s companion networks, had more than 70 sites connected, each with more than one host, so passing around a static hosts file became, we could call it, untenable, and so we had the establishment of the Domain Name System in RFC 882 in November of 1983. And now I will read the whole of RFC 882 in full detail. Buckle up everybody. No, I wouldn’t do that. Well, I mean, unless of course listeners want that. Do listeners want that?

Chris: I mean, we can ask them but I’m going to go ahead and guess that—

Ned: Well, if you do in fact, want me to read the entire thing in full—I don’t know, maybe we should start a Patreon for that or something—let us know. Go to pod.chaoslever.com, and leave us a comment or send us a voicemail. Wait a minute. pod.chaoslever.com? What’s all that about? DNSes all around us, Chris, like an oppressive fog of depression.

Chris: [sigh].

Ned: RFC 882 established a hierchal—I can’t say this word. And it’s a good thing that’s in the document a whole bunch of times.

Chris: Heirarchical.

Ned: Hiearchical structure.

Chris: Hiker… high-caramba. No.

Ned: A high-caramba structure. I like that. We’re going with that. RFC 882 established a high-caramba structure of domains, starting with the root domain and branching outward. So, a hostname would include the name of the host, then a dot, then subdomains, all separated by dots, then the top-level domain, and then a final dot at the end representing the root. Yes, friends, technically, the domain portion of all URLs should end with a period, but we decided not to do that because it was confusing and looked stupid. I’m amazed that we actually made that decision.

Chris: I mean, it’s also one of the most fun things when you teach somebody proper networking because it still exists—

Ned: Mmm.

Chris: —we just don’t see it in, like, URLs.

Ned: Yeah. If you do an nslookup, depending on the program, when it returns the records, it’ll have that trailing period on them. So, looking at pod.chaoslever.com, pod is the host, chaoslever is a subdomain, and com is the top-level domain. And the implied dot after the com is the root domain, which corresponds to the root name servers. But hold on, we aren’t there yet.
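
If it helps to see that trailing root dot concretely, here’s a tiny Python sketch of how a name like pod.chaoslever.com. breaks into labels the way DNS encodes them on the wire—each label length-prefixed, with a zero-length label standing in for the root. It’s purely illustrative, not any particular resolver’s code.

```python
# Rough sketch: "pod.chaoslever.com." broken into labels, with the empty label
# at the end standing in for the root. DNS encodes names on the wire as
# length-prefixed labels, and a single 0x00 byte marks the root.

def encode_name(fqdn: str) -> bytes:
    labels = fqdn.rstrip(".").split(".")      # ["pod", "chaoslever", "com"]
    wire = b""
    for label in labels:
        wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"                     # the trailing dot / root label

print(encode_name("pod.chaoslever.com."))
# b'\x03pod\nchaoslever\x03com\x00'
```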

The second big decision made by RFC 882 was to make DNS distributed in nature. There was not going to be a single authoritative database of all DNS entries. And honestly, that decision was absolutely the right one, but it also led to endless headaches as we’ll see shortly. They could have gone with a centralized directory managed by one entity, and that would have been awful, so I’m glad they didn’t. Now, what’s interesting is that even back in 1983, the writers of the RFC realized that DNS can be used for more than just name resolution.

To quote, “The costs of implementing such a facility dictate that it be generally useful, and not restricted to a single application. We should be able to use names to retrieve host addresses, mailbox data, and other as yet undetermined information.” End quote. Another pressing issue of the time was the flow of email. Even though SMTP was introduced in 1982, the various email hosting applications did not have a standardized way to find and deliver mail to the recipient. It would kind of be like if we didn’t have street addresses, and it was just, like, “Yeah, you deliver it to the box that’s around the corner past the third Citgo.”

Chris: Right where Dave used to live.

Ned: [laugh]. Exactly. That was how we delivered mail. And if you needed to forward it to a new server, then you’d rely on Dave standing outside by the mailbox going, “Oh, no. You now have to go three blocks down, hang a right where the old Esso used to be”—I don’t know why I’m referencing gas stations here—“And you’ll find it. It’s under a rock buried in the backyard.” That was bad [laugh]. Not a great way to deliver email.

So, DNS was meant to solve that with special record types and a standard ending for email recipients in the form of recipient@address. Now, you could look up that address, find the mail delivery, mail forwarding, or mail exchange record, and figure out how to deliver that email. DNS could also be used for all kinds of other things, and it is today. We’ll talk about that flexibility and how it can be abused to encapsulate commands in DNS in part two, but there’s a thing called a TXT record, and you can put whatever the hell you want in that thing. It’s great.

Chris: DNS is a database.

Ned: [laugh]. Oh. It actually is. Corey Quinn is not wrong. It’s just not a very efficient one. The RFC identifies three primary components. The domain namespace, which we’ve already covered, has an ay-caramba tree-and-leaf structure. Name servers hold a portion of the namespace locally, plus entries for other name servers that they can refer queries to if they don’t have the information cached locally. The portion of the namespace that they have the complete information on is called their zone, and they are said to be authoritative for that zone. And finally, there are resolvers, which are programs that know about at least one name server, and contain the logic to parse queries on behalf of other applications. You have a resolver running on whatever operating system you’re using right now.

Chris: Yep. And hilariously, that host.txt file still totally works.

Ned: Oh, yeah. No, that is still there. We are living with the decisions of 40 years ago. Every day.

Chris: One of the things that’s important to note, and why this is so valuable, is that it addresses two things. One, the larger internet was getting bigger very quickly. We went from one to two to four to 70 to 7000 very, very fast.

Ned: Yes.

Chris: There’s a fun story about the origins of the Yahoo directory, which predated the Yahoo search engine. That used to be just a guy who knew all the websites and could keep track of them manually.

Ned: Every single website that existed, he knew all of them.

Chris: Yeah, he knew where all the rocks were, he knew where all the Daves were. But after a while, you just can’t do that. That’s what DNS was doing: centralizing all that stuff so everybody had this address pool. But more importantly, we also started to create private networks that were not out in the world.

Ned: Yes. And those private networks also needed some form of name resolution, and most of them chose to use DNS. Some of them didn’t; we’ll get to those next week [laugh].

Chris: [laugh].

Ned: Foreshadowing. So, when you go to pod.chaoslever.com, your local resolver attempts to resolve that name to an IP address. First, it actually looks at that hosts file, and if there’s an entry, it’s going to use it. Really want to mess with someone? Alter their hosts file. It’s fun. If it doesn’t find it in the host file, then it checks to see if it has a value in the local DNS cache, and it will use that value until the TTL, or Time To Live, for that record has expired. If it doesn’t have that value cached locally, then it will send the DNS query to one of the configured name servers on the operating system.
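
To make that lookup order concrete, here’s a rough Python sketch of the flow: hosts file first, then a local cache honoring TTLs, then the configured name server. The file path, the cache shape, and the 300-second TTL are illustrative assumptions, not how any real OS resolver is implemented.

```python
import time
import socket

HOSTS_FILE = "/etc/hosts"      # assumption: Unix-style path; Windows keeps it
                               # under System32\drivers\etc\hosts
local_cache = {}               # hypothetical cache: name -> (address, expires_at)

def check_hosts_file(name):
    # Step 1: a static hosts file entry wins outright.
    try:
        with open(HOSTS_FILE) as f:
            for line in f:
                parts = line.split("#")[0].split()
                if len(parts) >= 2 and name in parts[1:]:
                    return parts[0]
    except FileNotFoundError:
        pass
    return None

def check_cache(name):
    # Step 2: use a cached answer until its TTL expires.
    entry = local_cache.get(name)
    if entry and entry[1] > time.time():
        return entry[0]
    return None

def resolve(name):
    for lookup in (check_hosts_file, check_cache):
        addr = lookup(name)
        if addr:
            return addr
    # Step 3: fall back to the configured name server (standing in here
    # for "ask the OS resolver").
    addr = socket.gethostbyname(name)
    local_cache[name] = (addr, time.time() + 300)   # made-up 300-second TTL
    return addr

print(resolve("pod.chaoslever.com"))
```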

That name server in turn will respond with the answer if it has it, or respond back with a different server to try, and technically, it can also reach out in your stead to other servers. It depends on how the name server is configured to deal with when it doesn’t know the answer for a query. The RFC also has some core assumptions that are just, they’re just adorable, and honestly understandable considering it was 1983, and no one had a PC, let alone a supercomputer in their pocket, and an average of 17 devices in their house that needed an IP address and DNS. That’s from 2023: the average house has 17 connected devices. Which I think is low, honestly [laugh].

Chris: Well, the side question is how many people know what all 17 devices are.

Ned: [laugh]. I definitely could not name all of them in my house, I’m sure I would miss at least 12. So, some of the core assumptions they made. One is that the size of the total database will initially be proportional to the number of hosts using the system. And they thought that mailboxes were the thing that was going to change that. We never really used that portion of DNS, and that’s probably for the best.

They also assumed that most of the data in the system will change very slowly. ‘Rapid’ is defined as once a minute. Incidentally, Kubernetes looks at that and just laughs maniacally. Third, clients of the domain system should be able to identify trusted name servers they prefer to use. Maybe—and I’m just spitballing here—that trust mechanism should be in the specification somewhere. It is not. It’s just, yeah, you know, whatever name servers you feel like you can trust.

Chris: Safe.

Ned: Yeah. And some users will wish to access the database via datagrams, and others will prefer to use virtual circuits. Now, this predates HTTP and TLS, so they weren’t wrong, per se, but I don’t even know what the virtual circuit is, Chris, and I didn’t look it up.

Chris: [What 00:19:38]? No.

Ned: Do you know?

Chris: I mean, yes, but I’m not going to tell you.

Ned: Okay [laugh]. Naturally. As you shouldn’t. Still, this RFC from 1983 is immensely well written, and it’s prescient in some ways. Like, they knew that the database would be distributed, and they identified iteration and recursion as the ways to answer queries. So, if the server that you contacted didn’t have the record because it wasn’t authoritative for the zone, it could either use recursion to find that answer, or it could reply back with other name servers to try.

The RFC also identified resource record types that we still use today. Resource records are individual entries in a resource set for a zone. For instance, when I look up pod.chaoslever.com, I get back a resource record of type A, with the class IN, and a name and an address. Type A records are host addresses. That’s why the A is in there. Did you ever wonder why it’s class IN?

Chris: I assumed it was for internet.

Ned: You’re right [laugh].

Chris: Got one.

Ned: It does stand for internet. It was not the only class. There were actually several different classes, but effectively, the only one used today is IN, which was short for the ARPA internet system. And the contents of the response beyond that largely depend on the record type. I get back an address because my query was looking for a Type A record, but other types include CNAMEs, MF and MD records—which you don’t really see anymore; they’re all MX records now—and SOA, for start of authority. And this was all created back in 1983, and honestly, it’s virtually unchanged.

Chris: And also TXT. Don’t forget TXT.

Ned: Useful but also terrible. There are also SRV records, which again, I had no time to get into, but that is a whole other can of worms, and the foundation of Active Directory, honestly. Since DNS is high-caramba, the authority for any given name server is derived from the parent domain’s name servers. So, if I want to be authoritative for the subdomain pickles.cucumbers.com, then that authority is delegated by the name servers for cucumbers.com.

The name servers for cucumbers.com, in turn, get their authority from the name servers for com, which get theirs from the root DNS servers for the internet. There’s supposed to be sort of a chain of trust, but that chain can be subverted because none of this is cryptographically signed at all. If my client is pointed at a name server that I, in theory, trust, that name server can just lie to me and claim to be authoritative for the entirety of the tree from the root label all the way down to pickles.cucumbers.com, and I will believe them because I have no reason not to.

If you’ve ever accidentally broken DNS in an Active Directory forest by doing, I don’t know, split-horizon DNS, you know what I’m talking about. Weee. Whycannoonegettoanything.com [sigh]? To add insult to injury, even if I am using the correct name server for a given domain, the requests are sent using UDP over port 53. No encryption, no signing. No session even. It would be trivial for an attacker to intercept responses and give me false resource records. They just have to respond before the legitimate name server does. That’s it [laugh].
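
To make that concrete, here’s a minimal sketch that hand-rolls a DNS query and sends it to a resolver over plain UDP on port 53—no encryption, no signatures, no session. Using 1.1.1.1 as the target is just an example; the point is that everything here crosses the wire in the clear.

```python
import socket
import struct
import secrets

def build_query(name: str) -> bytes:
    # Header: random ID, RD (recursion desired) flag set, one question.
    header = struct.pack(">HHHHHH", secrets.randbelow(65536), 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=1 (A) and QCLASS=1 (IN).
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("pod.chaoslever.com"), ("1.1.1.1", 53))  # clear text on the wire
response, _ = sock.recvfrom(512)
print(response.hex())  # anyone on the path could have read or forged this answer
```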

Chris: Remember, this was 1983. We didn’t know what security was yet.

Ned: That is very true. And the requests are sent in clear text, which means anyone on the wire can see the contents of every DNS query I send to the name server and its response. You could say this system is ripe for abuse.

Chris: Problematical, some would say.

Ned: Now, they did acknowledge that back in 1983, but they didn’t really understand the scope of the potential issues. There were 70 sites on the global internet, and you literally knew every person involved. So, if you had a problem, you could just reach out to Fred over at UCLA, and be like, “What the hell, Fred?” Then the internet exploded, and suddenly DNS went from a convenience to an absolute necessity. But nothing changed about the standard aside from introducing some new resource record types and advice on how to structure your DNS implementations.

So, there are roughly three main concerns that need to be addressed with the security of DNS. One is data integrity: how do I know the response I’m getting is genuine and untainted? Two is authentication: how can I trust the server I get a response from? And three is privacy: how do I secure the responses from—here’s another word I can’t pronounce—interlocutors. Interlocutors?

Chris: No, I think you had it right the first time.

Ned: Okay. Well, I’m not going to say it again, so we’re safe.

Chris: That’s fair.

Ned: Ay caramba [laugh]. So, whatever device you’re currently listening to this podcast on likely has a network connection, unless you’re listening on, I don’t know, a Zune or something. Where did you get a Zune? Well, done. And the DNS servers it is using were probably offered up through DHCP, aka Dynamic Host Configuration Protocol. We could do a whole show on that, and we probably will at some point. This makes the bold assumption that whatever network you’ve obtained an IP address from is giving you DNS servers that you can trust.

Chris: And if you were paying attention to the Tuesday show, you’ll know they do that all the time.

Ned: Mm-hm [laugh].

Chris: And you can’t always trust it.

Ned: No. There are some benefits to this arrangement, especially if you’re inside of a corporate environment. Most organizations, as you mentioned, Chris, are going to have an internal network and an internal DNS service that can resolve those internal-only domains you might be running, like, say, megacorp.local. When you need to connect to your file server at accounting.megacorp.local, the DNS servers inside your corporate network handle the name resolution for the megacorp.local zone. For zones outside of megacorp.local, the internal DNS servers can query external DNS servers and resolve requests for you.

Moving to the personal world, your home Wi-Fi probably just uses the wireless router for DNS, which in all likelihood doesn’t actually have any zones on it, but is simply acting as a proxy for your ISP’s DNS servers, which it receives through DHCP as well. When you’re out and about connecting to free internet, you’re now at the mercy of whatever janky DNS server that free Wi-Fi is using. If someone’s feeling malicious—and they probably are—not only can they spy on all of your DNS requests, they can also lie to you by answering DNS queries for which they are not authoritative. This is not great.

So, some solutions. How can the DNS client on your machine trust the response it gets back from the DNS server? How does it know those records are genuine? One proposal is DNSSEC. RFC 2065—from back in 1997, so this is actually pretty early on—was the beginning of adding what are called DNS security extensions to the standard. The RFC acknowledged that, quote, “The Domain Name System (DNS) has become a critical operational part of the internet infrastructure, yet it has no strong security mechanisms to assure data integrity or authentication.”

What the RFC proposed was to use cryptographic keys to generate a new resource record type called SIG, like signature, and that would accompany a regular resource record to show that it had been signed by a trusted zone. The private key would be used to sign resource records, and then a public key would be available for each zone that a client could then use to verify the signature of the SIG record. How does your client get the public key for a given zone? Through DNS, of course. Oh, dear.

The idea was that your client would have preloaded public keys for certain top-level domains—think com, edu, et cetera—and then it could query the authoritative servers for those domains to get the public key for any subdomains. And the high-caramba nature of DNS makes this type of chain of trust possible. It’s very similar to how your operating system and browser have a list of trusted certificate authorities, including their public keys, and when you need to verify the authenticity of a certificate, you have the public key of the signing CA, and that in turn is signed by an intermediate or root CA, and your system hopefully has the public key stored locally to verify that chain of trust.
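
The signing model is easier to see with a toy example. This sketch uses the third-party cryptography package to mimic the chain: the parent zone signs the child zone’s public key, the child zone signs a record, and the client verifies both hops starting from the one parent key it already trusts. It’s an analogy for the DNSSEC idea, not the real DNSKEY/DS/RRSIG record formats.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives import serialization

# Pretend key pairs for the parent zone (com) and the child zone (cucumbers.com).
com_key = Ed25519PrivateKey.generate()
cucumbers_key = Ed25519PrivateKey.generate()

cucumbers_pub = cucumbers_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# Parent vouches for the child: com signs cucumbers.com's public key
# (loosely what a DS record accomplishes in real DNSSEC).
delegation_sig = com_key.sign(cucumbers_pub)

# Child signs an answer: an A record for pickles.cucumbers.com
# (loosely the SIG/RRSIG idea).
record = b"pickles.cucumbers.com. IN A 203.0.113.7"
record_sig = cucumbers_key.sign(record)

# The client only needs com's public key ahead of time; verify() raises if
# either signature is bad, so reaching the print means the chain held up.
com_key.public_key().verify(delegation_sig, cucumbers_pub)               # hop 1
Ed25519PublicKey.from_public_bytes(cucumbers_pub).verify(record_sig, record)  # hop 2
print("chain of trust verified")
```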

Chris: Right, because otherwise, a website could pretend to be a different website.

Ned: Absolutely. Which is why when you get that big certificate error, you should probably not go to that website, depending on the error. Now, this proposal came out in 1997. Do you want us to guess how widely DNSSEC has been adopted? Just go ahead. Go ahead, Chris. Guess.

Chris: One hundred percent.

Ned: Wrong [laugh].

Chris: [sigh].

Ned: Not even close. There’s a map on APNIC’s website that tracks DNSSEC validation rates by country. The US stands at a—and this is actually not bad—34%. Incidentally—this is fun—Iceland is at 95%, which tells you how seriously Iceland takes naming. Just all naming, really.

Chris: They do use a lot of letters.

Ned: So, many letters. The most recent RFC for DNSSEC published in 2023 quotes adoption at 10% or less for website domain names. Yep, this thing has been around since 1997, and we’re at 10% adoption. Good job everybody [laugh]. They do rightly point out that although DNSSEC is still considered a best practice, in reality, most DNS implementations have decided that the juice just isn’t worth the squeeze.

Why? Honestly, probably because of the prevalence of HTTPS. HTTPS and the certificate system underlying it makes DNSSEC less useful because so many websites are already secured with a TLS certificate that they probably got from Let’s Encrypt. Even if you get an invalid response for a DNS query pointing you at the wrong website, the certificate for the bogus website should fail validation, so in a way, HTTPS supplanted the need for DNSSEC, at least kind of sort of. It’s good enough, leave me alone. Let me buy these damn pickles. I’m getting hungry, Chris.

Chris: I mean, one would have hoped we would have been a little bit more rigorous with a defense in-depth concept. Because if you can encrypt both sides and validate with signed keys, why wouldn’t you?

Ned: Yeah. Because people are lazy, and it costs money [laugh]. “This is hard. I don’t like it.” Part of it, if we’re being honest, is that the tooling for DNSSEC was never particularly friendly, and so the overhead of implementing it on your system was kind of high, and no one was really demanding it, so we didn’t do it.

Chris: Fair enough.

Ned: Okay. So, DNSSEC not widely adopted, but is available. What about encrypting your DNS traffic? The original draft of the DNSSEC RFC explicitly called out that DNSSEC was not meant to protect the communication channel used for DNS queries. And for the longest time, that channel was absolutely not secured. As I mentioned earlier, traditional DNS has used two protocols: UDP and TCP, both using port 53. Incidentally, that’s why the AWS DNS service is called Route 53. It’s cute, huh?

Chris: I remember when I figured that out. I was like, “Oh, I get it.”

Ned: Yeah, last week must have been great for you.

Chris: [shhr gzzz].

Ned: [laugh]. You walked right into it.

Chris: Yeah.

Ned: So, neither UDP nor TCP provides encryption of the data packets on its own. That’s up to some other layer in the stack. IPsec is one such option, and that was actually called out by the DNSSEC RFC as a possible solution, but as with the DNSSEC implementation issue, enabling encryption of the data needs to be worthwhile for the DNS resolvers, and a standard needs to be agreed upon and implemented by all the major players, or it will make no progress. So, people have to care, and then we have to have a standard that actually does it. For a really long time, nobody cared. Until they did.

What finally pushed the problem with clear-text DNS communication over the tipping point was probably the incredible amount of abuse being perpetrated by ISPs. You know, those companies that people universally despise? This is fun—and you know, hometown heroes Comcast call-out here—according to Yahoo Finance last year, Comcast is number seven of the twenty most hated companies worldwide, beating out such stinkers as FTX and Equifax. So, like, impressive. Well done.

Chris: Quite. I mean, they used to be, like, number one or number two. So, that’s progress, I guess. Or is it just that everybody else is getting terribler?

Ned: I wasn’t going to say that, but yes. If you look at one through six, you’re like, “Oh, yeah.” TikTok’s in there [laugh]. Speaking of Yahoo, you know who bought Yahoo in 2017? Verizon. An ISP. Why the hell would an ISP buy Yahoo? Data. That’s why they bought Yahoo. Customer data.

ISPs are not content to simply provide fast internet access, although they don’t do that very well, either. They have bigger dreams, Chris. One way they can realize those bigger dreams is by selling your data to the highest bidder, and then to everyone else who has a few spare coins jangling around. And since they can see all of your traffic traversing the network, they can inspect every single DNS query to see what sites you frequent, and what queries you’re sending out. Even if you aren’t using the ISP-provided DNS servers—which we very much recommend you do not—it’s all still sent in clear text across their network, so they can spy on you. And they do. A lot.

Sometime in the mid-2010s, we sort of became aware of this and found out we didn’t like it, and companies like Google and Cloudflare also took notice that people were getting up in arms, and they wanted to make things better. Better for whom? That’s debatable. Remember, advertising company Google—we’ll maybe come back to that at some point—but the point is, they took the initiative and tried to make things better. While TLS kind of did an end run around DNSSEC, there was no such option for encrypting DNS queries, and so we had two competing standards. There are actually, like, six more, but I don’t want to talk about them. The two standards are DoH and DoT. It’s like VHS versus Beta all over again. And if, listeners, you don’t get that reference, maybe Blu-ray versus HD DVD. No? USB-C versus Lightning port?

Chris: That’s the one.

Ned: Okay. We’ve hit our demographic. DNS over TLS, or DoT, was first proposed in RFC 7858 in May of 2016, and the core idea was to use TLS-based encryption to prevent eavesdropping and tampering of DNS queries and responses. There are a few caveats with this implementation that I think are worth bringing up. First, traditional DNS uses UDP by default, only falling back to TCP if absolutely necessary.

UDP is stateless in nature, which makes it really, really fast for communications that are like DNS queries. TCP requires a three-way handshake and session management. The addition of TLS over TCP adds more overhead to the session because of the TLS components. So, DNS over TLS is going to be somewhat slower than UDP, and require more resources on the client and server. Especially for a DNS server handling a large number of clients, that overhead will become noticeable.

The second big thing is that the standard port for DNS over TLS is port 853. Traditional DNS uses port 53, as we’ve discussed. Port 853 is not a well-known port, most servers are not listening on it, and most firewalls won’t allow it. Adding a new well-known port will require a lot more work than just upgrading some DNS servers. So, you can guess how widely DNS over TLS has been adopted.
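
Here’s a similarly rough sketch of what the DoT version of that query looks like—the same hand-rolled DNS message as before, but carried inside a TLS session on TCP port 853, with the two-byte length prefix DNS uses over TCP. The choice of 1.1.1.1 and the one.one.one.one hostname for certificate validation are example assumptions about one public resolver that listens on 853.

```python
import socket
import ssl
import struct
import secrets

def build_query(name: str) -> bytes:
    # Same minimal DNS query as the plain UDP sketch: ID, RD flag, one A/IN question.
    header = struct.pack(">HHHHHH", secrets.randbelow(65536), 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

ctx = ssl.create_default_context()
with socket.create_connection(("1.1.1.1", 853), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname="one.one.one.one") as tls:
        query = build_query("pod.chaoslever.com")
        tls.sendall(struct.pack(">H", len(query)) + query)   # DNS-over-TCP length prefix
        length = struct.unpack(">H", tls.recv(2))[0]
        print(tls.recv(length).hex())   # same answer, but encrypted on the wire
```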

Chris: One hundred percent.

Ned: One hundred percent of zero, absolutely. The other solution is DNS over HTTPS, or D’oh. Ay caramba. This uses HTTPS for DNS queries. Super confusing, I know. This one was proposed in 2018 by Mozilla and ICANN in RFC 8484. There’s not a lot to say here other than it works exactly like you would expect it to work. The client establishes a TLS session over TCP with the desired DNS server, it sends an HTTP GET request with the query, and the DNS server will respond with the necessary information.

And at first glance, it seems like DoH is adding the overhead of HTTP for no real benefit. For instance, are you expecting low-level clients to implement an HTTP stack just to do DNS queries? But there is a big benefit of DoH over DoT, and that is the use of the well-known port 443 for communications. It does require servers to be upgraded to support DoH, but it doesn’t require anything else in the path to change. So, DoH is ideal for internet-bound queries and traffic, and since browsers already have an HTTP stack, and web servers are already listening on port 443, you can guess which standard won out.
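
And a sketch of the DoH flavor, using nothing but the standard library. This one leans on Cloudflare’s JSON variant of the API (application/dns-json) purely to keep the example short; the endpoint, query parameters, and response fields are assumptions about that particular service, while RFC 8484’s binary wire format over GET or POST is the actual standard.

```python
import json
import urllib.request

# A DNS query carried over ordinary HTTPS on port 443.
url = "https://cloudflare-dns.com/dns-query?name=pod.chaoslever.com&type=A"
req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})

with urllib.request.urlopen(req, timeout=5) as resp:
    answer = json.load(resp)

# Print each answer record: name, TTL, and the address it points at.
for record in answer.get("Answer", []):
    print(record["name"], "TTL", record["TTL"], "->", record["data"])
```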

Chris: One hundred percent.

Ned: Well yeah, I mean, pretty close. The introduction of QUIC, which uses UDP instead of TCP, adds a new form of DoH called DoH3, or DNS over HTTP/3, and the main thing here is the use of UDP to carry the TLS session, which should lower latency and response times. We don’t have to get into QUIC here because I am, wow, already way over our normal time limit, and honestly, that could be its own episode, but suffice to say, DoH3 gets us closer to traditional DNS-over-UDP response times.

For internal and private DNS, snooping might not be as much of an issue, so using traditional DNS could be fine, or you can implement DoT or DoH internally. I looked it up and Windows Server 2022 supports both, which I didn’t realize. That means you can run Active Directory, and it will keep working as expected. If you’re a home user, and you want to start using DoH, you probably are already using it, depending on your browser. Mozilla was heavily involved in establishing the standard and Firefox started implementing DoH and making it the default in 2019.

As far as I can tell, Chrome did the same around the same timeframe. I should mention that just because you’re using DoH, that doesn’t mean that no one is harvesting your DNS query data. Your ISP might not be snooping anymore, but you can bet your ass that Google is, assuming that’s the DNS service you’re using for resolution. There are some privacy-focused DNS providers out there like Cloudflare. I think it’s 1.1.1.1. They allegedly do not record anything about your queries, so you can manually configure your browser to use those servers if it’s not already configured that way. And that finally brings us to Microsoft’s Zero Trust DNS concept.

Chris: Oh, that’s what we were talking about.

Ned: [laugh]. And, yeah, guess what? We’re already at time. So, this is going to be a two-parter. Next week, we’re going to look at Microsoft’s history with DNS, their failed attempt at a DNS alternative that I had to deal with for entirely too long, and what ZTDNS actually is. Whoo.

Hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end, so congratulations to you, friend. You accomplished something today. Now, you can go sit on the couch, fire up your browser, and read all of those cool RFCs that I referenced. You’ve earned it. You can find more about this show by visiting our LinkedIn page, just search ‘Chaos Lever,’ or go to the website, pod.chaoslever.com, where you’ll find show notes, blog posts, and general tomfoolery. I might post this whole thing as a blog post because it’s a lot, and it was kind of fun. We’ll be back next week to see what fresh hell is upon us. Ta-ta for now.

Ned: Ay caramba?

Chris: [Havanagila 00:41:21]?

Ned: [laugh]. Huzzah.