Some cars automatically lock their doors while driving in order to prevent hijackings, especially at red lights.
The same cars have issues with drivers being locked inside if the car goes into the water. That is a bug.
Is the solution to abandon locks on cars, or is it to fix the problem of the car doors staying locked when submerged in water? The security system is by now fairly advanced, and addressing issues with accidents is a real problem. No one, however, would sell cars without locks.
If we follow this analogy further, why should we keep the concept of car doors if particular car locks can be made with bugs in them? Doesn't the possibility of bugs in locks mean that there will always be a risk, even if we abandon specific locks that have been demonstrated to have a bug in them?
Solutions exist within problem spaces, or "paradigms": a car lock is only useful for a specific set of things, i.e. deterring thieves or unwanted entry, e.g. at stop lights.
What I really want is a force-field to keep people and highway debris out while driving around, and a secure storage solution while not in operation.
If you could sell me a force-field car with a retracting tent, and it were safer/had fewer issues, I might not care to have locks on my car.
See also - door-less/open air vehicles. Car door locks have some pretty serious issues. There are some problems they just can't solve.
Doors are part of the structure of the vehicle and must stay closed to do their job in a collision. To my understanding this is the primary if not sole reason doors lock after a few seconds in motion.
I'm not really a fan of DNSSEC, but in fairness to the analogy - car keys really do make life harder if you need to rush someone to the hospital and don't have the keys.
In a sense, yes, but not so much so that DoH is a dispositive argument for deprecating it.
The difference is that DoH protects transactions and DNSSEC protects the authenticity of records. It's perfectly possible for a DoH server to feed you bogus cached records; you have to trust the DoH server you're talking to, where you wouldn't have to do that if all the records on the chain of lookups you're doing are signed with DNSSEC, and you're running DNSSEC validation on your local system rather than a stub resolver that talks to a full resolver you have to trust (this is an uncommon set of circumstances, and in practice you have exactly the same server-trust problem with DNSSEC that you do with DoH).
Muddying the waters further, the attacks DNSSEC protects against overlap with the ones DoH protects against, so that if the whole Internet managed to switch to DoH, you'd have built, bottom-up, 95% of the security feature DNSSEC is attempting to provide (DoH has massively better deployment stats than DNSSEC, so this is plausible).
It's better to think of DoH as one of a catalog of different arguments that together make a clear case for sticking a fork in DNSSEC and calling it done.
I think the best argument for DoH being the killing blow to DNSSEC is that not only does DoH overlap a lot with the security features DNSSEC is trying to be the solution for, but DoH also gives you arguably more important security features that DNSSEC does not and cannot reasonably provide.
This is particularly true if your security model also includes things like TLS to secure your communications with whatever domain you just resolved. In that scenario, the features DoH provides that DNSSEC does not (e.g. lookup confidentiality) are still quite useful, while the 5% of DNSSEC use-cases that DoH doesn't cover are essentially redundant if not better provided elsewhere in your protocol stack.
Where what matters? On-path DNS attacks occur everywhere across the Internet, and are probably more common on the lookup side and at the edges. Certainly, the use of DoH to protect authority transactions isn't common, yet!
The most devastating attack, and the one I am primarily worried about, is someone obtaining a TLS certificate for my domain via services like Let's Encrypt.
Thus, I really care about LE getting the right IP; I don't care about random users' DNS getting hijacked, because their browser will reject the missing/invalid certificate.
Let's Encrypt does validate DNSSEC signatures (when they exist), but CAs aren't even required to do that. Let's Encrypt also does multi-perspective lookups, so a single hijacked DNS transaction or poisoned cache is insufficient to trick it. Hopefully, at some point in the not-too-distant future, LE's multi-perspective lookups will generate data we can look at about the frequency of DNS attacks on certificate issuance. I expect it to be quite rare, because it's an elaborate attack with a weak payoff.
The right way to think about this CA issue is this:
* The largest, best-funded, savviest security teams in tech are, like the rest of tech, not signing their domains; the major TLDs are overwhelmingly not signed (there is low single digit uptake in .COM for instance, and what's there is overwhelmingly not big companies but rather random domains signed by registrars that auto-sign). Nobody who's actually targeted for CA misissuance attacks uses DNSSEC to mitigate that threat.
* The WebPKI already has a system in place to guard against misissuance that, unlike DNSSEC, actually does work: Certificate Transparency. So if you're actually concerned about CAs issuing bogus certs for your domain, match your revealed preferences to your stated ones and set up CT monitoring.
Subscribe to CT logs for your domain, and you will know if this ever happens, from LE or from any other authority. Such an attack is a very big deal; if it happens, it will only happen once, because whichever hole was used will be closed. And if it does not happen, you can rest assured your TLS is safe.
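For concreteness, here is roughly what the simplest form of that monitoring can look like. This is a minimal Python sketch, assuming crt.sh's public JSON interface; the domain is just a placeholder, and a real setup would use a proper CT monitoring service or a scheduled job rather than an ad-hoc poll:

    import requests

    DOMAIN = "example.com"  # placeholder: your own domain

    # crt.sh exposes CT log data as JSON; "%.example.com" also matches subdomains.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{DOMAIN}", "output": "json"},
        timeout=30,
    )
    for entry in resp.json():
        # Any certificate logged by an issuer you don't recognize is worth investigating.
        print(entry["entry_timestamp"], entry["issuer_name"], entry["name_value"])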
Meanwhile if DNSSEC's vision is ever fully realized, you will lose that control entirely. There is no CT there, and even if it were built somehow, it would be useless, as it has no "teeth".
> Meanwhile if DNSSEC's vision is ever fully realized, you will lose that control entirely. There is no CT there, and even if it were built somehow, it would be useless, as it has no "teeth"
This is a false dichotomy. DNSSEC secures DNS records, it doesn't prevent logging certificate issuance.
I think you misunderstood the claim: say the U.S. government leaned on the .com DNS server operators to issue a different response for, say, Gmail.com to certain requesting IPs. The absence of a mechanism like CT makes that very hard to detect since everyone else in the world is going to see the same correct response, and there’s no reason for the target’s DNS resolver to question a response with a valid DNSSEC signature, and since DNSSEC has no UI there’s not even a way for the user to notice.
That matters because, as the person you were replying to explained, there’s no plausible way to build such a thing. We have CT because the browser developers insisted on it and they control the clients but DNSSEC doesn’t have an equivalent party with that kind of leverage.
They serve different purposes. DoH protects DNS information while in flight. DNSSEC cryptographically signs DNS records so they can be validated as being created by the owner of the domain. With only DoH, you can be assured of privacy in flight and that the response hasn't been changed in flight; however, you don't know that the records on the server you connected to have not been manipulated.
The motivating use case for "cryptographically signing DNS records so they can be validated as being created by the owner of the domain" was the protection of those records in flight, which is something DoH does.
A reminder that DNSSEC's "cryptographic security" coalesces to the single AD=true bit in the DNS header by the time DNS responses hit your browser; DNSSEC is a server-to-server protocol. So in almost all cases, save those in which nerds have run full recursers on their desktops, the server trust situation with DNSSEC is largely the same as that of DoH.
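To make that concrete, here is a minimal sketch (assuming the dnspython package and whatever resolver your system is configured to use) showing that all a validating upstream resolver hands you is that one flag:

    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()            # your system's configured resolver
    resolver.use_edns(0, dns.flags.DO, 1232)      # set the DO bit to request DNSSEC material
    resolver.flags = dns.flags.RD | dns.flags.AD  # ask the resolver to report validation status

    answer = resolver.resolve("example.com", "A")
    # The end-user-visible "cryptographic security" of DNSSEC is this single bit,
    # asserted (or not) by a resolver you have to trust anyway.
    print("AD:", bool(answer.response.flags & dns.flags.AD))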
> The motivating use case for "cryptographically signing DNS records so they can be validated as being created by the owner of the domain" was the protection of those records in flight, which is something DoH does.
According to the original DNSSEC RFC, RFC2065, the purpose of DNSSEC is to provide data origin integrity and authentication. It also states "In addition, no effort has been made to provide for any confidentiality for queries or responses. (This service may be available via IPSEC [RFC 1825].)" IPSEC didn't end up providing that role but DoH does. It doesn't address the issue of data origin integrity though.
Nice quote. I don't understand why so many people on HN have such a vocal dislike of DNSSEC and DoT. It's perfectly ok for DNSSEC, DoT and DoH to co-exist.
Let’s say I operate my own authoritative DNS servers and my own web server. Which I do.
With DNSSEC, I know that anyone asking for IP addresses of my web server will get the correct address, and in the future it may be possible to use TLSA records (and/or HTTPS records) so that the user’s web browser can be certain that it is connecting to the correct site, with the correct key. The only weak point is that the user might be using a DNS resolver which may be correctly checking the DNSSEC signatures, but the DNS replies, sent from the resolver to the user, are not signed, only “authenticated” by the AD bit. (This is where DoT – or even, blech, DoH – might actually help.) Or the user could run their own local resolver with no untrusted path between their device and their resolver, closing the gap completely.
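As a rough illustration of what such a TLSA record contains, here is a sketch (assuming the Python "cryptography" package and a hypothetical certificate file name) of how a "3 1 1" record's payload is derived: a SHA-256 digest of the certificate's SubjectPublicKeyInfo:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Hypothetical path to the site's certificate.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256).
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print("_443._tcp.www.example.com. IN TLSA 3 1 1", hashlib.sha256(spki).hexdigest())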
Contrast that with using no DNSSEC, only DoH (or DoT). The user asks the resolver for the IP address (and possibly HTTPS/TLSA records), and the resolver asks the authoritative DNS server for the information. But the whole conversation between the resolver and the authoritative DNS server is completely unsigned, and can be manipulated at will by any middlemen. Also, it is not even TCP (which is hard, but not impossible, to manipulate as a man-in-the-middle); DNS is most often UDP. But let’s say there is no manipulation here. How do you trust the identity of the resolver? Because they have a nice certificate from “Honest Achmed's Used Cars and Certificates”¹, which says that they are indeed the DoH resolver the user has configured? The same problem also applies to the web site certificate; how can you trust it, when any CA in the world can issue any certificate at will?
> The same problem also applies to the web site certificate; how can you trust it, when any CA in the world can issue any certificate at will?
Which is where you get to the crunch point though: this is not a digital problem, it's a social problem. The way you establish trust is by generating a suitable preponderance of evidence that someone is who they say they are.
On the internet we outsourced this to a couple of big players who broadly benefit from decent standards, but ultimately it is just "might makes right" - Google says they don't like you, and your business ceases to exist.
The trouble is that some of our standards in this area are complete junk and so widely implemented that they're near impossible to change - e.g. why CA certificates don't have a limited domain scope (technically they can, via the name constraints extension, but a lot of implementations don't check it, to the point that you can't rely on it at all).
IMO this is where open source and its anti-government bent has, to some extent, done a huge disservice to the internet. The vast majority of internet traffic that people need to be "no questions asked" secure is traffic that interacts with government or government-recognized entities and their associated legal systems. The identity people are trying to establish is most frequently "Are you under a legal jurisdiction providing me recourse if dealt with unfairly, and honestly representing yourself?"
Which incidentally is why I wish like hell the Signal project would create a paid enterprise offering to fund itself. I want my bank to use Signal to send me things, and I want some assurances around proving that's who it is.
No. They're completely unrelated. DoH/DoT are solutions for encrypting the traffic to hide it from the ISP; they do nothing to improve trust in the results (unless you trust the resolver and the CA they used to sign, but then that's the same world you were already in with TLS after the fact).
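For readers who haven't seen one, a DoH lookup is just an HTTPS request to the resolver. A minimal sketch, assuming Cloudflare's public JSON endpoint and the application/dns-json format (any DoH resolver that speaks that format would work the same way); your ISP sees only a TLS connection to the resolver, while the resolver itself still sees, and answers, the query:

    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    data = resp.json()
    # "AD" is the resolver's own claim that it validated DNSSEC; you still have to trust it.
    print("AD (resolver-asserted validation):", data.get("AD"))
    for answer in data.get("Answer", []):
        print(answer["name"], answer["data"])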
If your browser ignored all certificate errors, you'd have a real security problem. That's not at all the case for DNSSEC: it's possible that all of the DNSSEC root keys could hit Pastebin and nobody would really need to be paged.
Browsers did ignore most certificate errors back in the early 2000s. HTTPS sites were fairly rare, and most people did not care about it or even considered https to be a negative. Many administrators considered it bad technology that only increased instability with no obvious benefit. "Who cares about what people post to a forum?" was something I personally heard when I added https to one site. It was really only banks with plaintext passwords that needed https, and then external hardware devices made https obsolete for that problem.
For more fun diving into this topic, I can recommend a famous old presentation called "Everything you Never Wanted to Know about PKI but were Forced to Find Out", and the Godzilla crypto tutorial written by the same author (Peter Gutmann). Certificates in browsers have had a long history of problems and ill design. People did not like them, and they definitely did not like them when they caused major issues.
> Browsers did ignore most certificate errors back in the early 2000s. HTTPS sites were fairly rare, and most people did not care about it or even considered https to be a negative. Many administrators considered it bad technology that only increased instability with no obvious benefit.
I’m not sure what you’re basing that on but every claim is the opposite of my experience back then. Even in the 90s it was expected that you used HTTPS for any site selling things, for example, as the credit card companies would block a business who let numbers go over the network in plaintext.
Early on there were concerns about performance but that was mostly over by the turn of the century for all but large file transfers. The primary drawback was the cost of a certificate back then.
I recall those discussions well. Web stores did indeed often use https to protect credit cards. The argument, however, was that physical stores did not need similar protection, and that the issue really was the weak security of credit cards. HTTPS was an unstable solution to a problem which, people argued, should have been solved within the credit card system. Physical security devices were again held up as the future solution to this problem.
It should also be mentioned here that credit card numbers as a security token have actually slowly been phased out in favor of other forms of online payment systems, and many banks today implement additional security requirements if you pay with a credit card. The black market in stolen CC numbers, despite https use by web stores, used to be one of the biggest problems on the internet, so even with all the stores using https, it wasn't a solution to that problem.
I remember people talking about performance issues with https until the early 2010s. "Every single microsecond slower means reduced sales" was something people were very concerned about. I even heard it from people during an IETF meeting. It was talked about in a similar tone to how people today talk about SEO.
Yes, that’s why I found the assertion that it didn’t exist so odd since almost anyone who supported web sites or browsers back then was familiar with that dialog.