
> This is the very problem of certificates, because self-signed was never trusted and is not an option.

What I don't get, with DNSSEC getting rolled out more and more, why can't every site operator simply stick the fingerprint of their HTTPS cert in DNS and have DNSSEC take care of the trust chain?



There are a bunch of reasons why this doesn't work, but first among them is this: This would require browsers to make DNSSEC queries, and if those queries were blocked or otherwise failed, the browser would have to fail closed and present a certificate error. (The alternative, failing open and allowing users to visit a site if the browser doesn't get a response to the DNSSEC query, would be completely insecure, because the whole point of HTTPS is to defend against MITM attacks, and an MITM could simply block all DNSSEC traffic.) This is impractical because there are a bunch of existing firewalls and middleboxes that don't understand DNSSEC and that (contra Postel's Law) block DNS traffic that they don't understand; implementing this change would be cutting off everyone behind those firewalls and middleboxes from the Web.

There are also a lot of other objections to it, including that it's too centralized (see https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), but that's the basic reason why it can't be deployed.


Contra GP, could you clarify how DNSSEC itself is even important in this, though? As far as I can tell, nearly all of my typical stack ultimately comes down to my domain. If my registrar, my account with them, or my DNS is compromised, that's it. Let's Encrypt will cheerfully issue all the certs hostile attackers want. They'd have the ability to get most new emails (except for PGP/custom S/MIME) and in turn most accounts as well, except for those with out-of-band pre-shared keys required (SSH, WebAuthn sites, maybe SMS). "Required", because realistically, with that level of access, many services that claim to require 2FA could probably be socially engineered around.

So I guess I'm confused how

  DNS records hold a public key rather than _acme-challenge
would be any worse? Is it that while browsers may not be able to consistently get DNSSEC, Let's Encrypt consistently can for those that use it so it provides a weak bit of OOB workaround? I can see how that might be of some value, but not sure it's worth the trade off. I hope there's at least a concerted effort to someday secure DNS in general, thus making CAs (including LE) obsolete unless they do something beyond domain checks.
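For concreteness, here is what the two shapes being compared might look like in zone-file form (all names, tokens, and digest values here are placeholders; the pinned-key variant shown is DANE's TLSA record type from RFC 6698, the closest existing analogue):

```
; ACME DNS-01: a short-lived TXT record proving domain control to the CA
_acme-challenge.example.com.  300  IN  TXT   "gfj9Xq...Rg85nM"

; DANE-style pinning: publish the certificate (or key) digest directly
_443._tcp.example.com.        300  IN  TLSA  3 1 1 8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1
```

Both hinge on control of the same zone; the question in this thread is who validates the record, and over what path.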


You seem to be looking at it from the point of view of attacks against the domain owner.

If someone gets control of your domain, then yes, there really isn't much difference between a system where a public key in your domain's DNS records is used by clients to tell they are talking to your sites and a system where a certificate issued by LE is used for that.

As you note, once they have control of your domain they can get new certificates from LE for it.

But attacks against the domain owner aren't the only kind of attacks. There are also attacks against the domain's visitors.

Suppose Bob has an old router at home with out-of-date, buggy firmware that I know a remote exploit for, one that allows me to change his DNS settings. I change them so that instead of using the DNS servers it gets from Bob's ISP via DHCP to resolve DNS requests from the LAN, the router uses my DNS server.

My DNS server points your domain to one of my sites instead of yours, where I run my nefarious fake version of your site.

Under the public key on DNS server approach, it's my DNS server, so I can put a public key on it that corresponds to my keys on my fake version of your site, and everything will check out from Bob's point of view.

Under the LE approach, this does not work. To get LE to issue me a certificate for your site, I would have to get LE itself to use a DNS server I control when resolving your domain.

That's the difference. Under the current system, to hijack your site without triggering certificate warnings, I need to compromise your system, your DNS provider, your domain registrar, your ISP, or your hosting provider. Under the keys-in-DNS approach, I just need to compromise individual end users.


Thanks for taking the time to flesh that out! I was guessing something like that might be it in a neighbor post, so automatic web CAs at this point are essentially a bit of a hack/workaround for a common inability of clients to securely verify DNS itself? And from reading, it looks like there are plenty of ways to do that (from DNSSEC to DoT/DoH, or even, for low-sophistication attacks, simply hard-coding a DNS server rather than depending on DHCP, though some of these themselves create other headaches), but those ways are far from universal and in some cases could conflict with other network middlemen?

It definitely makes sense, and I completely understand how this sort of workaround can be necessary to get stuff done. But it's also too bad, and I hope there can be a push towards securing DNS sooner rather than later, as it really should be the minimal source of trust in most cases. Having DNS be reliable for that, with widespread support, would open up a ton of additional new possibilities too, and not just for public websites.


Are you saying, there exists some goofball who filters DNSSEC traffic, therefore DNSSEC isn't possible? If that's how you want to play it then Google now has a moral duty to implement NAPTR and SRV too. That way fallback and load balancing can happen in the Chrome client. Thus folks won't have to pay for BGP or VIP privilege anymore to be able to serve internationally.


IIUC the Chrome developers ran experiments to determine how many users would break if they rolled this out, and the number was above whatever they consider an acceptable threshold.


Which L Team member came to that determination? Would he or she like us to help by setting up public RRDtool monitoring of the policy impact on Internet properties?


> This is impractical because there are a bunch of existing firewalls and middleboxes that don't understand DNSSEC and that (contra Postel's Law) block DNS traffic that they don't understand; implementing this change would be cutting off everyone behind those firewalls and middleboxes from the Web.

Funny how this was never an argument when HTTPS was forced on everyone (to solve the problem with JavaScript being insecure), despite almost the exact same reasons holding up with middle-boxes, proxies and the like.

Now that there is an alternate, much simpler HTTPS-solution everyone is "oh noes that would break corporate environments". What's different this time around? Why is breaking things bad this time?

If we could just sit down, and all agree that the only reason we "need" HTTPS everywhere is because running JavaScript in a browser represents a security-issue, the quicker we can start solving the real problem: JavaScript as a default-permission.

The quicker we can phase that out from web-pages (which honestly more often than not, simply represents documents), the better.


> Funny how this was never an argument when HTTPS was forced on everyone (to solve the problem with JavaScript being insecure),

This misunderstanding is where your argument went off the rails: people wanted HTTPS because they are sending data over networks which aren’t perfectly trustworthy. Starting in the late 1990s there was strong consumer demand to feel safe entering credit cards and other personal information even if the site used no JavaScript - our household-name clients expected it even if they only used JS for mouseover effects. That was before WiFi became common, too, and people realized the risks of letting everyone at Starbucks see their information.

DNSSEC has no equivalent because it took so long to be adopted that it offers no meaningful user benefit over HTTPS, and it’d need to be a substantial win to balance out the poor user experience and implementation challenges.


But most of the websites today are not sending sensitive data back and forth, they are simply sending public data from public websites.

And here the HTTPS crowd’s argument is that an evil attacker can inject malicious JS if MITMed, and this, according to them, is why everyone must have HTTPS for everything. No middle ground.

But if JS was not a default permission, that wouldn’t be a problem, and plain HTTP would go a long way for most people, without risking stuff like Heartbleed and insecure servers due to an overly complicated and often vulnerable crypto stack.

Edit: I don’t see why people think this is a crazy suggestion. We used to treat MS Word documents the same way we treat web documents now. That is, once opened, the document had the permission and capability to run any embedded script it wanted.

In the end everyone realized this was a bad idea, and now Word-documents have to request the script-permission from the user before scripts are allowed to run.

The result? MS Word-based worms and malware were almost eliminated overnight.

And I’d like to see anyone argue that we should give HTML-email script-permissions by default. You know why that is a bad idea.

So why wouldn’t we want the same for the web? Why shouldn’t the web be safe by default? Why should we have to accept scripts we haven’t authorised?


Making 99.9% of websites ask for js permissions, and often break if the user clicks no, is just adding unneeded friction for the huge majority of cases. “Do you want to accept X” is also a terrible security boundary since most users won’t be able to tell when it is reasonable vs an attack scenario.

The HN community loves static sites and the old web. But this is a tiny tiny tiny minority.


> Making 99.9% of websites ask for js permissions, and often break if the user clicks no, is just adding unneeded friction for the huge majority of cases.

That problem is solved by making sites avoid needless JS.

Today I can browse almost every major news-site without JS and they won’t visually break. Same for documentation. It’s all documents after all.

If I enable JS they load megabytes of scripts and still act just the same. What did those scripts bring me? Increased CPU load and lowered battery life.

What is wrong with not wanting that as the default?


You want that as a default. You aren't wrong for your own needs. But your needs are different from other people's. The large majority of web usage is not interacting with documents. It is interacting with applications.

"I wish things were like the 90s, but with broadband" is a constant refrain among techies but go talk to the rest of the world and it is absolutely not what people want.


> But most of the websites today are not sending sensitive data back and forth, they are simply sending public data from public websites.

Really? Most websites don't have login systems? Don't ever have content you might not want everyone to know you're reading? Don't allow you to post messages you might want to keep separate? Don't allow you to search for things you want to keep private?


> But most of the websites today are not sending sensitive data back and forth, they are simply sending public data from public websites.

I disagree. Most websites include some sort of login system, at least for the editors to use (e.g. Wordpress). And the few that don't, often include data that doesn't need to be private, but should be signed (e.g. GPG key fingerprints).


What you're talking about is DANE.

RFC 6698

https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...

I wrote a short explanation of how it works a while back.

https://www.metafarce.com/adventures-in-dane.html

The short answer is that DANE has been seeing deployment success for SMTP, but has yet to see much deployment success for HTTPS.

https://stats.dnssec-tools.org/


I'm one of the few weirdos that actually validates DANE for HTTPS using some software I wrote called Danish.

Once in Python

https://github.com/smutt/danish

And again in Rust.

https://github.com/smutt/danish-rust

An old explanation can be found here.

https://www.middlebox-dane.org/

Both middlebox-dane.org and metafarce.com have DNS TLSA records that match their certificates from Let's Encrypt. You can do this, but almost no one does.
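As a sketch of how those TLSA values relate to the certificate (my own illustration, not the Danish code): for usage 3 / selector 0 / matching type 1, the record data is just the SHA-256 of the full DER certificate, which Python's standard library can compute. The widely recommended "3 1 1" form hashes only the SubjectPublicKeyInfo, which needs ASN.1 parsing, so it's omitted here.

```python
import hashlib
import ssl

def tlsa_3_0_1(pem_cert: str) -> str:
    """Build the RDATA of a DANE-EE TLSA record (usage 3, selector 0,
    matching type 1): the hex SHA-256 digest of the full DER cert."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)  # strips PEM armor, base64-decodes
    return "3 0 1 " + hashlib.sha256(der).hexdigest()
```

Publishing that string at _443._tcp.<host> (on a DNSSEC-signed zone) is what lets a DANE-aware client cross-check the certificate served in the TLS handshake.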

There is some pickup in The Netherlands where I live. For example, www.digid.nl has deployed DANE for HTTPS.


> ... DANE has been seeing deployment success for SMTP, ...

Since "DANE for e-mail" is still relatively new / obscure, for the uninitiated, RFC7672 [0] describes "SMTP Security via Opportunistic DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS)", which requires DNSSEC -- and, as a result, comes with all of its associated disadvantages / trade-offs.

An alternative, more recent method of providing authenticated and encrypted delivery of e-mail is described in RFC8461 [1], "SMTP MTA Strict Transport Security (MTA-STS)", which uses DNS and HTTPS -- and, thus, relies on CAs / PKIX. MTA-STS doesn't require DNSSEC but it is then susceptible to "malicious downgrades".
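For the uninitiated, an MTA-STS deployment is just two artifacts (example.com and the id value here are placeholders): a TXT record announcing that a policy exists, and the policy itself served over HTTPS at a well-known URL.

```
# DNS TXT record announcing a policy (bump "id" whenever the policy changes):
#   _mta-sts.example.com.  IN  TXT  "v=STSv1; id=20240101000000Z;"
#
# Policy file served at https://mta-sts.example.com/.well-known/mta-sts.txt:
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```

In enforce mode, sending MTAs refuse delivery unless the recipient MX matches the policy and presents a valid PKIX certificate; because the fetched policy is cached for max_age seconds, trust is bootstrapped without DNSSEC, though the very first contact remains downgradeable, as the parent notes.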

Although we're not going to see all e-mail being encrypted in transit anytime soon (if we ever do), it's good to know that there are folks working on solutions, including at some of the largest e-mail providers.

---

[0]: https://tools.ietf.org/html/rfc7672

[1]: https://tools.ietf.org/html/rfc8461


MTA-STS doesn't so much "not require" DNSSEC as explicitly work around ever needing it; that's stated as one of the goals of the protocol in the RFC.


> What I don't get, with DNSSEC getting rolled out more and more, why can't every site operator simply stick the fingerprint of their HTTPS cert in DNS and have DNSSEC take care of the trust chain?

This question is extremely relevant given how DNS-based authentication is the preferred way to authenticate with letsencrypt.

If DNS is good enough for letsencrypt to validate you and give you a cert, why can't DNS be good enough for the browser alone, without needing to rely on some third party service?

Letsencrypt has certainly made HTTPS easier in practice, but we really need to get rid of the whole CA-model entirely.


This was proposed already in 2012 with RFC 6698: DNS-Based Authentication of Named Entities, DANE.

Hasn't really gone anywhere of importance.

I personally think there's been a few factors as to why:

* Distrust of DNSSEC and centralized authority.

* Lagging DNSSEC deployment, not just in DNS but also in clients and applications.

* Needs another DNS lookup on connection to validate certs, adding lag. I think it probably needs stapling support in some form, just like OCSP stapling for cert validity checking.

There have also been follow-up RFCs detailing its use for SMTP, SRV records, and PGP.
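The extra lookup mentioned in the last bullet targets a name derived from the port and protocol (RFC 6698, section 3); a minimal sketch of constructing it:

```python
def tlsa_qname(host: str, port: int = 443, proto: str = "tcp") -> str:
    """Owner name for a TLSA lookup: the port and protocol are prepended
    to the host as underscore-prefixed labels (RFC 6698, section 3)."""
    return "_{}._{}.{}".format(port, proto, host.rstrip("."))
```

So an HTTPS client checking example.com would query TLSA for _443._tcp.example.com before (or while) completing the TLS handshake - the extra round trip the bullet refers to.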


A DKIM-like service? For emails I find it very convenient: publish your public key with DNS and sign outgoing emails; outgoing emails are now "certified" as coming from your server and the recipient can easily verify it, no CAs involved.
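For reference, the DKIM key record is an ordinary TXT record under a sender-chosen selector (the selector, domain, and truncated key here are placeholders):

```
; published by the sender; the DKIM-Signature mail header names it via s= and d=
s2024._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg...IDAQAB"
```

The receiver looks up <selector>._domainkey.<domain> and verifies the message signature against that key - which, as noted, still leans on plain DNS unless the zone is DNSSEC-signed.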


Let's Encrypt doesn't have to worry about firewalls and middleboxes blocking their DNS queries. Browsers do.


Yes, and the browsers' solution has been to tunnel those queries over a protocol that middleboxes cannot mess with, because losing the DNSSEC fields is the lesser of the problems they cause.


What Letsencrypt does (and all other providers of certificates to end users do) is verify that you control the domain you are requesting a certificate for.

That's well and good, but who are those Letsencrypt guys, anyway? That's why there is a chain of trust and Certificate Authorities.

You, and your browser, cannot know everyone out there so you pick a few 'authorities' that are well-known and deemed serious enough to be trusted and go from there.

DNSSEC operates on the same principle of a chain of trust from root.


But isn't the point here (and a source of confusion I share with all the other posters) that DNS is the actual root of trust? Heck, forget DNSSEC even, because it's not like Let's Encrypt demands it. You say that LE verifies

>that you control the domain you are requesting a certificate for

but it mostly seems to do it the exact same way any random person browsing to an HTTP site would: it checks the DNS records. It's not like there's an out-of-band channel here where they actually verify business records separately or something. So what exactly is the value of the middleman in this? Why not just have public sigs in the DNS records too? When DNS is the root of trust anyway and "Certificate Authorities" are reduced to fully automated systems, what value do "Certificate Authorities" even bring anymore in this context? It seems like a hack from a time when CAs were expected to provide some out-of-band verification that would actually be useful. But on the web that mostly hasn't happened or has been rendered moot (witness the death of EV).

I can see how central CAs could still be useful in many other circumstances. But for anything where control of DNS directly correlates with control of the entire chain, it seems like it'd be a lot better to just decentralize into DNS.


The value of certificate authorities is to be the roots of trust, as said.

The exact same applies to using DNS as chain of trust. You have to start with a well-known root of trust because it's impossible to know all the DNS servers or registrars out there. In fact that's exactly how DNSSEC works.

It seems that the question is whether DNS and HTTPS certificates are converging to provide the same service. Perhaps, though I'm not sure, but that wouldn't change the system fundamentally.


>The value of certificate authorities is to be the roots of trust, as said.

But they aren't; DNS is. That's my question. If someone controls my domain, they can point it wherever and get all the Let's Encrypt CA-signed certs they want. So how exactly is the CA being a root of trust there if the CA itself is basing trust on domain control? In a neighbor comment it seems that maybe the CA is basically acting as a hack to bypass an inability of clients to check DNS? I can see why that would have some practical value in the near term, but it'd be good to do away with it as soon as possible. Apple/Google/Microsoft (and maybe Mozilla) may be in a position to do so if no one else is.


This is another question.

You're discussing how to prove to Let's Encrypt, or anyone else, that you are the legitimate owner of a domain.

That does not mean that I know or trust Let's Encrypt. The root of trust is an entity I know and trust and which can vouch for Let's Encrypt, which can in turn (or not) vouch that you are the legitimate owner of a domain.

The same applies to DNSSEC. The root of trust being the root servers.


> What Letsencrypt does

Letsencrypt is run by Mozilla. So let's instead make that:

> What Mozilla does ... is that you control the domain you are requesting a certificate for.

So Mozilla trusts me based on me controlling the DNS of my domain, and can issue me a security-token which Mozilla's browser (and other browsers) then can trust.

Note: It does not trust me or verify me to be non-malicious, honest or have good intentions of any kind. It merely verifies that I control the DNS of the domain I request a certificate for. That's it. That is all.

So why can't Mozilla's browser, instead of trusting these magic tokens, just verify that the cert for the site matches the cert pinned in DNS, thus validating that the site operator is indeed in control of the DNS of his domain?
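The check being proposed is mechanically simple; a hedged sketch (my own illustration, assuming the DNS-pinned value is a DANE-style "3 0 1" hex SHA-256 of the full DER certificate, and that the DNS answer itself is DNSSEC-validated):

```python
import hashlib
import hmac

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the certificate presented in the TLS handshake against a
    digest pinned in DNS. hmac.compare_digest gives a constant-time
    comparison; for a public digest that's habit more than necessity."""
    observed = hashlib.sha256(der_cert).hexdigest()
    return hmac.compare_digest(observed, pinned_sha256_hex.lower())
```

The hard part, as the replies point out, isn't this comparison - it's getting the pinned digest to the browser untampered through every middlebox on the path.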

It will be exactly the same validation, except now we don't have to rely on a third-party service to do that for us. And then the web can finally be autonomous and decentralized again.


> Letsencrypt is run by Mozilla.

Let's Encrypt is a service of ISRG, a public benefit corporation https://www.abetterinternet.org/about/

Some key ISRG people were from Mozilla, but so what?

> So why can't Mozilla's browser simply instead of trusting these magic tokens, instead just verify that the cert for the site matches the cert pinned in DNS

Poor deployability and interoperability. Now your browser mysteriously doesn't work on a lot of the world's networks and the site doesn't work with other browsers.


Could you please point to a tutorial/howto on this?

EDIT: why on earth was this question downvoted? It was genuine, I honestly don't know how to do it.


In theory there is the CERT DNS record type that could be used, but unfortunately no OS/lib uses it, as prior to DNSSEC there was no way to prevent the CERT record from being manipulated/intercepted :/



