
It’s a glaring security hole, IMHO. I create such devices and the only way I know is self-signed certs, but the browsers complain a lot about that. Ideally there’d be a way to sign .local domains with browsers handling it while letting people know to verify the identity of their local devices/services and that the identity isn’t verified by https like most sites.

The issue lies between the browsers and the HTTPS system. SSH can do encryption without requiring identity verification. It handles it by asking "Do you want to trust this new server?". Then, if the key changes, it informs you of that. Browsers could easily implement that for .local with self-signed certs.
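The SSH-style flow described above (trust on first use, warn on change) is easy to express. A minimal sketch; the pin-store shape and return values here are illustrative assumptions, not any browser's actual behavior:

```python
import hashlib
import ssl

def cert_sha256(host: str, port: int = 443) -> str:
    """Fetch a server's certificate without CA validation and hash it,
    the way SSH reads a host key on first contact."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def tofu_decision(pins: dict, host: str, fingerprint: str) -> str:
    """Decide what an SSH-style browser prompt would do for a .local host.
    `pins` maps hostnames to previously accepted fingerprints."""
    if host not in pins:
        pins[host] = fingerprint   # first use: prompt the user, then remember
        return "trust-on-first-use"
    if pins[host] != fingerprint:
        return "identity-changed"  # the SSH-style nastygram
    return "ok"
```

The interesting part is that the second function is pure bookkeeping: all the browser would need to add is persistent storage for the pins and a UI for the two non-"ok" cases.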

Of course browser developers assume everyone has internet all the time and you only access servers with signed domains. I’ve wondered what it’d take to get an ITEF/W3C RFQ published for .local self-signed behavior.

(Edit: RFQ, not my autocomplete’s RTF)



> Ideally there’d be a way to sign .local domains with browsers handling it while letting people know to verify the identity of their local devices/services and that the identity isn’t verified by https like most sites.

For these types of sites we run a local CA, and sign regular certificates for these domains and then distribute the CA certificate to our windows clients through a GPO. When put into the correct store, all our "locally-signed" certificates show as valid.

In other instances, where I haven't been able to do that, like for disparate VPN clients and such I will generally assign a RFC1918 address to it. Like service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a letsencrypt certificate for that domain.
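For reference, a DNS challenge here means publishing a TXT record at `_acme-challenge.<domain>`; per RFC 8555 the record value is the unpadded base64url of the SHA-256 of the ACME key authorization (the key authorization string itself comes from your ACME client). A sketch of just that computation:

```python
import base64
import hashlib

def dns01_txt_value(key_authorization: str) -> str:
    """TXT record value for an ACME DNS-01 challenge (RFC 8555, section 8.4):
    base64url(SHA-256(key authorization)) with the '=' padding stripped."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Since Let's Encrypt only ever queries public DNS for that record, the A record for the name itself can happily point at an RFC1918 address that the CA can't reach.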


> In other instances, where I haven't been able to do that, like for disparate VPN clients and such I will generally assign a RFC1918 address to it. Like service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a letsencrypt certificate for that domain.

This is basically what I've been doing lately as well. I'll create a wildcard letsencrypt cert for .vpn.ourdomain.com and then point the subdomains to internal IPs. You can even set up a split-dns where it responds to the challenge txt records for letsencrypt, but only the internal side responds to requests under .vpn.ourdomain.com.


That works if you have control over the device/network. IoT devices usually don't work that way. It'd be nice to be able to ensure the passcode to the device isn't broadcast in cleartext HTTP.


> It handles it by asking "Do you want to trust this new server?"

Asking the end user to accept downgraded security is a huge security antipattern.

Also, if I’m operating an evil wifi AP at a coffee shop and I intercept your web request for bankofamerica.com with a redirect to bankofamerica.local, would HSTS prevent the redirect? Or could I then serve you a bad cert and trick you into accepting it?

Also, what sokoloff said makes a lot of sense. Encryption without authentication is worthless, and that cert chain only works insofar as someone at the top vouches for someone's identity. If that's your print server, then you are the one vouching for its identity. It makes more sense for you to be the certificate authority and just build your own cert chain.


If the browser correctly explained what you were doing, and warned you that this is an attack unless you are in control of the entire network and the machines on it, I don't see the problem.


What would it say?

“You’re connecting to an IoT device that has a worthless certificate. Would you like me to open up a completely pointless AES256 session with it and pretend that you have a secure connection?”

Just use HTTP.


The identity isn’t trustworthy, but as mentioned below there are ways to handle that with device IDs. Also, it’s not pointless to encrypt communication with a device whose identity you’ve verified in the past. It prevents attackers from later hijacking the device without the user knowing it. Just the same as the nastygram you get when your ssh server changes its private key.

You’re effectively claiming SSH is pointless and/or useless encryption, as it doesn’t use certificate chains to verify the URL/domain. Your argument is the same as saying that any devops ssh’ing into a new local server is pointless and they should just use telnet.


(Disclaimer: I'm not a security expert).

Ideally, I think, something like this: "You're trying to connect to a new device on your local network. To ensure security, please check that the device has a display or a printed label that says 'HTTPS certificate ID: correct horse battery staple couple more random words'." (Mobile devices may suggest scanning a QR code instead.)

I'm pretty sure that if at least one major browser vendor implemented something like this (denoted by a special OID on the certificate), IoT vendors would be happy to follow. Verifying a phrase or scanning a code is not a big burden, and it resolves the trust issues.

The fingerprint could be either from a private key generated on the device (for devices that have a display and can show dynamic content), or from the vendor's self-signed "CA" with special critical restrictions (no trust for any signatures unless individually verified, and signed certs only valid on what clients consider to be a local network) whose private keys are not on the device itself (for devices with printed labels, to avoid having the same private key on all devices).
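A phrase like that could be derived deterministically from the certificate itself. A toy sketch; the eight-word list is a made-up stand-in, and a real scheme would use a much larger wordlist so each word encodes more bits:

```python
import hashlib

# Purely illustrative wordlist; a real scheme would use ~2048 words.
WORDS = ["correct", "horse", "battery", "staple",
         "amber", "violet", "copper", "lunar"]

def fingerprint_phrase(cert_der: bytes, n_words: int = 6) -> str:
    """Turn a certificate's SHA-256 digest into a short human-checkable
    phrase, taking one word per digest byte."""
    digest = hashlib.sha256(cert_der).digest()
    return " ".join(WORDS[digest[i] % len(WORDS)] for i in range(n_words))
```

Both the device (for its label or display) and the browser (for the prompt) can compute the same phrase independently, so a user comparing the two is comparing certificate fingerprints without knowing it.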


It certainly sounds a lot better than simply asking browser vendors to give .local a pass on cert validity.

I’m still wary of any flow that would have browser users “accepting” a device as secure - could I impersonate that device on the local network? Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

Maybe another approach would be to build infrastructure (like protocols and client software) to make building a home cert chain easy? A Windows client that would let you create a root cert, install it in your cert store, and then give you server certs to hand out to devices? Give it a consumer-friendly brand name or something and get IoT vendors to add a front-and-centre option to adopt a new server cert.

Authentication isn’t a tricky problem; it’s the trickiest.


> I’m still wary of any flow that would have browser users “accepting” a device as secure -

It’s about accepting that communication with the device is secure, not guaranteeing that the device itself is secure. In reality you don’t know if your bank’s servers are secure or if they encrypt passwords properly, etc, but you do know it’s them and your communication isn’t tampered with.

> could I impersonate that device on the local network?

Not readily, with a device-specific ID check and TOFU (trust on first use) similar to SSH. If the device certificate was stored permanently for .local URLs like `my-device-23ed.local`, then anyone who tried intercepting or MITM’ing that device would cause the user to receive a "warning: device identity has changed, please check your device is secure" message.

Not having any browser support for .local certificate or identity "pinning" means that anyone who compromised your network (WiFi PSK hacking, anyone?) can impersonate a device and you’d not know it. Browsers forget self-signed certs regularly, if they let you "pin" the certificate at all. A hacker can intercept the .local URL (trivial) and use another self-signed cert. The user’s only real option is to blindly accept it whenever it happens. Then an intruder can MITM the connection to the device all they want. Is your router’s config page really your router? Who knows.
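Pinning of this sort is simple to state in code. A sketch; the host, port, and expected fingerprint are assumptions you'd get from a first-use prompt or a device label, not from any existing browser API:

```python
import hashlib
import socket
import ssl

def pin_matches(der_cert: bytes, expected_sha256_hex: str) -> bool:
    """Compare a DER certificate against a previously pinned SHA-256 fingerprint."""
    return hashlib.sha256(der_cert).hexdigest() == expected_sha256_hex

def connect_pinned(host: str, expected_sha256_hex: str, port: int = 443) -> str:
    """Connect over TLS and enforce the pin ourselves, since no CA vouches
    for a .local name. Raises if the device's identity has changed."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # no CA-backed hostname to check
    ctx.verify_mode = ssl.CERT_NONE  # verification happens via the pin below
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if not pin_matches(der, expected_sha256_hex):
                raise ssl.SSLCertVerificationError(
                    f"pinned fingerprint mismatch for {host}")
            return tls.version()
```

Note the ordering: `check_hostname` must be disabled before `verify_mode` can be set to `CERT_NONE`, and the MITM check then lives entirely in the explicit fingerprint comparison.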

> Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

No `.local` domain is allowed to have a normal cert or global DNS name. They could trick them on first use, but again, a device-specific ID on first use would make that harder to do. After trust-on-first-use, any access afterward couldn’t be tricked without an explicit warning to the user that the device identity has changed and that something funny might be happening.

If browsers implemented entering a device-specific code as part of the "do you accept this device" prompt on first use, that’d make it a much more usable and secure pattern. It’d standardize the pattern and encourage IoT shops to set up device ID checking properly.

To impersonate a device using .local certificate/identity pinning, a hacker would need physical access to the device to get its device ID code, then hijack the mDNS request with the correct device-specific .local address (on first use!), then set up a permanent MITM in order to impersonate the device. Otherwise the user would get a warning. Possible, but it requires serious resources. With physical access you can modify hardware, possibly install a false cert on the user’s machine, etc., so security in that scenario would be largely compromised already.

Perhaps some IoT devices use custom apps and certificates, but many just use HTTP or self-signed HTTPS. In my experience, IoT device makers have little experience with something like creating a CA, and getting users to install one would be a headache. Time is money on those projects, and if an entire factory is down because they can’t figure out how to install a certificate chain on Windows 7, users will complain loudly. Currently IoT is either full-on IT-installed certificate chains or no security at all, and many therefore go with none.

Therefore the current status quo with browser certificates on .local domains encourages far more security gaps, and effectively makes it difficult for non-internet-connected devices to operate securely without a fairly expensive and complicated IT setup.


I like this idea, as we already add a device serial ID to the AP-hosted mode, partly for this reason. Chromecasts show a random code and users seem to be able to handle this fine.


That works fine with a brand new device you just unboxed. But what should happen 3 years later? Should the IoT device have a certificate that is valid forever?


I would say yes, as long as certificates are unique per-device. The harm of my light bulb's cert being compromised is less than the harm of my light bulb no longer working if the manufacturer's CA goes offline and it can't renew its cert.


HTTP traffic would be unencrypted, so everyone (esp. on Wifi) could record passwords etc. flying around. With HTTPS, you at least need to MITM the connection to do that. If you establish trust in some other way (cert ID printed on the device?), the connection is secure.


Not just intercept: Using HTTPS prevents messing with the connection in-flight. So an attacker won't be able to inject their own payload into that web page you just requested.


Except you’re using garbage certificates so anyone could MITM you and inject whatever they like.


In what way are self-signed certs garbage? They're essentially the same as SSH host keys.


That's a separate kind of attack, not intrinsic to the protocol. If the keypair got leaked or a CA misbehaves and issues multiple certificates that can be used for a host, then yes, it can happen.


It's like calling someone you don't know, but over a secure line.


I love this idea. There are enough influential tech people who read HN, can we make this happen please?


Thirded. This sounds like a sound solution.


How is that different than how self signed certs work now?

My browser warns me, I can accept the warning for that particular certificate, and it warns me again if it changes..


At least for Chrome and Firefox, I can't easily accept a self-signed cert permanently. It asks again if I exit the browser.


Do you have them configured to clear those settings on exit? Is the certificate actually the same when you visit the site again?

Chrome and Firefox remember the acceptance of self-signed certs for a long time on my PC.


Why do you need to use self-signed certs? (As contrasted with CA-signed certs that happen to be signed by a CA that you own and trust as suggested by arwineap?)

It took me a little over an hour one evening to figure out how to create my own CA, trust it, and sign certs for all my local devices (except my UniFi cloud controller, which I admit I gave up on due to time).


Because 99.99%+ of users don't have the technical skill to do this, but still need to be able to access local devices and it would sure be good if they could do so in a secure manner?


So the answer is to subvert the global certificate infrastructure that protects web traffic? No, it isn’t. Your IoT device has no security at all if a non-technical user is setting it up or if it doesn’t have a way of accepting a user configured certificate, and you shouldn’t pretend otherwise by dressing it up in bad certificates and worthless encrypted tunnels.

Just use HTTP.


I mean if they’re worthless tunnels then so is every SSH tunnel. Should we just go back to telnet?


On your own network? Does it really matter whether you use telnet or ssh? And if it’s on a shared network, don’t you have an IT department that can set up the local key infrastructure and push out certificates?

The argument here is that we should enable lots of shitty IoT devices to masquerade as being secure, and inure browser users to click ‘yes’ to accepting a broken certificate.

If it’s on a managed network, IT can set up a certificate and push that out to client machines. If it’s on your home network you can do that (unless your IoT device can’t take a user configured client cert, in which case it’s rubbish anyways), and if you can’t then you might as well use HTTP.


Ok we can always use HTTP instead. I personally hate how using HTTPS gets harder and harder every single year.


A lot of the devices mentioned ("routers, printers, and self-hosted IoT devices") don't give ways to change the certificate. Besides, not every individual or company wants to or is knowledgeable enough to manage their own CA. For the audience in HN it might be an hour one evening, to others the instructions read like black magic.


Routing is scary magic. DNS is scary magic. Wifi is scary magic. Everything in computing is scary magic until someone writes an app with good UX for it. It's not a fundamental problem.


It is a fundamental problem, because you need specialized knowledge to understand what is being presented and asked of you in the UI.

The average user knows absolutely nothing about routing, most will throw their hands up or their eyes will swim if you so much as mention something like IP address.

They also know nothing about DNS and don't have to: because we always give them defaults that they never see and they go along with their lives.

As for wifi, once again, largely automated. Most people never change the default SSID and password. There are some manufacturers that make a good UI, but it stops at the SSID and password because that's the extent of most users' understanding. Some users have a vague understanding that 2.4GHz and 5GHz are different, but don't know the significance of the difference. Channel, authentication type, and other options aren't given to users in those UIs because they simply wouldn't know what to do with them, and people don't read manuals anyways.


> SSH can do encryption without requiring identity verification. It handles it by asking "Do you want to trust this new server?".

The problem is to figure out whether to trust the server you need to get its fingerprint through another channel. Is there an HTTPS equivalent of that?


You don't need to get the fingerprint through another channel. Getting the fingerprint through another channel prevents some classes of attacks. Blindly storing the first fingerprint offered also prevents a variety of attacks.


> It handles it by asking "Do you want to trust this new server?"

That's basically how it works, though: your OS packages a group of trusted CA certs. You can add additional trusted CA certs, even ones minted by you, to ensure your apps trust the connection.


There are two options:

* Manually install a root certificate, which is a confusing process for most end users and a non-starter for anyone who cares about security. (Imagine walking your parents through the process.)

* Trust a self-signed certificate, which is an increasingly difficult and counterintuitive process since Chrome and Firefox started competing to see who could destroy their usefulness faster. I'm not even sure if it's possible anymore.

Neither of these are acceptable.


I mean, I’m not sure there’s a solution that will make everyone happy then. Making trusting self-signed certs easy and not scary has real security implications, because users just click through warnings.


Making casual users create their own root certificates sounds like an even worse problem. Now an attacker isn't restricted to impersonating your lightbulbs. They can impersonate any domain if they can get your private CA. Now imagine if an IoT vendor engages in questionable practices like creating the CA for you and the user only has to download an exe that automatically installs the root certificate. The benefit for the vendor would be that all devices you order from their website would be shipped with correctly signed https certificates. Later a hacker dumps the database with root CAs and uses it to impersonate your bank.


That’s why self-signed and "local signed" should be distinct concepts, IMHO. The .local domain is already special-cased, and could provide a different UI path more akin to how SSH works. AFAICT, you can’t get an HTTPS cert for a .local domain, so it’d not break the existing https security model. It’d provide a more secure way for apps like syncthing to provide a secure local UI as well. Getting browsers to accept my self-signed certificate is a pain and makes people just use http.


Honestly, I don't trust most end users to install root certificates

If you are doing something for an end user, I think it makes a lot of sense just to get a certificate; it's just not a large barrier anymore.


The mechanism SSH uses is called Trust on First Use ("TOFU") and is closer to what used to be HTTPS certificate pinning. In this scheme, certificates never expire, and if they do, clients warn about the unexpected change in certificate.

It is different from the CA PKI system, where the client trusts any certificate signed by a trusted CA without prompting the user at all, and doesn't prompt the user if the certificate for a site changes.


Self-signed certificates give you basically this. It's a bit of a hassle to mark them as trusted, but you only have to do it once.


That has not been my experience with current versions of Chrome.


Crazy idea: Why not serve an initial page over HTTP, and then implement encryption in JS using webcrypto for all subsequent calls?

I'm not sure self-signed HTTPS can do much better than this anyways.

(Yes, yes, it's a crazy idea, hehe)


You can no longer do webcrypto because the initial page is compromised.

Self signed HTTPS works for this case as long as you know the fingerprint/cert to accept.


Oh, yeah... webcrypto only works on HTTPS.

So you would need to ship a crypto library in JS, hehe :)

Self-signed certs probably do work, if you install the certificate root on your machine. It's just not something you would advise end-users to do.


The other problem is shipping a crypto library if the entire page and script is not served over HTTPS means that it's no longer useful, because the crypto library is compromised.


> (Edit: RFQ, not my autocomplete’s RTF)

Sorry for the mostly insubstantial comment but it may help you in the future: it’s RFC (Request For Comments) not RFQ.

And it’s IETF (Internet Engineering Task Force) not ITEF.



