
why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?

I'm genuinely interested in why people feel local clients are more secure than something running in a browser. It's something I came across when writing an SSH client in the browser (www.minaterm.com).

I guess it's the potential for an HTML page to be updated over time so that it no longer reflects an audited version. But that seems like a failing in our browsers more than anything else. Perhaps an external service that verifies the hash of a page would help? That would need browser support, of course.

The only thing I could think of that could be implemented in current browsers is a small stub page which calculates and displays a hash of the HTML/JavaScript about to be launched. The stub would need to be small enough that a user could manually check that nothing malicious has been added to it.
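
Roughly what I have in mind, as a sketch only (the bundle URL and element ids are made up, and the hashing uses the Web Crypto API):

    // Hypothetical stub: fetch the app bundle, show its SHA-256, and only
    // run it after the user has had a chance to compare the digest against
    // a published/audited value. "/app.bundle.js" and the element ids are
    // placeholders for illustration.

    async function sha256Hex(data: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", data);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    async function loadAudited(bundleUrl: string): Promise<void> {
      const source = await (await fetch(bundleUrl)).arrayBuffer();

      // Display the digest so the user can manually check it.
      document.getElementById("bundle-hash")!.textContent = await sha256Hex(source);

      // Only execute the code once the user explicitly confirms the hash.
      document.getElementById("run-anyway")!.addEventListener("click", () => {
        const script = document.createElement("script");
        script.textContent = new TextDecoder().decode(source);
        document.body.appendChild(script);
      });
    }

    void loadAudited("/app.bundle.js");

Of course the stub itself is still delivered by the same server, so this only pushes the trust problem back one level.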



"why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?"

A good question.

In order to have end-to-end security, you need some sort of secret that is known only at the endpoints (possibly negotiated over some kind of key-exchange protocol), and it must be impossible for the server in the middle to obtain those secrets.

The core problem is that a webpage is really, really, really designed to be a representation of the server, sitting in a client sandbox. There is no built-in way for a web browser to inject anything into the connection that could be used to secure it in a way the server can't see. All the local storage the page has access to, the server has access to. All the cookie data the page has access to, the server has access to. Anything else you can come up with that the page has access to, the server can either read or destructively overwrite by sending down the right HTTP or HTML. There's no independent client "context" that the page can passively and safely use, and in a world where the page is running JavaScript provided by the server, it's not even clear what the page could "use" that the server couldn't also "use" by having its JavaScript read it and send it home.

There is, therefore, no way to use the web through a conventional browser to create an end-to-end connection that the server doesn't have full access to. Browsers just aren't designed for this use case.

Note that nothing stops you from providing an HTTPS REST interface that allows full end-to-end encryption, used by a client that is capable of holding local secrets and does not give the server any way to run code against it. It is specifically the browser that makes this impossible. I'd also observe this isn't necessarily fundamental; browsers could be changed to fix it, but I'm not sure that would be a good idea. Browsers are already insanely complicated security environments that just barely work on the best of days. I'm not sure I want to add "secure-from-the-server secret storage" to the list of things a browser is supposed to be able to do. (It's also possible that certain browser extensions have already hacked together this ability, such as the video chat extensions; I haven't studied them in that much detail, but AFAIK secure secret storage and key negotiation aren't generically and generally available.)
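
As a hypothetical sketch of the kind of client I mean (the endpoint URL is invented, and the peer's public key is assumed to be exchanged and verified out of band): the client keeps a key pair the server never sees and only ever pushes ciphertext through an ordinary HTTPS endpoint.

    // Sketch of end-to-end encryption over a plain HTTPS API, assuming the
    // client is native code (or at least code the server cannot swap out).
    // The "/messages" endpoint is made up, and the peer's public key is
    // assumed to have been obtained and verified out of band.

    async function makeLocalKeyPair(): Promise<CryptoKeyPair> {
      // The private key stays on this machine; the server never sees it.
      return crypto.subtle.generateKey(
        { name: "ECDH", namedCurve: "P-256" },
        false, // not extractable
        ["deriveKey"],
      );
    }

    async function deriveSharedKey(
      myPrivate: CryptoKey,
      peerPublic: CryptoKey,
    ): Promise<CryptoKey> {
      return crypto.subtle.deriveKey(
        { name: "ECDH", public: peerPublic },
        myPrivate,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"],
      );
    }

    async function sendEncrypted(key: CryptoKey, plaintext: string): Promise<void> {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext),
      );
      // The server only relays opaque bytes; it holds no key material.
      await fetch("https://example.com/messages", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          iv: Array.from(iv),
          data: Array.from(new Uint8Array(ciphertext)),
        }),
      });
    }

The same Web Crypto calls are available to a page, but there the code making them was itself delivered by the server, which is exactly the problem described above.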


Users fall into two categories:

1) Don't really care about privacy. Might not want their chat on the front page of the papers, but aren't going to go to great lengths to achieve that.

2) Actually care about privacy and are informed. There aren't many of these people, but they're trained to be wary of every outside dependency and every opportunity for hostile code injection. Crypto running in the browser can be replaced any time you load it if the host is compromised - either in the technical sense or the legal sense. Yes, it could be hashed, but it isn't, and there's no mechanism for this nor plans to build one.

Not to mention that the browser itself presents a pretty large attack surface.


> Yes, it could be hashed, but it isn't and there's no mechanism for this nor plans to build one.

That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.

> Not to mention that the browser itself presents a pretty large attack surface.

As does the operating system itself. I would have thought that with a local (likely native) client, you just have one less layer to get through.


> That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.

This sounds like a theoretical impossibility. The server's source code is by nature closed, and while the server could provide you a copy of the source with a signature, there's really no way for you to verify that the code you've been promised is the code that is running.


A browser feature would be required that could calculate/display the hash of the delivered code and optionally verify it against a third-party server. Ideally you'd want to have particular versions signed off as "audited", etc.

I don't see how it's a theoretical impossibility.
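
As a sketch of what that verification might look like (the audit server URL and its response format are entirely invented):

    // Hypothetical "verified load": compare the hash of the delivered code
    // against a value published by an independent audit service before
    // running it. Both URLs and the response shape are invented.

    async function sha256Hex(data: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", data);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    async function verifyAgainstAuditor(
      bundleUrl: string,
      auditUrl: string,
    ): Promise<boolean> {
      const bundle = await (await fetch(bundleUrl)).arrayBuffer();
      const actual = await sha256Hex(bundle);

      // The audit service publishes hashes of the versions it has signed off on.
      const { auditedHashes } = await (await fetch(auditUrl)).json();
      return auditedHashes.includes(actual);
    }

For this to mean anything, the check would have to live in the browser itself (or an extension), not in the page, since a page-level check can be removed by the very server it's meant to police.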


You're neglecting the server-side code. If you have access to the full source code in order to verify it, you're not describing a web service; you're describing a local application that happens to be implemented in a browser.

You already can distribute signed browser add-ons.


I was looking for something like minaterm the other day; trouble is I'd be scared to put my credentials into it. And when I think about it logically that isn't rational (PuTTY can grab my credentials just as easily), but still.


It's not entirely irrational. If PuTTY wants to grab your credentials, they have to ship a backdoored binary that, once downloaded, exists forever and can be examined and reverse engineered in the wild. Someone running a web service (or someone who has compromised said service) can target a particular user for a single session, and the evidence that an attack occurred will only exist until a few caches get cleared.


Yes, and I would also be scared to. It's interesting thinking about why, though. I think there's a significant social/psychological component to the decision.

I'd also be less scared if it was running on my own server, but it's not clear to me that this is completely logical either.


If the code can't change, what's the point of having it delivered through the browser each time? Aren't you better off saving the bandwidth by downloading it once?



