
It might be possible to exclude enough realistic current-day threats to eventually end up with something that "works", but I don't think that's useful in any way.

Nonetheless, if you want to exclude computers, the human equivalent of "stick it in an emulator" is the old philosophers' "brain in a vat" problem. That's well-traveled ground, and there is no proof you're not in a vat.

There is no way to prove you have not been compromised, because there is no way to prove that no theoretical advancement will ever occur in the field in the future (or not just advancement, but NSA declassification, etc.). So you're limited to one snapshot in time, at the very least.

You're asking for something that's been trivially broken innumerable times outside the math layer.

It's hard to say whether you're asking for steganography (which isn't really "math"), an actual mathematical proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but, with enough added constraints, might eventually fit your requirements.



> It's hard to say whether you're asking for steganography (which isn't really "math"), an actual mathematical proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but, with enough added constraints, might eventually fit your requirements.

None of those; I know the current state of the art in cryptography/authentication, and that it doesn't quite cover what I'm asking for. I'm basically just waiting for you to say that the specific kind of designed proof I asked for is impossible even in theory, so I can go and be sad that my vision for a distributed equivalent to SecondLife[1] will never happen.

My own notion would be that the Goodlandian agent would simply request that his contact come and look at the machine itself, outward-in, and verify to him that he's running on a real, trusted piece of hardware with no layers of emulation, at which point the contact gives him an initial seed for a private key he will use for all communication from then on. The agent stores that verification on his TPM as a shifting nonce (think garage-door openers), so that whenever the TPM is shut down, it immediately becomes invalid as far as the contact is concerned--and must be revalidated by the contact again coming and looking at the physical machine. All we have to guarantee after that is that any method of introspecting the TPM on a piece of currently-trusted hardware fries the keys--which is, I think, a property TPMs already generally have?
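To make that concrete, here's a toy sketch, modeling the TPM as nothing but a sealed seed plus a monotonic counter that's lost on shutdown (ToyTPM, Contact, and everything else here are illustrative names of mine, not a real TPM API):

    import hmac, hashlib, os

    class ToyTPM:
        def __init__(self, seed):
            self._seed = seed     # sealed during the contact's in-person visit
            self._counter = 0     # monotonic; lost/invalidated on shutdown

        def next_attestation(self):
            self._counter += 1
            mac = hmac.new(self._seed, self._counter.to_bytes(8, "big"),
                           hashlib.sha256).digest()
            return self._counter, mac

    class Contact:
        def __init__(self, seed):
            self._seed = seed
            self._last = 0        # highest counter value accepted so far

        def verify(self, counter, mac):
            expected = hmac.new(self._seed, counter.to_bytes(8, "big"),
                                hashlib.sha256).digest()
            # Counter must strictly advance, garage-door-opener style;
            # a replayed or reset TPM fails and forces re-inspection.
            ok = counter > self._last and hmac.compare_digest(mac, expected)
            if ok:
                self._last = counter
            return ok

    seed = os.urandom(32)         # handed over at physical inspection time
    tpm, contact = ToyTPM(seed), Contact(seed)
    ctr, mac = tpm.next_attestation()
    assert contact.verify(ctr, mac)        # fresh attestation accepted
    assert not contact.verify(ctr, mac)    # replay of the same nonce rejected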

Besides being plain-ol' impractical [though not wholly so; it'd be fine for, say, inspecting and then safe-booting military hardware before each field-deployment], I'm sure there's also some theoretical problem even here that renders it all moot. I'm not a security expert. :)

---

[1] More details on that: picture a virtual world (technically, a MOO) to which any untrusted party can write and hot-deploy code to run inside an "AI agent"--a self-contained virtual-world object that gets a budget of CPU cycles to do whatever it likes, but runs within its own security sandbox. Also picture that people who are in the same "room" as each AI agent are running an instance of that agent on their own computers, and their combined simulation of the agent is the only "life" the agent gets; there is no "server-side canonical version" of the agent's calculations, because there are no servers (think Etherpad-style collaboration, or Bitcoin-style block-chain consensus).
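A toy model of the "no canonical server" part, assuming the agent is a deterministic function of its inputs--every peer in the room steps the same agent, and "agreement" is just everyone arriving at the same state digest (all names illustrative):

    import hashlib

    def step_agent(state, inp):
        # Deterministic: same prior state + same inputs => same next state.
        return hashlib.sha256(state + inp).digest()

    peers = ["alice", "bob", "carol"]             # everyone in the "room"
    states = {p: b"genesis" for p in peers}
    for inp in [b"tick-1", b"tick-2"]:            # same input stream to all
        for p in peers:
            states[p] = step_agent(states[p], inp)

    digests = {s.hex() for s in states.values()}
    assert len(digests) == 1   # every peer computed the same agent; no server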

Now, problematically, AI agents could sometimes be things like API clients for out-of-virtual-world banks. How, then, should they go about repudiating their own requests?


"Bitcoin-style block-chain consensus"

Majority rule not consensus. Given a mere majority rule protocol, I think your virtual world idea could work.
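
Something like this minimal sketch, say--take whichever state digest most peers report, with no attempt at Byzantine-fault-tolerant consensus (names illustrative):

    from collections import Counter

    def majority_state(reports):
        """reports: peer name -> state digest that peer computed."""
        digest, votes = Counter(reports.values()).most_common(1)[0]
        if votes <= len(reports) // 2:
            raise ValueError("no majority; room state is disputed")
        return digest

    reports = {"alice": "abc1", "bob": "abc1", "harry": "dead"}
    assert majority_state(reports) == "abc1"   # Harry is simply outvoted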


Eh, either way, it's the same problem. Imagine you're an agent for BigBank, thinking you're running on Alice's computer. If you authenticate yourself to BigBank, BigBank gives you a session key you can use to communicate securely with them--and then you will take messages from Alice and pass them on to BigBank.

But you could also be running, instead, on an emulator on Harry's computer--and Harry wants Alice's credit card info. So now Harry reaches in and steals the key BigBank gave you, then deploys a copy of you back into the mesh, hardcoded to use that session key. Alice then unwittingly uses Harry's version of you--and Harry MITMs her exchange.
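The underlying problem: a symmetric session key is a pure bearer credential, so BigBank's check passes identically for Harry's copy. A toy illustration (all names mine):

    import hmac, hashlib

    session_key = b"issued-to-the-agent-on-alices-machine"

    def bank_accepts(msg, tag):
        good = hmac.new(session_key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(tag, good)

    # Harry reads the key straight out of the emulator's memory...
    stolen_key = session_key
    forged = hmac.new(stolen_key, b"send funds to Harry",
                      hashlib.sha256).digest()
    assert bank_accepts(b"send funds to Harry", forged)   # indistinguishable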

In ordinary Internet transactions, this is avoided because Alice just keeps an encryption key (a pinned cert) for BigBank, and speaks to them directly. If you, as an agent, are passed a request for BigBank, it's one that's already been asymmetrically encrypted for BigBank's eyes only. And that works... if the bank is running outside of the mesh.
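Here's that defense sketched with PyNaCl--just one concrete choice of mine; any asymmetric scheme with a pinned public key behaves the same way--where the agent only ever relays an opaque blob it cannot open:

    from nacl.public import PrivateKey, SealedBox   # pip install pynacl

    bank_key = PrivateKey.generate()
    pinned = bank_key.public_key       # Alice pins this out of band

    # Alice encrypts for BigBank's eyes only, then hands it to the agent.
    blob = SealedBox(pinned).encrypt(b"pay rent from acct 42")

    relayed = blob                     # the agent (or Harry) just forwards bytes
    assert SealedBox(bank_key).decrypt(relayed) == b"pay rent from acct 42"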

But if the bank is itself a distributed service provided by the mesh? Not so much. (I'm not sure how much of a limitation that is in practice, though, other than "sadly, we cannot run the entire internet inside the mesh.")



