> If we assume the purpose of a WoT is to unambiguously and unimpeachably map public keys to human beings
Why would we assume that? I can think of a few other use-cases:
* I want to verify the PGP keys used to sign packages in some GNU/Linux distribution
* I want to verify the keys used by anonymous remailers (or at least a PGP key used to sign a mixmaster / mixminion pub key)
* I want to verify the PGP key used by a business to sign official messages (such businesses do exist, as I found out a few months back)
It is also wrong to think that the purpose of the web-of-trust is to unambiguously or unimpeachably map public keys to anything. The web-of-trust is a heuristic that makes a particular kind of impersonation more difficult, so that PGP is more convenient to use. If you need something unambiguous, you need to manually verify keys (which most people do anyway).
I think it is reasonable to observe that the current PGP WoT is designed to try to unambiguously and unimpeachably map public keys to entities (if not necessarily literal human beings). It is ambiguous to me in the original text whether the author was claiming that all WoTs must have that characteristic or if he was just describing the current one.
Personally, I think this has been a problem with many security systems so far. You can't be 100% confident about anything, so building a system in which absolute confidence is a fundamental element is doomed to fail. See also the SSL/TLS cert system, in which it is assumed that you 100% trust every cert vendor in your key store, which was absurd even before we consider how the default key store has inflated over the years. If the WoT is going to work, it has to have some concept of levels of trust. I'm willing to sign my wife's key with the highest authority I can give. I'm willing to sign, with a medium degree of trust, the key of a dev I meet at some meetup who definitely seems to have the same personality and knowledge as the person I know online. I'm willing to sign other people's keys with low degrees of trust.
Sure, dealing with the consequences of partial trust is difficult. But since you can't have full trust, it is less difficult than our current systems based on it, inasmuch as a thing that is possible is less difficult than a thing that is not possible.
> I think it is reasonable to observe that the current PGP WoT is designed to try to unambiguously and unimpeachably map public keys to entities (if not necessarily literal human beings).
I don't really think the PGP WoT can "unambiguously and unimpeachably" provide that mapping, or is even intended to. At best, I think that the WoT documents trust relationships between entities with keypairs.
I actually really like this feature of the WoT, because I think it does a good job of simulating the actual trust relationships between people in real life. In a private conversation, I might imagine a friend vouching for another person as trustworthy; a key signature from my friend does something similar, in a secure fashion. This is good because I don't want my web of trust or social network to be able to say "this key is certain to be trustworthy": after all, it can't actually guarantee that. But I don't mind seeing trust opinions from my friends, and their friends' friends.
If anything, I think the trust levels in GPG ("unknown", "marginal", "full") are too unclear, and I know that these labels mean different things to different people. I'd prefer the ability to add a short note to the signature so I could say something like,
"I know this person well and you can be confident that this key is theirs, but I don't think they're careful enough to trust their signatures."
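For what it's worth, OpenPGP already has a mechanism close to this: signatures can carry free-form "notation data" (RFC 4880), and GnuPG exposes it for key certifications via `--cert-notation`. A sketch of what attaching such a note might look like; the notation name, domain, and key ID here are placeholders:

```shell
# User-defined notation names must take the form name@some.domain.
# ALICE_KEY_ID is a placeholder for the key being certified.
gpg --cert-notation note@example.org="Verified key ownership in person; signer not careful certifying others" \
    --sign-key ALICE_KEY_ID
```

Whether any software downstream actually surfaces these notations to the person deciding whether to trust the key is, of course, a separate problem.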
I've never understood why a weighted WoT system has never become popular. I trust some friends implicitly, and I trust some other friends less. I trust friends of friends, but generally less than I trust direct friends. I'm still willing to trust someone whom two friends of friends know, and if you can trace me a dozen links to Kevin Bacon, I'll trust him, too. Sure, there's some hard graph theory and weighting to be done, but I can't imagine those aren't problems that can be solved with modern big-data techniques.
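One simple way the weighting could work (a hypothetical sketch, not how GPG actually computes trust): put a weight in [0, 1] on each signature edge, and score a distant key by the best product of weights over any path, so trust decays naturally with chain length:

```python
# Hypothetical weighted web-of-trust: trust in a distant key is the
# maximum, over all simple paths, of the product of edge weights.
def path_trust(graph, source, target, limit=12):
    """Best product-of-weights over simple paths from source to target."""
    best = 0.0

    def dfs(node, acc, visited):
        nonlocal best
        if node == target:
            best = max(best, acc)
            return
        # Prune long chains and branches that can't beat the best so far.
        if len(visited) > limit or acc <= best:
            return
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in visited:
                dfs(nxt, acc * weight, visited | {nxt})

    dfs(source, 1.0, {source})
    return best

# Example: I trust alice fully and bob a little less; both vouch for carol.
web = {
    "me":    {"alice": 1.0, "bob": 0.8},
    "alice": {"carol": 0.9},
    "bob":   {"carol": 0.5},
}
print(path_trust(web, "me", "carol"))  # 0.9 via alice beats 0.4 via bob
```

Taking the max over paths is just one aggregation choice; you could instead combine independent paths so that multiple weak vouches add up, which is closer to what the commenter describes.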
If someone sends you an email and you partially trust the key, how does that map to the contents of the email? How does it map to executable code? Source code? Images? Digital signatures?
It's like saying some people's trust is a square circle. It's a grammatically valid sentence, but it doesn't map onto any useful meaning.
Partial trust can mean (at least) two things: trusting a key for different purposes, or trusting it with different levels of validity.
Different purposes: "Do I trust that this key correctly identifies this person?" is a separate question from "Do I trust this person to do proper verification before signing others' keys?" (i.e., trusted link in the Web of Trust)
Different validity: "I've met this person and verified they own the key", is different from, "They've identified themselves with two forms of government ID", is different from "I've known them all my life".
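The purpose/validity split is roughly how GnuPG already works: you assign ownertrust (do I trust this person to certify others?) by hand, and key validity (is this key really theirs?) is computed from signatures. A minimal sketch of that computation, using GnuPG's default thresholds of one fully trusted or three marginally trusted certifiers (the `--completes-needed` / `--marginals-needed` settings); the names in the example are made up:

```python
# Sketch of GnuPG-style validity: ownertrust is assigned by the user,
# validity is derived from who has certified (signed) the key.
FULL, MARGINAL, UNKNOWN = "full", "marginal", "unknown"

def key_validity(signers, ownertrust, completes_needed=1, marginals_needed=3):
    """signers: keys that certified the target; ownertrust: my trust in each signer."""
    fulls = sum(1 for s in signers if ownertrust.get(s) == FULL)
    marginals = sum(1 for s in signers if ownertrust.get(s) == MARGINAL)
    if fulls >= completes_needed or marginals >= marginals_needed:
        return "valid"
    return "unknown"

# My ownertrust assignments (hypothetical people).
trust = {"wife": FULL, "meetup_dev": MARGINAL, "conference": MARGINAL}

print(key_validity(["meetup_dev", "conference"], trust))  # "unknown": only 2 marginals
print(key_validity(["wife"], trust))                      # "valid": 1 full certifier
```

Note that validity answers only "is this key theirs?"; it says nothing about whether you should then trust the owner's own certifications, which is exactly the distinction the parent comment is drawing.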