I feel like you danced around the central point of my post: there are no suggestions for how to secure and harden devices without refining these trusted computing techniques. We need to harden these devices.
Your argument is that no one can be trusted to make them. But my argument is that if you believe that, then you can't trust anyone to make anything, one way or the other.
Surely, rather than scrap the whole thing (because we don't like the vendors), we should start proposing stronger and more consumer-centric versions of these techniques?
A skeptical and cynical part of me notes: given the tiny number of users who can actually verify their machines are what they say they are, all "trusted computing" rebukes do is argue against a tech that actually does mitigate real attacks for the vast majority of uses.
> there are no suggestions for how to secure and harden devices without refining these trusted computing techniques. We need to harden these devices.
Because the thing you are asking for is not possible. You have a bad premise:
> And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owner's say-so, and it's getting harder all the time.
Which has two flaws. First, it wasn't a challenge for them; they were just using it as an excuse to whine about the second thing, which actually is a challenge. And second, the only real security is math (encryption), and encryption doesn't require any special support from the hardware.
If you have full disk encryption with a strong passphrase and the device is currently locked (i.e. the key is not in memory), the only way to get that data is to have the passphrase or break the encryption, and breaking the encryption is not expected to be possible.
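To make the "only real security is math" point concrete, here is a minimal Python sketch of the usual FDE construction: a slow KDF turns the passphrase into the disk key, so an attacker without the passphrase has nothing to attack but the cipher itself. The passphrase, salt handling, and iteration count here are illustrative; real systems (e.g. LUKS2) typically use Argon2 rather than PBKDF2.

```python
# Minimal sketch: full disk encryption reduces everything to the passphrase.
# A slow key-derivation function turns the passphrase into the disk key;
# without the passphrase, the only options are guessing it or breaking AES.
import hashlib
import os

salt = os.urandom(16)  # stored in the on-disk header; it need not be secret
passphrase = b"correct horse battery staple plus two more words"

# Each guess costs the attacker a million HMAC-SHA256 evaluations.
disk_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=1_000_000)

# Back-of-the-envelope: a 7-word Diceware passphrase has ~90 bits of entropy.
# Even at a (generous) billion guesses per second, 2**90 guesses take ~10^10
# years -- which is why "break the encryption" is not expected to be possible.
print(disk_key.hex())
```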
The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.
But this is not a thing you can do anything about. If someone has physical access to your device, they can simply steal it and leave you with one that looks identical, behaves identically up to the point where you enter your passphrase, and then transmits that passphrase to the attacker. Nothing about the original device can fix that, because it isn't the original device.
Similarly they can install a surveillance device in your room that can record you entering your passphrase and then come back tomorrow to take your device.
The only answer to these attacks is physical security. Secure boot does nothing.
> Which has two flaws. First, it wasn't a challenge for them; they were just using it as an excuse to whine about the second thing, which actually is a challenge. And second, the only real security is math (encryption), and encryption doesn't require any special support from the hardware.
They bought a hack from another company for an old model of phone. Had the case involved the latest model of handset, the vendor claimed they had not yet hacked it (but were confident they would).
> The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.
And if the device is tamper-resistant? I had this same conversation with a person who hated Yubikeys. Nearly exactly the same conversation. It made even less sense there, because the entire point of a Yubikey is to be tamper-proof.
> The only answer to these attacks is physical security. Secure boot does nothing.
I think maybe what I object to most is that essentially the only attacks considered in this discourse are attacks made directly by nation-states, at scale. Not only is it clear that handsets and self-built computers are subject to those (at scale) regardless, but it's doubly unclear that forgoing a TPM would protect you from attacks against TPM-backdoored devices: your information in an online world is stored on commodity clouds that probably DO have that hardware, and if such an attack exists, it will certainly be able to ignore FDE.
Even if we ignore that, a TPM mitigates attacks we actually see in the real world and increases the difficulty of mounting them. Average consumers are a case that should be considered in this discourse.
Most people can't effectively harden themselves against nation-state-level attacks (if only because incarceration and interrogation exist, and even physical security won't stop those), yet nation-state-level attacks involving a conspiracy between manufacturers and the NSA are the justification used to discredit the use of TPMs.
And for the dubious benefit of being able to say, "Well, I have a spec, and presumably this board is fully defined by this spec." Of course, truly verifying that a board has no back doors is not made substantially easier by the absence of a TPM. So I have trouble believing this is anything but an argument among factions who each want final say over a wide variety of consumer hardware.
> Most people can't effectively harden themselves against nation-state-level attacks (if only because incarceration and interrogation exist, and even physical security won't stop those), yet nation-state-level attacks involving a conspiracy between manufacturers and the NSA are the justification used to discredit the use of TPMs.
That is missing the point: this is not about the security of an individual against targeted attacks, but about the reliability of our governing structures. To harden a democracy against subversion by minorities, it's not necessary for each individual to be able to fend off an army. But that does not mean implanting every citizen with a centrally triggered kill device would be a good idea.
Also, no conspiracy is required: if there is, say, a remote access key, then that key is a single point of failure that no company can defend if a nation state wants access to it, even if defending it may well be the company's intention.
> They bought a hack from another company for an old model of phone. Had the case involved the latest model of handset, the vendor claimed they had not yet hacked it (but were confident they would).
There is actually an important distinction to make here too.
When you have something like Secure Boot, whose purpose is to make the device trustworthy enough to enter your passphrase into, that purpose is simply impossible to achieve. You don't know if the device you're using is actually the same device, and you don't know if someone is watching you; the thing it claims to do is not a thing it can actually accomplish.
But Apple does something separate from that. They have tamper-resistant hardware for storing keys, so that the hardware can store a strong key and enforce a maximum number of guess attempts for a weaker password/PIN.
The disadvantage of this is that it's pure attack surface compared with using a strong passphrase to begin with. If you have a strong passphrase, the attacker has to break the encryption. If you have a weak PIN for hardware protecting a stronger key, the attacker can break the encryption, break or backdoor the hardware, or guess the weak PIN before hitting the maximum number of attempts.
The advantage is of course that it lets you use a PIN instead of a long passphrase, but there is also something else. That hardware doesn't need root. All it needs is to store a key while the device is locked and then spit it back out if you give the right PIN and erase it if you make too many bad attempts. No part of that inherently requires it to be at ring -3. It can be completely independent from all of that.
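To illustrate how small that contract is, here is a hedged Python sketch (the class, names, and numbers are made up for illustration, not any vendor's actual firmware): the component holds a key, compares a PIN, counts failures, and erases itself. Nothing in this interface needs access to the boot chain, the OS, or ring -3.

```python
# Sketch of the minimal "secure element" contract described above:
# store a strong key, release it on the correct PIN, erase it after
# too many failures. It needs no visibility into the rest of the system.
import hmac
import secrets

class KeyVault:
    MAX_ATTEMPTS = 10  # illustrative; real devices enforce similar small limits

    def __init__(self, pin: str):
        self._pin = pin
        self._key = secrets.token_bytes(32)  # the strong disk key, 256 bits
        self._failures = 0

    def unlock(self, pin_guess: str) -> bytes | None:
        if self._key is None:
            return None  # key already wiped; the data is gone for good
        # constant-time comparison, as tamper-resistant hardware would do
        if hmac.compare_digest(self._pin.encode(), pin_guess.encode()):
            self._failures = 0
            return self._key
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._key = None  # irreversible erase: the weak PIN stops mattering
        return None

vault = KeyVault(pin="4821")
assert vault.unlock("0000") is None      # wrong PIN: nothing comes out
assert vault.unlock("4821") is not None  # right PIN: key is released
```

The point of the sketch is the interface, not the implementation: a component with exactly this surface can protect a strong key behind a weak PIN without sitting anywhere near the boot process.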
> And if the device is tamper-resistant?
That's the problem with Secure Boot -- it doesn't matter. Stealing your passphrase by recording it is an attack that can be pulled off by a middle schooler with a nanny cam. It's easier to do that than to backdoor the firmware on a non-tamper-resistant computer, which at least requires you to know what "firmware" is. So what attack are we actually preventing at the cost of having untrusted and potentially vulnerable code at ring -3?
They don't need physical access to compromise your firmware. I thought that was one of the things the original article claimed, at least. The ME's firmware can be updated remotely as long as the system is plugged in (whether powered on or not).