1) Hide in the network. Implement hidden services. Use Tor to anonymize yourself.
SKETCHY ADVICE. Tor might make targeted attacks on you personally take more work, but because Tor attempts to provide anonymity and not confidentiality or integrity, it may make you more susceptible to dragnet surveillance.
2) Encrypt your communications. Use TLS. Use IPsec.
Great advice. But prefer TLS to IPsec, which is largely the province of commercial systems and the product of much more vendor-centric standardization.
3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn't.
Sure, I guess.
4) Be suspicious of commercial encryption software, especially from large vendors.
Absolutely, great advice.
5) Try to use public-domain encryption that has to be compatible with other implementations.
This starts out great. This sentence is great. It continues:
Prefer symmetric cryptography over public-key cryptography.
Dead on, absolutely. Number-theoretic public-key crypto is terrifying. But then:
Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
ARGH NO ARGH. First, conventional DLP systems also have suspect parameters. Second, the best ECC systems have parameters of well-known provenance. Third, ECC systems are known to be stronger than DLP systems. In fact, the most likely source of any claim about an NSA cryptographic breakthrough is that NSA has a viable attack on RSA-1024 (an IFP system, but six of one).
If you were going to change one thing about the way you encrypted in the wake of today's "new information", make it be that you abandon RSA.
RSA-1024 was assumed already to be breakable, but RSA-2048 and RSA-4096 will likely stay unbreakable for a long time.
It's really as many suspected: they are prepared to target the crypto implementations through exploits or backdoors, and if they can't, they can target your computer.
Also, open-source is of course safer and some of us have been repeating for years that you can't trust binary blobs, security being just one reason out of many.
>Also, open-source is of course safer and some of us have been repeating for years that you can't trust binary blobs, security being just one reason out of many.
Only if you're auditing the code and building it yourself, or have a trusted place where that's done. I would wager that the vast majority of users are just using the binary blob provided by someone. Otherwise, the "code" you're using could be as compromised as any closed-source program.
Again with the vast majority of users fallacy? Bring up a story of grandma for a full picture too.
Binary blobs can be checked easily if they originated from the source code published. Major Linux distributions, like Debian, have people all over the world auditing the repository. If a backdoor is planted, with the repository being public, most projects have a full history of who added what and when. Code reviews happen too.
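The check described above - comparing a shipped blob against one rebuilt from the published source - can be sketched like this (file names are placeholders; in practice the comparison is only meaningful when builds are reproducible):

```shell
# Sketch: verifying that a shipped binary blob matches one rebuilt from
# the published source. File names are placeholders for this illustration;
# reproducible builds are what make the comparison meaningful in practice.
printf 'example artifact' > rebuilt-from-source.bin   # stand-in for your own build
printf 'example artifact' > distributed.bin           # stand-in for the vendor blob

sha256sum distributed.bin rebuilt-from-source.bin     # eyeball the digests
cmp -s distributed.bin rebuilt-from-source.bin && echo "blob matches source build"
```

If the digests differ, either the build isn't reproducible or the blob wasn't built from the published source.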
You're right in that this mostly foils the NSA in its role as a global passive attacker. However, having a FOSS OS doesn't foil active, targeted attacks at all: the version of a package in the repo could be perfectly safe, while the version distributed to just your computer could be backdoored. (The NSA, after all, can FISA-order someone to give up their package signing key just as well as they can grab a decryption key, and then MITM just your connection to deliver an "automatic security update" that nobody else got.)
> ...while the version distributed to just your computer could be backdoored.
They would have to get the base distribution signing key, which is probably a shared secret that requires multiple people to unlock.
Have there been any hints of NSLs or FISC orders requiring the disclosure of private Linux distribution keys, some of which may not even be held in the US?
The funny thing is, you don't actually need to replace a package that's part of the base distribution. Every package format we have (RPM, DEB, etc.) runs its install- and upgrade-scripts as root, so every package can potentially do anything on install, including patching the installed files of other packages (or cleverer things, like adding SSL trust roots.)
So, since they don't specifically need to patch your openssl package to compromise your openssl library, they could just as well suborn the signing key of (presuming Ubuntu) some PPA author, as long as the user they're trying to bug has that PPA in their sources.list.
Solaris introduced some concepts in its IPS packaging to avoid packages changing the system. IMHO, it is very good but adds some pain to developers and sysadmins alike.
Though I do use them, the PPA concept is kind of flawed, because there's no way of saying which packages you will accept from a particular PPA, and no way for a PPA signing key to limit the packages it covers.
Packaging systems do detect file collisions, however, and with something vaguely similar to checkinstall or fakeroot they could detect attempts to overwrite other packages' files in the pre- and post-install scripts.
The Debian OpenSSL bug was and is highly embarrassing, yes, but the fact that the source was open was absolutely instrumental in its discovery. It's also difficult to find a better example of the right way to react to such a bug.
Certainly open source benefits only so far from the review it receives. The NSA could conceivably submit a patch to openssh to Debian that once again subtly breaks security.
On the other hand, audit is possible, which is a hell of a lot better than the alternative, limiting their options to underhanded subversion is again better than the alternative, and to my knowledge nobody has actually ever picked up on a case of this happening (while we do know they have had the security of close-source software subverted).
And you're auditing the compiler code, which you bootstrapped up yourself without using a possibly backdoored compiler on the way.
(I wonder if anyone's audited gcc/clang binaries installed in readily available linux/bsd/MacOSX distros to see if they're free of "Reflections on Trusting Trust" concerns?)
I don't know the answer to your question, but it's worth noting that "auditing the binaries" no longer has to mean looking through every line of disassembled byte-code, as was originally thought:
Yeah, it blew my mind when I first encountered it. The basic idea is looking at compilers as both data and deterministic (for the individual compiler) transformations of data.
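The technique being alluded to is diverse double-compiling: rebuild the compiler's published source with an independent trusted compiler, then rebuild it again with that result, and compare against the shipped binary. Here is a toy simulation; the "compilers" and mini-language are entirely made up for illustration:

```python
# Toy simulation of diverse double-compiling (Wheeler's defense against
# "Trusting Trust" attacks). A "compiler" here maps source text to
# "binary" text; everything is a made-up mini-language for illustration.

COMPILER_SOURCE = "compile: uppercase"   # published, audited source

def trusted_compiler(source):
    """Independent second compiler we bootstrapped ourselves."""
    return source.upper()

def run_binary(binary):
    """Execute a toy compiler 'binary'. A backdoored binary re-inserts
    its backdoor whenever it recognizes it is compiling a compiler."""
    def compile_(source):
        out = source.upper()
        if "+BACKDOOR" in binary and "COMPILE:" in out:
            out += " +BACKDOOR"
        return out
    return compile_

# The vendor ships a binary built by a trojaned toolchain:
suspect_binary = "COMPILE: UPPERCASE +BACKDOOR"

# Self-compiling with the suspect binary regenerates it exactly, so the
# trojan is invisible to a naive rebuild-and-compare with itself:
assert run_binary(suspect_binary)(COMPILER_SOURCE) == suspect_binary

# DDC: build the source with the trusted compiler, then rebuild the
# source with that stage-1 result. An honest chain converges on a clean
# binary that differs from the shipped one, exposing the trojan.
stage1 = trusted_compiler(COMPILER_SOURCE)
stage2 = run_binary(stage1)(COMPILER_SOURCE)
assert stage2 != suspect_binary   # mismatch reveals the backdoor
```

The point is that the comparison requires running code, not reading disassembly line by line.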
No, the fact that others can compare things means there's still some improvement in security. Even if it's not actually done, there's a greater threat of it being done, so it's a less tempting target.
To be sure, absent the measures you describe the effect is significantly weaker than with them, and other effects may dominate.
Yes, because nothing can ever improve security unless it is 100%.
Using FLOSS, alone, is a marginal improvement in security, other things being equal. It can be combined with still other measures to make a bigger difference.
ARGH NO ARGH. First, conventional DLP systems also have suspect parameters.
Which parameters are those? For a classic DH run, there are just two parameters: a prime p, and a generator g. Normally you generate these yourself. Any safe prime is good as a p, and g is usually 2 or 5.
Contrast this to ECC where generation of parameters (the "curve") is so complex that you never do this yourself. You use one of the published curves, for example the ones specified by NIST.
So, I can't judge if ECC was "tweaked" by the NSA or not, but the original statement is true that ECC contains more "magic numbers" than DH, which contains none.
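For reference, a classic DH run with self-chosen parameters really does involve only p and g. A toy sketch with textbook-sized numbers (insecure, illustration only; real groups are 2048 bits or more):

```python
import secrets

# Classic finite-field Diffie-Hellman with self-chosen parameters: just
# a safe prime p and a generator g. Textbook-sized numbers, insecure,
# purely for illustration.
p, g = 23, 5                               # 23 = 2*11 + 1 is a safe prime

a = secrets.randbelow(p - 3) + 2           # Alice's secret exponent
b = secrets.randbelow(p - 3) + 2           # Bob's secret exponent

A = pow(g, a, p)                           # Alice sends A to Bob
B = pow(g, b, p)                           # Bob sends B to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob          # both hold g^(a*b) mod p
```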
I believe the point is that with the magic numbers provided by NIST, you are trusting that those numbers have no hidden properties that disclose the content you're encrypting to the party that produced those numbers for the specification.
(tin-foil speculation) It isn't known whether the NSA already has methods by which encryption with this specified set is more easily broken, or is already broken.
Systems that generate their own DH parameters have to negotiate them, which adds mechanism and thus potential vulnerabilities. NIST publishes standard groups for systems to use to avoid that problem.
I'm certainly not claiming that conventional DLP is more complicated than ECC, though.
Does NIST actually publish standard DL groups? All I have seen is instructions on how to generate them, i.e., SP800-56A.
NIST publishes standard curves because the instruction list to generate them would require describing Schoof's algorithm to count points, among other things.
To which "constants" is he referring? Is he suggesting not using the NIST curves, which were themselves selected to avoid a different class of curve attack? OK, then don't use those curves.
For end users, this is rather difficult. I maintain a Jetty web server, which is based on Java. The SSL/TLS implementation relies on the cipher suites provided by the JVM. I can restrict Jetty to using ECC as preferred cipher suites but not choose the curves. I assume this is also the case for many other web servers out there.
On the other hand it is easy to avoid ECC altogether in this scenario.
> If you were going to change one thing about the way you encrypted in the wake of today's "new information", make it be that you abandon RSA.
For the purposes of exchanging symmetric keys, what would you recommend as a replacement for RSA when configuring various services (tls-based servers, ssh, etc...) that rely on key exchange?
(Note I'm speaking with only cursory knowledge of security -- which boils down to "use RSA 4096 or the like for key-exchange, use well known open sourced and audited implementations of TLS", etc...)
The premise of his statement is invalid. It's "not even wrong." There are NIST ECC "constants" you might want to avoid, but there are other parameters you can use instead.
Perhaps. But the problem is that we don't actually know any specific way that NIST could have selected malicious ECC constants. So the problem is that whatever bad property they might have selected for may also exist in your randomly selected ones.
We actually know quite a bit about how NIST generated the random curves, since the methodology (which is based on hashing strings) is in the document that defines the curves.
Please reread my post, you seem to have misparsed it. I was not saying that we don't know how they were generated, you can see that I was posting otherwise at the same time elsewhere.
In theory, could NIST, with NSA's help, have used the "parallel construction" approach and generated "bad" parameters using a supposedly benign process?
> Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
When I read this, I realized that I can't recall Schneier ever saying anything positive about switching to ECC. (Although that's more or less off the top of my head.)
Along those lines, he had a recent post[1] regarding the Blackhat Cryptopocalypse presentation (that you helped with ^D^D^D^D^D^D^D^D^D^D^D were credited on) where he spoke optimistically about the ability of classic number-theoretic ciphers to choose key lengths that out-pace mathematical breakthroughs.
Tor is really shitty against the NSA. Tor offers almost no protection whatsoever against a globally passive adversary who can see all packets going into and exiting the Tor network.
Not any better from an anonymity standpoint currently, I think. The hidden service's server still runs traffic from itself to a Tor entry node, through the network, and then back out via an exit node. So if the NSA can observe all entry and exit node traffic timings, they can figure out who is talking to the hidden service and where it is.
The hidden service is a tor node itself and there are no exit nodes involved. As long as there is other traffic passing through the nodes involved an observer can't tell which traffic is for a HS. Maybe :)
I thought that was the case as well; however, the documentation [0][1] I read indicated otherwise. I didn't do an exhaustive search after reading those, however, so it's possible. If so, I'm surprised. I'd expect the documentation to make it explicitly clear, since there are legal and bandwidth implications to acting as a Tor node (though for non-exit nodes, the legal ones are small).
Regardless, even if the hidden service is a node itself, a passive observer can still do traffic correlation attacks. It just requires more resources.
I believe the biggest weakness with hidden services (besides the regular "endpoint" issues) is that they use 1024-bit RSA keys which, as of recently at least, have come under suspicion.
The keylength is a "known weakness" to the developers and, while the plan is to eventually shift to longer keylengths, I don't believe there is any goal or deadline for that to happen.
"Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it's explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program." -Bruce Schneier
So when Rasmus Lerdorf checked in a change to PHP that broke crypt(), and then made a release without bothering to run the tests (he claimed that "This is mostly because we have too many test failures which is primarily caused by us adding tests for bug reports before actually fixing the bug."), was that actually because he was working for the NSA to install a giant backdoor in PHP, and not just completely incompetent and totally negligent? https://plus.google.com/113641248237520845183/posts/g68d9RvR...
"We have things like protected properties. We have abstract methods. We have all this stuff that your computer science teacher told you you should be using. I don't care about this crap at all." -Rasmus Lerdorf
"I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "Yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I’ll just restart Apache every 10 requests." -Rasmus Lerdorf
Since the bug (if I understand it) would prevent anyone with an existing database from upgrading because then nobody could login, it doesn't seem like a particularly effective undetectable backdoor.
Maybe it is some very elaborate ruse by the PHP team to create regression bugs that break frequently used functions badly so when they slipped in the NSA bug it would go undetected but I doubt that. I lost count of how many times a PHP stable release introduced a massive regression bug.
"For all the folks getting excited about my quotes. Here is another - Yes, I am a terrible coder, but I am probably still better than you :)" -Rasmus Lerdorf
Basically, you can't avoid being surveilled without radically altering your lifestyle. If you did, then it'd be something like:
0. Don't use a cell phone.
1. Don't use Google.
2. Don't use Skype or any other VOIP or telephone service.
3. Don't use social networks.
4. Don't use electronic money, including the bank account you are presently being paid in to.
5. Don't use individually booked international flights or ships.
6. Don't use email.
7. Don't communicate regularly with the same set of people. If you must communicate, do it either using steganography or in brief and without revealing any identifying information (spelling, voice, writing style, etc.)
"Since I started working with Snowden's documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I'm not going to write about."
My take on that is that the author does not trust all the things he advises us to use, since he relies on other things he is not going to tell us about. Which means he is safe(r); we are not, and won't be, since he won't share.
Great. What use is all that then? None.
BTW, just read that O2 UK are blocking VPN traffic...
He is giving suggestions for improving your security, not for perfect security. And one additional way you can improve your security is to not let a potential adversary know all the steps you take - Schneier is high profile, and with an admission that he's worked on the Snowden documents, even more so; there's every reason for him to assume that if he wasn't a direct NSA target before, he is now. It's a trade-off.
> Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it's explained away as a mistake.
You have to wonder about the Android SecureRandom weakness just discussed in recent weeks:
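The "less random" failure mode is easy to demonstrate: if a generator is seeded with too little entropy, an attacker who knows the construction simply enumerates every seed. A toy sketch (the 16-bit seed is a made-up worst case, not the actual Android flaw):

```python
import random
import secrets

# Made-up worst case: a key generator seeded with only 16 bits of
# entropy. This illustrates the failure mode, not the actual Android
# SecureRandom bug. An attacker who knows the construction recovers
# the key by enumerating every possible seed.
def weak_keygen():
    seed = secrets.randbelow(2**16)        # far too little entropy
    return random.Random(seed).getrandbits(128)

def brute_force(target_key):
    for seed in range(2**16):
        if random.Random(seed).getrandbits(128) == target_key:
            return seed
    return None

key = weak_keygen()
assert brute_force(key) is not None        # the whole seed space is searchable
```

A 128-bit key generated this way has only 65,536 possible values, so it falls in well under a second.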
"I bought a new computer that has never been connected to the internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my internet computer, using a USB stick."
Wasn't this one transfer mechanism for stuxnet? Once a computer is infected, you have to at least suspect that anything that touches it directly or indirectly is infected. It's like an STD.
It looks like you can buy DVDs in bulk for between 10c and 20c per disc. If you're really serious about protecting your air-gapped computer, this is a very reasonable price to pay.
Sure, so is using a rewriteable DVD, but neither is much more secure than formatting thumb drives before mounting them. Convenience is the only argument, and it is not all that hard to set things up so that you always format thumb drives that are plugged in.
A thumb drive is, or can be, a tiny computer that can lie about its contents depending on what's asking. I don't see a particular exploit in Schneier's case, but a thumb drive could be made that behaved differently depending on whether a Mac or Windows PC was trying to talk to it.
APT in a flash drive is just a cute trick. Just because you wrote zeros over it doesn't mean that the APT thumb drive firmware won't give you a trojan the next time you read the right kind of file off of it.
Or maybe it waits a week before it starts injecting binary, just to wait out your test and validation procedures...
I may have misread. Did Bruce Schneier just admit that he has Snowden documents on an air-gapped computer, likely sited at his place of residence? If he is a "US person reasonably believed to be inside the US", then I imagine that could put him on the list for a 2am wakeup call.
Am I missing something? What motive would the US have for raiding his house?
It's not like the US is desperate to know what's in the documents. They already know what's in them. They're the US's own documents. They just want to stop them being published. Raiding Schneier wouldn't do anything to help with that, it'd just mean he has to go over to the Guardian's offices to do the analyzing - it's not like the copies he has are the only ones.
The Guardian has already confirmed that they have copies at at least their New York and Brazil offices, and you can bet they have plans in case those are raided. (Last resort: they're almost certainly part of the August 2013 WikiLeaks insurance release, which someone just needs to tweet the password for if somehow all the Guardian's copies are simultaneously destroyed.)
The US actually does not know the specific documents that Snowden took, or how much he took, because he was able to bypass their security. This is suspected to be the reason that Miranda was detained in the UK.
I'm imagining Schneier saying some of these things and then seriously monitoring his operations as something of a testing honeypot. Wouldn't surprise me if there was one somewhat secreted machine at his house that was rigged with triggers, and anything he was really doing with the docs and The Guardian was elsewhere and more heavily secured.
Why use symmetric cryptography over public-key? I thought RSA was theoretically secure as long as factoring sufficiently large prime numbers is impossible.
First, RSA's hardness assumption is about factoring large composites (products of two primes), not primes themselves. Second, factoring those composites at popular key sizes isn't impossible.
Third, public-key algorithms are much harder to get right; they involve direct mathematical operations on plaintexts and devolve to well-studied math problems much more readily than symmetric ciphers do.
You should absolutely avoid public-key crypto, including public-key key agreement schemes like Diffie-Hellman, if your needs don't absolutely require them.
You left out speed. The reason we do not use PKE exclusively is that we do not want to wait a week to encrypt a megabyte of plaintext. In fact, it is not so much that PKE itself is slow; rather, it is that theoretical constructions (which are the only thing we have for PKE, but which also exist for symmetric ciphers, hash functions, etc.) are slow.
Also, while PKE is sensitive to implementation issues and parameter choices, you can have much higher assurance that there are no theoretical weaknesses than you can have with something like AES. With PKE you usually have a proof that the security of the system depends on the hardness of one problem, regardless of the specific attack strategy the adversary uses. We do not say that ElGamal is secure against chosen plaintext attacks because we ran a battery of tests designed to detect vulnerabilities to particular CPA strategies; we say it is secure because we can prove that any CPA attack on ElGamal can be used to solve the DLOG problem, and so we only really have to worry about the lower bound on solutions to one problem. With something like AES we only test for certain attack strategies and a few general heuristics that suggest a block cipher is secure.
This is not to say that AES is not secure, nor that PKE is a magic bullet.
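As a concrete instance of the kind of scheme those security proofs talk about, here is textbook ElGamal over a toy group (parameters are insecure and purely illustrative):

```python
import secrets

# Textbook ElGamal over a tiny group. The parameters are insecure; this
# only illustrates the structure the proof-of-security argument is about.
p, g = 23, 5
x = secrets.randbelow(p - 3) + 2           # private key
h = pow(g, x, p)                           # public key

def encrypt(m):
    k = secrets.randbelow(p - 3) + 2       # fresh ephemeral randomness
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    s = pow(c1, x, p)                      # shared mask g^(x*k)
    return (c2 * pow(s, p - 2, p)) % p     # multiply by s^-1 (Fermat)

m = 7
assert decrypt(*encrypt(m)) == m
```

The CPA-security argument mentioned above reduces distinguishing these ciphertexts to solving the discrete log problem in the group.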
Besides, symmetric schemes tend to perform better. Hence the idea of using a public-key scheme like DH for key exchange and then switch to symmetric crypto like AES.
Always avoiding public-key isn't really possible unless you have some sort of secure channel to agree on a key for some communication. But then again, if you have that channel, you might just as well communicate through that.
When encrypting files locally though, there is absolutely no reason I can think of to use public-key cryptography.
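The DH-then-symmetric idea mentioned above can be sketched as follows. The group is toy-sized, and the SHA-256 keystream is a stand-in for a real cipher like AES-GCM; everything here is illustrative, not deployable:

```python
import hashlib
import secrets

# Hybrid sketch: agree on a shared secret with (toy) DH, hash it into a
# symmetric key, then encrypt bulk data symmetrically. The XOR keystream
# is a stand-in for a real authenticated cipher; parameters are insecure.
p, g = 23, 5
a = secrets.randbelow(p - 3) + 2           # Alice's ephemeral secret
b = secrets.randbelow(p - 3) + 2           # Bob's ephemeral secret
shared = pow(pow(g, b, p), a, p)           # == pow(pow(g, a, p), b, p)
key = hashlib.sha256(str(shared).encode()).digest()

def stream_xor(key, data):
    # Derive a keystream by hashing key || counter, then XOR with data.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

msg = b"the bulk of the traffic goes through the symmetric cipher"
ciphertext = stream_xor(key, msg)
assert stream_xor(key, ciphertext) == msg  # XOR stream is its own inverse
```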
> You should absolutely avoid public-key crypto, including public-key key agreement schemes like Diffie-Hellman, if your needs don't absolutely require them.
Is there an alternative to public-key crypto? We all need to do stuff online.
This works if physical contact is available and parties are trusted (even the weakest literal reading of fourth amendment says "right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures," -- exchanging a key token counts as such).
Yet what of basic online communications where key-exchange is necessary and where we have very little say about the protocols involved?
You probably know what I was getting at. Public-key crypto is very important in today's world. If we can't use the current system, we need something to replace it. Pre-shared keys is for a different scenario.
You are missing both Schneier's point and mine. The point is that if you can contrive of a way to (perhaps inconveniently) pre-share static keys, you should consider doing that instead of relying on number theory to protect your secrets.
> The point is that if you can contrive of a way to (perhaps inconveniently) pre-share static keys, you should consider doing that instead of relying on number theory to protect your secrets.
No I got that just fine. As you know, I'm suggesting that not using public-key crypto or something else for the same purpose is impractical.
Nerd nit, but it's not like AES is based on some wholesome certified-organic-granola theory to the exclusion of number theory. Number theory didn't wait around 2+ millennia to be useful only to be dissed like that, man!
And you are conveniently missing his point. If we were to throw away public-key technology today and say 'only pre-shared keys', some significant portion of the internet would simply be unencrypted. What do we do about that?
> We could all start swapping 1TB drives full of random noise.
This is "sort of" what I've come to think Miranda was facilitating for Greenwald. It makes little sense that the purpose was mainly for him to carry documents: while the documents would be more secure if he was not stopped than if they were transferred via the internet, they would have had to consider the possibility that he might be.
But if you pass along a lot of random data, then if it never leaves your possession, you can be reasonably sure that that data can be safely used as a source for one-time pads or keys. If it gets intercepted, then so what? You just ignore that batch of data even if it is handed back to you.
This would be doubly interesting, as it would mean the key he was carrying for the one file would be a smoke screen of sorts.
Of course, with UK law saying they can detain you indefinitely until you hand over your keys if they think a file is encrypted, that might not be a good move... though obviously in retrospect they didn't make use of that here.
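The pre-shared-randomness scheme described above amounts to a one-time pad. A minimal sketch (pad size and message are illustrative):

```python
import secrets

# One-time-pad sketch: with a pre-shared pad of truly random bytes
# (e.g. couriered on a drive that never left your possession),
# encryption is XOR, and each pad byte must be used exactly once and
# then destroyed.
def xor_bytes(data, pad):
    assert len(pad) >= len(data), "pad exhausted; never reuse pad bytes"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(64)              # pre-shared random material
msg = b"meet at the usual place"
ciphertext = xor_bytes(msg, pad)
assert xor_bytes(ciphertext, pad) == msg   # decryption is the same XOR
```

The information-theoretic security holds only if the pad is truly random, never reused, and never intercepted, which is exactly why the courier step matters.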
You probably didn't mean it that way but I feel like fixing that. Prime numbers have no factors except for 1 and themselves. That's the definition of a prime number. What you're thinking about are composites which are a product of two large prime numbers.
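A toy illustration of the distinction, with numbers chosen tiny so the factoring is trivial (real moduli are hundreds of digits):

```python
# The RSA modulus n = p*q is a composite of two primes; the primes
# themselves have no non-trivial factors. Numbers here are toy-sized
# so trial division succeeds instantly.
p, q = 61, 53
n = p * q                                  # 3233, a toy RSA-style modulus

def trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                            # n itself is prime

assert trial_division(n) == (53, 61)       # recovers the prime factors
assert trial_division(61) == (61, 1)       # a prime doesn't factor
```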
Among the specific accomplishments for 2013, the NSA expects the program to obtain access to "data flowing through a hub for a major communications provider" and to a "major internet peer-to-peer voice and text communications system".
What could this be: "major internet peer-to-peer voice and text communications system" ?!
We already know Skype is compromised and it's not P2P anymore anyway. So what are they talking about? Which other major P2P service for voice and text is there?
"Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it's explained away as a mistake."
I have to wonder if, maybe in some instances, this is why it takes so long for some vendors to patch the vulnerabilities in their software. Maybe some of the problems we report were really there intentionally?
This is of course the right answer. And the only answer if you are actually planning something you know to be nefarious. It's going to be an interesting law of unintended consequences that now only the better criminals and terrorists will escape this dragnet.
I don't 'get it': For public key cryptography, the basic math is well known. Then what's needed is some code. The core code for just the en/decryption is quite short. So, just write the core code yourself, directly from the math and/or open source code -- be 100% sure you understand each statement of code you write.

Then turn the en/decryption code into just a simple command line program that reads a file and writes a file -- no opportunity for the spooks to corrupt that code either.

Then call it done. So, for e-mail, type some text into a simple file, wash it through your own command line encryption software, get out the file of simple text of the base 64 of the encryption, pull that text into the e-mail program, and send it.

I'm not 'getting it' on just where the security vulnerability is here. Again, the crucial point is that the core en/decryption code is darned short, quite close to the well known math, and can be checked against some open source code.

Then, if everyone writes their own code in this way and discovers that they are 'interoperable', then everyone knows that they did good work even if they never actually share code.
Right, so you go on Wikipedia and implement RSA. You understand every single thing you do.
Oops. You forgot padding. Because of some strange identity that applies to the simple math you just implemented straight from the textbook, all your encrypted data can be trivially decrypted.
Or even if you get the algorithm correct, maybe you forgot to handle all the memory the key touches specially, and it's swapped to disk. Or maybe your implementation is vulnerable to timing attacks or something more obscure.
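One concrete pitfall alluded to here: unpadded "textbook" RSA is multiplicatively homomorphic, so ciphertexts can be mauled without the key. A toy demonstration (key sizes are absurdly small, for illustration only):

```python
# Textbook RSA with toy parameters, straight from the math and with no
# padding. Absurdly small numbers, illustration only.
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                        # private exponent (Python 3.8+)

m1, m2 = 4, 6
c1, c2 = pow(m1, e, n), pow(m2, e, n)

# With no padding, an attacker can multiply ciphertexts and obtain a
# valid encryption of m1*m2 without ever touching the private key:
forged = (c1 * c2) % n
assert pow(forged, d, n) == (m1 * m2) % n
```

Padding schemes like OAEP exist precisely to destroy this algebraic structure, which is why "implement it yourself from the math" goes wrong.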
Just trying to send some text via e-mail, so swaps to disk and/or timing attacks should not work. That is, I'm not trying to encrypt streams of voice or video, and I do assume that the computer used for the de/encryption is 'secure'.
The problem with it being swapped to disk is that it means the decrypted form of your key (you're storing it encrypted by a passphrase, right?) is now persisting while your computer is off, which exposes it to more threats (someone images your disk when you take your computer in for repair).
Timing attacks would certainly be harder if you're never signing anything in a situation the attacker controls, but I'm leery about claiming nothing could be done.
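Passphrase protection of an at-rest key typically means deriving a key-wrapping key with a slow KDF. A minimal stdlib sketch using PBKDF2 (the passphrase, salt, and iteration count are illustrative):

```python
import hashlib
import secrets

# Sketch: deriving a key-wrapping key from a passphrase with PBKDF2
# (Python stdlib). The derived key would then encrypt the on-disk
# private key; passphrase, salt, and iteration count are illustrative.
salt = secrets.token_bytes(16)             # stored alongside the key file
wrap_key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)

assert len(wrap_key) == 32
# The same passphrase and salt always re-derive the same wrapping key:
assert wrap_key == hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)
```

The high iteration count is the point: it makes each guess against a stolen key file expensive, which is exactly what a plaintext key on disk lacks.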
> decrypted form of your key (you're storing it encrypted by a passphrase, right?)
Haven't yet implemented the little de/encryption command line program I described, so I don't know just how I'd store my private RSA key. The private key would likely be just on my computer some place as just ordinary data, maybe with a comment that clearly describes the data as my private key.
I'm not sure what you mean by a "passphrase", but I can guess; with my guess, no, I wouldn't do that because (1) it makes life harder for me and (2) it doesn't really make decrypting my data much more difficult for an attacker.
> when you take your computer in for repair
Right, if I lose physical control of my computer, then all or nearly all the data I encrypted can now be decrypted by others. So, right, for anyone who would lose physical control of his computer for any reason, the 'approach to computer security' I outlined would have a huge hole in it.

In my case, I would never take my computer in for repair, since I built and repair my own computer.
Please let's not mindlessly post this comic and act like, because rubber-hose cryptanalysis exists, we shouldn't even bother.
One huge point I keep trying to bring up to friends when I talk about security is: yes, if you are the specific target of the state or any asymmetrically powerful adversary, you are in a lot of trouble.
But with large-scale surveillance, the bigger concern is not becoming a target in the first place. Properly securing sensitive communications is a good first step in ensuring that you aren't picked up in a sweep through the data.
Sure, using encryption in the first place may flag you, but at least in the current situation it isn't going to be enough to invoke any rubber-hose techniques.
I propose that we do act like governments can detain you, until you reveal your password, because that does in fact happen.
I propose that people abandon the concept that they can archive their email forever, and instead use systems like a Mission Impossible Tape that self-destructs the instant you're done reading it.
I propose that the only way to stop the government from making you decipher your messages, is if it's impossible for you to decipher them.
Everyone should use Perfect Forward Secrecy systems.
I think you're right. But once you discuss evasion methods on HN, doesn't it flag you, me, and other participants automatically? Or have you been smart enough to buy an anonymous SIM card (in a country where it is still possible) and use it to register an anonymous email, then access HN via an anonymous cell line through an anonymizing VPN or Tor? And how often can you throw away SIMs/cells...
We do not live in an episode of 24, and a world where Jack Bauer exists. We live in reality. A reality where governments abuse mass surveillance to squash political movements all the time. If the NSA had the power they do today, back in the '60s, there would be no MLK Jr. or civil rights movement. It would be interesting to see what role the NSA played in the fizzle-out of the Occupy movement. My bet is, they had a prominent role in not letting it truly matter.
Quite. The actual trend in the US is towards greater freedom, on the whole (women in states with restrictive abortion laws may justifiably disagree). I find it ironic that Snowden has sought political asylum in a country where you can (as of very recently) be prosecuted for saying it's cool or natural to be gay.
Wouldn't it be relatively easy for the NSA to run ubiquitous MITM attacks using compromised certificates somewhere in the certificate chain? That way, all they need to do is compromise certificates and get in the middle at large scale (which could perhaps be automated) instead of breaking the encryption itself.
No, because the certificate-pinning features of browsers like Chrome would flag those MITM attacks, which would compromise sources & methods, which is anathema to the NSA.
I thought that the browser flags a MITM attack when the information that was sent from the supposed server doesn't agree with the server information received through a different channel (e.g., with the operating system).
If the NSA had compromised certificates on the operating system, how would Chrome detect that a MITM attack was being attempted?
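The cross-check being described is essentially certificate pinning: instead of trusting whatever chain the OS store validates, the client compares the presented certificate against a fingerprint it already knows. A minimal stdlib sketch (the certificate bytes here are placeholders, not real DER data):

```python
import hashlib

def matches_pin(der_cert_bytes: bytes, pinned_sha256_hex: str) -> bool:
    """Compare a presented certificate against a known-good fingerprint.

    If anyone substitutes a certificate signed by a different CA in the
    OS trust store, the chain may still validate -- but the fingerprint
    changes, and the pin check fails.
    """
    return hashlib.sha256(der_cert_bytes).hexdigest() == pinned_sha256_hex

# Placeholder bytes standing in for DER-encoded certificates.
real_cert = b"-- the cert the site actually uses --"
mitm_cert = b"-- a different cert, validly signed by a compromised CA --"

pin = hashlib.sha256(real_cert).hexdigest()   # shipped with the client
assert matches_pin(real_cert, pin)            # genuine server: passes
assert not matches_pin(mitm_cert, pin)        # substituted cert: flagged
```

This is why pinning defeats a compromised OS certificate store: the pin ships with the browser binary, out of band from the trust store the attacker tampered with — though, as the replies below note, that only helps if you trust the binary.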
I wouldn't trust Chrome's extra methods in this case. Nothing against Google whatsoever, but closed source, commercial software from a company the NSA has expressed a great deal of interest in fooling around with is probably the worst possible choice in this case. Same with Safari and IE.
Google and other corporations still represent a huge risk in that they can be compelled to degrade or modify the feature, and since all you get is a binary blob, you have no way of knowing.
The security provided by the feature, in this case, is then questionable. I'd trust Firefox and Chromium, but not Chrome for this reason.
You can't remain secure against an organization that has effectively unlimited resources. All you can do is make it harder for them -- and that is most easily accomplished by non-technical means rather than a more advanced version of existing technical means.
It's pretty funny that SilentCircle's website preaches privacy and information hiding. Yet their site is also littered with trackers for analytics, logging what OS you're using, what hardware, etc.
Most security researchers use Windows because that's where the 'action is' so to speak. If he wasn't running Windows, he'd be firing up a Windows VM more often than not.
Seems not unreasonable for work. But there is also a level at which they can't see much of what's happening, because they simply don't have the source code, right? So for the home office it would make a lot of sense to use Linux.
For now, we need to stop thinking and talking like this. This race is lost. The government is ready to record everything, but we are not ready to encrypt everything.
It's not enough to protect yourself, even if you could. You are not OK even if you manage to protect yourself, because your friends and family and loved ones and colleagues are mostly, if not all, compromised.
The long term goal should still be for everything to be encrypted. But the near-term goal should be to utterly dismantle the NSA. It should all be taken apart, brick from brick, and destroyed. The people who put it into place should be removed from power and prosecuted.
If data is easy to eavesdrop, then somebody will. This is also an engineering problem. We've got broadband, we've got capable CPUs, there's no excuse for not encrypting everything.
And if they do suspect you of something, they will waterboard you in a country with no human rights to stop the "ticking bomb" they use to excuse themselves. Not sure how long I'd last, TBH. If you are lucky, you'll go to jail.
That's the way it is, and will be until people wake up and change politics.
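To put numbers on the "capable CPUs" point above: even in pure Python, a single core pushes hundreds of MB/s through a cryptographic primitive. A rough stdlib measurement (SHA-256 standing in for symmetric-cipher cost; hardware AES is faster still):

```python
import hashlib, time

buf = b"\x00" * (1 << 20)   # 1 MiB of data
rounds = 64                  # process 64 MiB in total

start = time.perf_counter()
h = hashlib.sha256()
for _ in range(rounds):
    h.update(buf)            # one crypto-primitive pass per MiB
elapsed = time.perf_counter() - start

mb_per_s = rounds / elapsed
print(f"~{mb_per_s:.0f} MiB/s on one core")
# With AES-NI hardware support, bulk encryption runs at gigabytes per
# second -- CPU overhead is no excuse for sending traffic in the clear.
```

Even this unoptimized loop comfortably outpaces a typical broadband link, which is the engineering argument for encrypting everything by default.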
You can't close Pandora's box. Not only is dismantling the NSA almost hopelessly unfeasible; the technology has already been proven to exist. Much like you can't stop the invention of calculus by killing Newton, you won't kill the methodologies of cryptanalysis by destroying the NSA. That information is now within the grasp of the UK, China, Russia, and Security Firm X, and they are all the more incentivised to discover it now that it has been proven possible.
We should aim to plug the holes now. The race isn't over just because another runner got a little bit ahead.
It isn't a secure product, but it's a crypto product with decent UX that isn't from a huge compromised company such as Apple or Google, which makes it rare in this space. It all depends on your adversary. For family members, stalkers, and most employers, it's probably great.
Well, it's more a social design. Google probably has a record of all of your emails, deleted or not, somewhere, and a court order can get those emails. Cryptocat will not. Gmail is also very integrated into most people's devices, so it's easier to get at someone's email records by getting at their computer, phone, cookies, etc.
SKETCHY ADVICE. Tor might make targeted attacks on you personally take more work, but because Tor attempts to provide anonymity and not confidentiality or integrity, it may make you more susceptible to dragnet surveillance.
2) Encrypt your communications. Use TLS. Use IPsec.
Great advice. But prefer TLS to IPsec, which is largely the province of commercial systems and the product of much more vendor-centric standardization.
3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn't.
Sure, I guess.
4) Be suspicious of commercial encryption software, especially from large vendors.
Absolutely, great advice.
5) Try to use public-domain encryption that has to be compatible with other implementations.
This starts out great. This sentence is great. It continues:
Prefer symmetric cryptography over public-key cryptography.
Dead on, absolutely. Number-theoretic public-key crypto is terrifying. But then:
Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
ARGH NO ARGH. First, conventional DLP systems also have suspect parameters. Second, the best ECC systems have parameters of well-known provenance. Third, ECC systems are known to be stronger than DLP systems. In fact, the most likely source of any claim about an NSA cryptographic breakthrough is that NSA has a viable attack on RSA-1024 (an IFP system, but six of one).
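For context on "stronger": the best known attacks on ECC scale far worse for the attacker than index-calculus attacks on RSA and classic DLP, so equivalent security needs much smaller ECC keys. The widely cited NIST SP 800-57 comparable-strength figures sketch the gap:

```python
# Comparable key sizes at equal estimated security levels, per the
# NIST SP 800-57 key-strength tables.
comparable = {
    # security bits : (RSA / classic DH modulus bits, ECC key bits)
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 512),
}

for sec, (ifp_dlp, ecc) in sorted(comparable.items()):
    print(f"{sec:3d}-bit security: RSA/DH {ifp_dlp:5d} bits vs ECC {ecc:3d} bits")

# RSA-1024 sits at roughly the 80-bit level -- exactly the kind of
# margin a well-funded agency could plausibly have eroded.
```

ECC's modulus advantage grows as the security level rises, which is why "prefer DLP over ECC" gets the comparison backwards.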
If you were going to change one thing about the way you encrypt in the wake of today's "new information", make it be that you abandon RSA.