
>Also, open-source is of course safer and some of us have been repeating for years that you can't trust binary blobs, security being just one reason out of many.

Only if you're auditing the code and building it yourself, or have a trusted place where that's done. I would wager that the vast majority of users are just using the binary blob provided by someone else. Otherwise, the "code" you're using could be as compromised as any closed-source program.



Again with the "vast majority of users" fallacy? Bring up a story about grandma for the full picture too.

Binary blobs can easily be checked against the source code they were supposedly built from. Major Linux distributions, like Debian, have people all over the world auditing their repositories. If a backdoor is planted, the repository being public, most projects have a full history of who added what and when. Code reviews happen too.
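The check described above reduces to comparing digests: the project publishes a hash of the blob (Debian signs Release files that chain down to per-package hashes), and you verify your download against it. A minimal sketch, with a made-up blob and a simulated "published" digest standing in for the real signed metadata:

```python
# Sketch: verify a downloaded binary blob against a published checksum.
# The blob and the "published" digest here are fabricated for illustration;
# a real check would fetch the digest from signed distribution metadata.
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large blobs need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a downloaded blob and the digest the project published for it.
blob = b"pretend this is a compiled package\n"
published_digest = hashlib.sha256(blob).hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(blob)
    path = f.name

try:
    ok = sha256_of(path) == published_digest
    print("checksum matches:", ok)
finally:
    os.unlink(path)
```

Of course, this only proves the blob matches what was published; reproducible builds are what tie the published digest back to the source itself.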


You're right in that this mostly foils the NSA in its role as a global passive attacker. However, having a FOSS OS doesn't foil active, targeted attacks at all: the version of a package in the repo could be perfectly safe, while the version distributed to just your computer could be backdoored. (The NSA, after all, can FISA-order someone to give up their package signing key just as well as they can grab a decryption key, and then MITM just your connection to deliver an "automatic security update" that nobody else got.)


> ...while the version distributed to just your computer *could be backdoored.*

They would have to get the base distribution signing key, which is probably a shared secret that requires multiple people to unlock.

Have there been any hints of NSLs or FISC orders requiring the disclosure of private Linux distribution keys, some of which may not even be held in the US?
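The "multiple people to unlock" idea above is typically a k-of-n threshold scheme such as Shamir's secret sharing: no single keyholder can reconstruct the signing key, but any quorum can. A toy 2-of-3 split over a prime field (a real distribution would use an HSM or a vetted library, not thirty lines of Python; the prime and secret are illustrative only):

```python
# Toy Shamir secret sharing: split a secret into 3 shares, any 2 recover it.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret, k=2, n=3):
    """Split `secret` into n shares; any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = split(secret)
print(combine(shares[:2]) == secret)  # any two shares suffice
```

Under such a scheme, an NSL served on one keyholder yields a single useless share, which is precisely the point.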


The funny thing is, you don't actually need to replace a package that's part of the base distribution. Every package format we have (RPM, DEB, etc.) runs its install- and upgrade-scripts as root, so every package can potentially do anything on install, including patching the installed files of other packages (or cleverer things, like adding SSL trust roots.)

So, since they don't specifically need to patch your openssl package to compromise your openssl library, they could just as well suborn the signing key of (presuming Ubuntu) some PPA author, as long as the user they're trying to bug has that PPA in their sources.list.
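The maintainer-script point above can be made concrete with a toy "package manager" that, like dpkg or rpm, unpacks a package's files and then runs its postinst hook with full privileges. The hook freely patches a file owned by a different package; all package names and contents here are made up:

```python
# Sketch: install scripts run unconfined, so any package can patch any file.
import os
import tempfile

def install(root, package):
    """Unpack the package's files, then run its postinst hook unconfined."""
    for rel, content in package["files"].items():
        with open(os.path.join(root, rel), "w") as f:
            f.write(content)
    if package.get("postinst"):
        package["postinst"](root)  # nothing limits this to its own files

with tempfile.TemporaryDirectory() as root:
    openssl_pkg = {"files": {"libssl.so": "genuine library"}}
    install(root, openssl_pkg)

    def sneaky_postinst(root):
        # Patch another package's installed file, as the comment describes.
        with open(os.path.join(root, "libssl.so"), "w") as f:
            f.write("backdoored library")

    innocuous_pkg = {"files": {"tool": "harmless binary"},
                     "postinst": sneaky_postinst}
    install(root, innocuous_pkg)

    with open(os.path.join(root, "libssl.so")) as f:
        patched = f.read()
    print(patched)  # backdoored library
```

The real-world equivalent is a few lines in a DEBIAN/postinst shell script, which dpkg runs as root after unpacking.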


Solaris introduced some concepts in its IPS packaging to prevent packages from changing the rest of the system. IMHO it is very good, but it adds some pain for developers and sysadmins alike.

http://www.oracle.com/technetwork/server-storage/solaris11/t...

That coupled with system wide snapshots is a great tool to audit a system after a patch is applied.


Though I do use them, the PPA concept is kind of flawed, because there's no way of saying which packages you will accept from a particular PPA, and no way for a PPA signing key to limit the packages it covers.

Packaging systems do detect file collisions, however, and with something vaguely similar to checkinstall or fakeroot they could detect attempts to overwrite other packages' files in the pre- and post-install scripts.
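The detection idea above already half-exists: dpkg keeps per-package manifests of file digests (`/var/lib/dpkg/info/*.md5sums`, checkable with debsums or `dpkg -V`), so a post-install sweep can flag files that some other package's script patched. A minimal sketch, where the "installation" is a temp directory and the manifest is built by hand purely for illustration:

```python
# Sketch: record per-file digests at install time, flag later mismatches.
import hashlib
import os
import tempfile

def build_manifest(root):
    """Record a digest for every file under `root`, keyed by relative path."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                manifest[rel] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def verify(root, manifest):
    """Return the relative paths whose contents no longer match."""
    current = build_manifest(root)
    return sorted(p for p in manifest if current.get(p) != manifest[p])

with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "libfoo.so"), "wb") as f:
        f.write(b"original library code")
    manifest = build_manifest(root)

    # Simulate a hostile maintainer script patching the file in place.
    with open(os.path.join(root, "libfoo.so"), "wb") as f:
        f.write(b"patched library code")

    tampered = verify(root, manifest)
    print(tampered)  # ['libfoo.so']
```

The catch, of course, is that the manifest itself must live somewhere the maintainer script can't rewrite.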


The FBI did just this: it pushed a special update to bug a mob member's dumbphone, a relatively long time ago.


I love how this story keeps getting better with time.


Is that why it only took two years for a critical crypto bug to be discovered?


The Debian OpenSSL bug was and is highly embarrassing, yes, but the fact that the source was open was absolutely instrumental in its discovery. It's also difficult to find a better example of the right way to react to such a bug.


Certainly open source benefits only as far as the review it actually receives. The NSA could conceivably submit a patch for openssh to Debian that once again subtly breaks security.

On the other hand, auditing is possible, which is a hell of a lot better than the alternative; limiting their options to underhanded subversion is again better than the alternative; and to my knowledge nobody has ever actually caught a case of this happening (while we do know they have had the security of closed-source software subverted).


If the NSA really wanted to help improve security, it'd be nice if they'd publicly do some of that auditing...


And you're auditing the compiler code, which you bootstrapped yourself without using a possibly backdoored compiler along the way.

(I wonder if anyone's audited gcc/clang binaries installed in readily available linux/bsd/MacOSX distros to see if they're free of "Reflections on Trusting Trust" concerns?)


I don't know the answer to your question, but it's worth noting that "auditing the binaries" no longer has to mean looking through every line of disassembled machine code, as was originally thought:

http://www.dwheeler.com/trusting-trust


Thanks, great link (that's gonna kill my work productivity this afternoon…).


Yeah, it blew my mind when I first encountered it. The basic idea is looking at compilers as both data and deterministic (for the individual compiler) transformations of data.
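That "compilers as data plus deterministic transformations" view is what makes Wheeler's diverse double-compiling check work. A toy model, where a compiler "binary" is just a pair ("codegen it implements", "its own bits") and compilation is a deterministic function of the compiling codegen and the source (all names and "compilers" here are made up):

```python
# Toy model of diverse double-compiling: two untrusted-but-independent
# routes through A's source must converge on bit-identical output, which
# a clean vendor binary matches and a trojaned one cannot.
import hashlib

def compile_with(binary, source):
    """Run `binary` on `source`: output bits are fixed by the compiling
    binary's codegen; the output's behaviour by what the source says."""
    codegen, _bits = binary
    new_bits = hashlib.sha256((codegen + source).encode()).hexdigest()
    new_codegen = source.split(":", 1)[0]  # the source declares its codegen
    return (new_codegen, new_bits)

SRC_A = "A-style:source of compiler A, audited and clean"

trusted_1 = ("T1-style", "bits-of-T1")  # two independent trusted compilers
trusted_2 = ("T2-style", "bits-of-T2")

# DDC: compile A's source with each trusted compiler, then self-compile.
stage2_1 = compile_with(compile_with(trusted_1, SRC_A), SRC_A)
stage2_2 = compile_with(compile_with(trusted_2, SRC_A), SRC_A)
assert stage2_1 == stage2_2  # diverse routes converge bit-for-bit

# A clean vendor binary (A compiled by itself) matches stage 2...
clean_vendor = compile_with(("A-style", "whatever"), SRC_A)
print(stage2_1 == clean_vendor)      # True

# ...while a binary built by a trojaned compiler does not.
trojaned_vendor = compile_with(("A-style+trojan", "whatever"), SRC_A)
print(stage2_1 == trojaned_vendor)   # False
```

The stage-1 binaries differ bit-for-bit (different trusted codegens) yet are functionally equivalent, so stage 2 depends only on A's source, which is exactly the property the real technique exploits.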


No, the fact that others can compare things means there's still some improvement in security. Even if it's not actually done, there's a greater threat of it being done, so it's a less tempting target.

To be sure, absent the measures you describe the effect is significantly weaker than with them, and other effects may dominate.


Are you positive that the binary blob of the open source project you just downloaded is 100% from the code in its repo?

If you're not, then it absolutely is not an improvement. The attack vector has just shifted slightly.


Yes, because nothing can ever improve security unless it is 100%.

Using FLOSS alone is a marginal improvement in security, other things being equal. It can be combined with still other measures to make a bigger difference.



