
If your network is Man-In-The-Middle'd you are probably fucked in more than one way, and a backdoored PHP install script is the least of your worries.

It's no different from just downloading an installer and double-clicking it.

Providing verification methods through alternative channels is essential if you need to make sure everything is clean.



  > If your network is Man-In-The-Middle'd you are probably
  > fucked in more than one way, and a backdoored PHP install
  > script is the least of your worries.
Connecting to MITM'd networks is extremely common, and millions of people (even technical people) intentionally do so every day. If you've ever used a free Wifi hotspot with one of those click-through use agreements, you've used a MITM'd connection.

The only reasonable security stance towards the security of your local internet connection is to assume that it's got something malicious on the other end of it. That's why using HTTPS is so important.

  > It's no different from just downloading an installer and
  > double-clicking it.
Most people install applications via app stores nowadays, which verify signatures on the application before installing. HTTPS is widely used in non-app-store distribution systems.


There's a small difference.

With a normal download, you're likely to wait until it's done before invoking it.

With a pipe-to-interpreter, the interpreter (php/sh/etc.) may be interpreting code as it receives it, in chunks (a line, a buffer's worth, etc.).

The danger lies in the possibility of an unexpected pipe interruption (network or software) feeding the interpreter something that's technically runnable but logically broken.

Imagine "rm -rf /tmp/installer-data" coming down the pipe, but curl uses too much memory, the Linux OOM killer nukes it, and sh receives only "rm -rf /".
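To make that failure mode concrete, here's a safe sketch of the scenario (using echo as a stand-in for actually invoking rm, and head to simulate the transfer dying mid-stream; the file path is illustrative):

```shell
# The full script the server intends to send (one harmless line):
printf 'echo rm -rf /tmp/installer-data\n' > /tmp/full-installer.sh

# Simulate the connection dying after 13 bytes: sh receives only
# "echo rm -rf /" and, on hitting EOF, executes the truncated
# line anyway.
head -c 13 /tmp/full-installer.sh | sh
# prints: rm -rf /
```

Had the line started with a real rm instead of echo, the truncated command would have targeted / instead of /tmp/installer-data.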


With "curl -s" at least, curl won't send anything on to the next process until it's done receiving (all) the data.

_If_ the pipe breaks up the incoming data, I assume it's on one of these ~64 KB boundaries: http://unix.stackexchange.com/questions/11946/how-big-is-the...

http://stackoverflow.com/questions/4624071/pipe-buffer-size-...

In general, when I've seen this technique used in Ruby (RVM?, back in the day) and/or PHP (Composer recommends a similar curl install technique, but composer.phar is a digest-archive of sorts...), the installer code tends to be about a paragraph's worth of text.

All in all, I understand your point, but your example of rm -rf seems a little too contrived. In this case we were talking about PHP code: the chunked/piped code would still need to be valid PHP, which "rm -rf /" is not, as it's missing the tokens needed to form valid PHP expression(s).
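For what it's worth, that intuition is checkable: the PHP CLI compiles the whole input before executing any of it, so a mid-token truncation fails at parse time rather than half-running (a sketch, assuming a php binary on PATH; the unlink path is illustrative):

```shell
# A truncated PHP payload: the string literal is cut off mid-way.
printf '<?php unlink("/tmp/installer-' | php
echo "php exit status: $?"
# php reports a parse error and exits non-zero; nothing executes.
```

The caveat is that a truncation which happens to land on a statement boundary would still parse cleanly and run, so this protection is incidental, not guaranteed.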

It seems about as worrisome as worrying about Brownian motion flipping a bit in memory and catastrophically affecting my program's execution -- possible, but unlikely.


FWIW I used the word "pipe" very loosely to mean everything to the left of the interpreter: the unix pipe itself, but also the curl program, the OS feeding it, the network it's on, the remote server feeding it, etc.

There are several scenarios where that logical pipe can fail, and it doesn't have to happen cleanly on a 64 KB unix pipe buffer boundary: someone bouncing an office router and resetting all TCP connection states, a neighbor microwaving lunch and the WiFi radios croaking, a carrier having networking issues and connections timing out, the remote server restarting its web server after a software upgrade, etc.

I'm not sure curl (even with -s) will gracefully handle all of these cases defensively (doubly so if the HTTP server was sending back chunked encoding without explicitly specifying a Content-Length).

(anecdotally, since you mentioned flipping bits, it's possible but unlikely until you have to deal with it - http://mina.naguib.ca/blog/2012/10/22/the-little-ssh-that-so... )

At the end of the day, having not even considered what an active attacker could do but just what could naturally blow up a small percentage of the time, I'm inclined to say "that's bad, mmkay?" and try to persuade developers not to push this as a safe installation method. Even if they've triple-checked that the code they've written plus the interpreter cannot possibly do harm in that case, the signal we should be sending (primarily to end users) is that this may be an unsafe operation.


> curl won't send anything on to the next process until it's done receiving (all) the data.

Huh? Then how do you explain that I can curl | tar -zxvf gigabytes of files without consuming all of my RAM and swap?
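This is easy to observe locally: a pipe delivers data to the reader as it is written, and sh executes each complete line as it arrives rather than waiting for EOF (a minimal sketch, with sleep standing in for a slow network):

```shell
# The writer emits one command, pauses, then emits another.
# sh on the reading end runs "echo one" a full second before
# "echo two" is even written -- it does not buffer until EOF.
( echo 'echo one'; sleep 1; echo 'echo two' ) | sh
# prints: one
#         two  (one second later)
```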


> ... OOM killer nukes it and sh receives only "rm -rf /"

If curl was SIGKILLed, wouldn't that result in a "broken pipe", therefore causing the shell to abort? The only way the "rm -rf /" would be executed would be if the pipe shut down cleanly and the shell saw an EOF (since obviously there would not be a newline in the scenario you described).


The classical "broken pipe" typically occurs when a process is writing to a closed pipe, not reading from one (a reader usually just gets a vanilla EOF).
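That distinction can be demonstrated: SIGPIPE hits a process writing into a closed pipe, whereas a reader whose writer dies (even via SIGKILL) simply sees EOF and acts on whatever complete input it already received (a sketch; the echoed command is arbitrary):

```shell
# The writer sends one complete line, then SIGKILLs itself.
# The reading sh gets that line, then EOF -- no error, no abort --
# and executes what it received.
sh -c 'echo "echo survived"; kill -9 $$' | sh
# prints: survived
```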

See my longer reply to tenken for some other scenarios I could think of.


You have a good point.

It's good practice to download, verify checksum/cert, then run.



