Hacker News | SaltwaterC's comments


How many actual users suspect that something is wrong with the input, even without URL obfuscation? OTOH, with a persistent XSS it is pretty much game over, though I doubt that's the case here. XSS can do a lot of damage if used properly.


Per CloudWatch stats, it varies by machine and region. We have an array of six machines across three regions receiving round-robin traffic from ELBs. It is supposedly network bound, but the CPU graphs say otherwise. Here's the output of three machines in three regions: http://i.imgur.com/nadGmDH.png (their siblings follow the same trend). The working set resides in RAM, and the disk and network graphs follow the same pattern. These are instance-store machines sharing as little as possible. Failing over a multi-AZ RDS for maintenance did not change the graphs.


They aren't "unsyncable" per se, but Google Drive has preferences. For example, I could not sync files from a stackable encrypted filesystem (EncFS or eCryptfs), files that go through Dropbox or SugarSync just fine. Same "error". I guess there's no money for Google in files it can't take a peek at.



I'd push back a little on "Apache is monolithic". If you know your elbow from your butt, you can put together a decent Apache setup, as any decent sysadmin should. The thing I hate most about nginx is the lack of DSO modules: when I need a new module, I need a new nginx build. A standard nginx build itself ships with more than you ask for. As an example, I counted the "with" and "without" flags in our nginx package build script: 3 "with" flags (SSL, gzip static, PCRE JIT) vs 14 "without" flags. And we could part with gzip static, since most static objects are pushed by CDNs now.
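
A sketch of what such a trimmed build might look like; the module names are stock nginx configure flags, but the exact "without" list here is illustrative, not the 14 flags from our actual build script:

  # enable only what we need, strip modules we never use
  ./configure \
    --with-http_ssl_module \
    --with-http_gzip_static_module \
    --with-pcre-jit \
    --without-http_ssi_module \
    --without-http_autoindex_module \
    --without-http_scgi_module \
    --without-http_uwsgi_module \
    --without-mail_pop3_module \
    --without-mail_imap_module \
    --without-mail_smtp_module
  make && make install

This is exactly the annoyance: adding one module later means re-running configure and rebuilding the whole binary.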

Apache is catching up with its evented MPM and proxy support, but I still wouldn't go back to Apache. The main point the OP should take away is that evented servers have much better memory usage than threaded or (cough) process-based servers under the same load.


I can give you some pointers, as we made this problem go away. Some data from one of our production virtual hosts:

  cat access.log.1 | wc -l
  1054423 # no static objects served here
  cat error.log.1 | grep timed
For PHP-FPM we use static pm and Unix domain sockets. This virtual host is fairly busy with some slow (~200 ms) requests, therefore it uses 96 processes per pool. Set listen.backlog = -1 in the PHP-FPM configuration to let the kernel decide the size of the actual connection backlog. The UDS backlog fills up faster than TCP's, and nginx starts responding with 502s. Throw net.core.somaxconn = 65535 somewhere in /etc/sysctl.d to raise the actual backlog, since even if you specify a high listen.backlog value, the effective value is truncated to SOMAXCONN.

A couple of years ago I wrote an article about stuff like this: http://www.saltwaterc.eu/nginx-php-fpm-for-high-loaded-websi... (shameless plug, I know, but you may get some useful info). As a side note, I am curious how nginx's backend persistence plays out. Our production still uses the same config as in 2011 because it isn't broken, but persistence may be more efficient.
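
A sketch of the relevant settings; the paths, pool name, and file names are illustrative, not our exact production layout:

  ; /etc/php-fpm.d/www.conf -- pool config
  [www]
  listen = /var/run/php-fpm.sock
  pm = static
  pm.max_children = 96
  ; -1 defers the backlog size to the kernel (still capped at SOMAXCONN)
  listen.backlog = -1

  # /etc/sysctl.d/10-somaxconn.conf -- raise the kernel cap itself
  net.core.somaxconn = 65535

Both knobs matter: raising listen.backlog alone does nothing if somaxconn still caps it at the default.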


I tried your suggestions (listen.backlog and somaxconn). Are they dependent on Unix sockets, or do I get the benefit either way?

Thanks.


Both TCP and UDS depend on it; UDS uses an API that follows the BSD sockets standard. However, TCP makes sense when you use nginx as a load balancer in front of multiple PHP-FPM backends, a really tricky setup if you ask me. For UDS, besides lower latency, the namespace is cleaner (filesystem paths vs numeric ports, which is much easier to automate in the configuration), and (at least under Linux) they follow filesystem permissions. For example, under my setup only nginx's user is allowed to read / write the PHP-FPM sockets.
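
A sketch of that permission setup in the PHP-FPM pool config; the user/group name and socket path are assumptions, substitute whatever nginx runs as on your distro:

  ; socket owned by nginx's user, mode 0600: nobody else can connect
  listen = /var/run/php-fpm.sock
  listen.owner = www-data
  listen.group = www-data
  listen.mode = 0600

With TCP you'd have to reach for firewall rules to get the same isolation; here it's just the filesystem doing its job.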


> to uninstall something installed from a .dmg file all you need to do is delete the ApplicationName.app "file"/folder.

AppCleaner / AppZapper / AppDelete / CleanApp / etc. beg to differ. A lot of applications leave plenty of crap behind that doesn't get removed with the .app directory itself.


> To do otherwise is to suffer a whole day trying to install ImageMagick

Sure, you could do that, or just issue [pkg-manager] install imagemagick. Yesterday it took me around 15 seconds to install ImageMagick without even leaving iTerm. Even brew manages that.


The CPU doesn't get that hot. On a machine with an Athlon II 240e, cgminer uses ~18% CPU while the GPU sits at 99% load. scrypt is indeed viable on GPUs, but it depends on the available memory bandwidth, which presumably lessens the impact of FPGAs/ASICs if people start pushing those as LTC mining solutions.

