
You don't know what you're talking about. I lead the web team for local.ch; we have 4 million unique clients a month and handle 10k requests per minute at peak traffic. Our site is developed in Ruby on Rails, with average response times in the 150ms range. We replaced our legacy PHP system last year and will never look back. Ruby 2.0 will be our future, and I'm happy about that.


We have millions of unique clients per month and our peak traffic is above 200k requests per minute. Ruby delivers, although you need to be very careful about what you deploy to the main app codebase. In particular, you need to understand the speed difference between methods like uniq vs. uniq!, and the cost of creating new objects. It's enormous with big traffic.
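To illustrate the uniq vs. uniq! point (a minimal sketch, not their production code): uniq allocates a fresh array on every call, while uniq! mutates in place, and it returns nil when there was nothing to remove, which is a classic gotcha.

```ruby
# uniq allocates a new array each call; uniq! mutates the receiver in place.
ids = [1, 2, 2, 3, 3, 3]

deduped = ids.uniq   # new array; ids itself is untouched
ids.uniq!            # mutates ids, returns ids (or nil if nothing changed)

# Gotcha: uniq! returns nil when there were no duplicates,
# so never chain it where you need the array back.
already_unique = [1, 2, 3]
result = already_unique.uniq!   # nil here
```

Under heavy traffic the bang variant saves one array allocation (and the GC pressure that comes with it) per call, at the cost of mutating shared state.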

There are some things I wouldn't do with Ruby here, like some concurrent background jobs; a better solution would be Clojure, Erlang, or any language where the concurrency constructs are better thought out and easier to manage. Although we have threaded Ruby running, it's not very elegant and you can introduce pretty nasty bugs there.
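For context, a minimal sketch of the kind of threaded Ruby job runner meant here (illustrative, not local.ch's actual code): Ruby's Queue is thread-safe, but any mutable state shared outside it needs a Mutex, which is exactly where the nasty bugs tend to come from.

```ruby
# Thread-safe queues carry work in and results out.
jobs    = Queue.new
results = Queue.new

workers = 4.times.map do
  Thread.new do
    # :stop is a sentinel telling the worker to shut down.
    while (job = jobs.pop) != :stop
      results << job * 2   # stand-in for real background work
    end
  end
end

10.times { |i| jobs << i }
4.times { jobs << :stop }   # one sentinel per worker
workers.each(&:join)

total = 0
total += results.pop until results.empty?
```

The queues keep this sketch safe; the moment you reach for a shared hash or array instead, you need explicit locking, and in languages like Erlang or Clojure the runtime's concurrency primitives make that class of bug much harder to write.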


That's about 3.3k requests per second. How many hosts is that spread across?


Maybe too many, I guess. I'm not giving any real numbers, but I suppose with the JVM it could be much less. Oh hell, even with some refactoring here and there we reduced the overall CPU usage a lot. Like I said, with real traffic, the bang methods are a much better choice, even though the functional programmer in me is against them.
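One way to see what the bang methods actually buy you is to count object allocations (an assumed measurement approach, not the commenter's actual profiling setup); GC.stat's total_allocated_objects counter is monotonic, so differencing it around a loop shows how much each variant allocates.

```ruby
data = Array.new(1_000) { |i| i }

# map builds a fresh 1000-slot array on every iteration.
before = GC.stat(:total_allocated_objects)
100.times { data.map { |x| x } }
map_allocs = GC.stat(:total_allocated_objects) - before

# map! rewrites the existing array in place, so no new arrays appear.
before = GC.stat(:total_allocated_objects)
100.times { data.map! { |x| x } }
map_bang_allocs = GC.stat(:total_allocated_objects) - before
```

Multiply that per-call difference by thousands of requests per minute and the reduced GC pressure shows up directly in CPU usage.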



