
Why are they measuring requests/sec? Any server can accept connections at a high rate, but what matters is responding in a timely manner.

I doubt the requests-per-second number too. Writing a dummy socket server (evented, threaded, ...) that just returns "HTTP/1.1 200 OK" will not get you anywhere close to 120k requests/sec. The system calls become the bottleneck.
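A minimal sketch of the kind of dummy server being described, in Python: each accepted connection costs at least an accept, a recv, a send, and a close syscall, which is the per-request overhead the comment is pointing at. The port and response body here are illustrative, not from the benchmark in question.

```python
import socket
import threading

# Canned response; no parsing, no application work.
RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"

def serve(sock):
    # Accept loop: even doing "nothing", every request still pays
    # for accept() + recv() + sendall() + close() syscalls.
    while True:
        try:
            conn, _ = sock.accept()
        except OSError:
            return  # listening socket was closed; shut down
        conn.recv(4096)        # read (part of) the request, ignore it
        conn.sendall(RESPONSE)
        conn.close()

def start_server(port=0):
    # port=0 lets the OS pick a free port; returns (socket, actual port).
    sock = socket.socket()
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(128)
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock, sock.getsockname()[1]
```

Profiling a loop like this under load (e.g. with strace -c) shows the time dominated by kernel calls rather than user-space work, which is the basis for doubting very high req/sec figures from naive servers.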



Requests per second implies receiving the complete request for each request counted.


It's labelled badly; what is actually measured is request/response pairs per second.

I.e. from a request to its corresponding response, and how many of those round trips it can do per second.

If you doubt the numbers, please feel free to run them yourself; all the code is on GitHub.


Could you clarify that? Are you saying that if the response is sent within the same second that the request came in, it contributes to the metric?

Or would a response that is sent 30 seconds after the request came in contribute to the metric too?


It doesn't matter whether a request straddles a second boundary in a throughput-measuring benchmark when you saturate the system. A client only counts a request once its request call has returned. Run it for N minutes, count how many requests have completed, then divide the total by the elapsed time and you get req/sec.
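The counting scheme described above can be sketched as a simple Python client; the host, port, and request line are placeholders, not the actual benchmark harness. A request is counted only after its full response has been read, and the total is divided by elapsed wall-clock time:

```python
import socket
import time

def measure_rps(host, port, duration=1.0):
    # Issue requests back to back for `duration` seconds.
    # A request counts only once its response is fully read,
    # so in-flight requests at the cutoff are simply not counted.
    request = b"GET / HTTP/1.1\r\nHost: bench\r\nConnection: close\r\n\r\n"
    completed = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        with socket.create_connection((host, port)) as conn:
            conn.sendall(request)
            while conn.recv(4096):  # drain until the server closes
                pass
        completed += 1
    elapsed = time.monotonic() - start
    return completed / elapsed   # req/sec = total completed / total time
```

A single sequential client like this understates what a saturating benchmark with many concurrent connections would report, but the accounting (completed responses divided by elapsed time) is the same.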

Besides, the benchmark ran for a minute. I doubt each request lasts 30 seconds.


+1.

The system is in steady state, i.e. the queues of pending requests/responses aren't growing. Therefore it doesn't actually matter whether you count the requests or the responses; over the run they are equal up to the handful in flight at any instant.



