
You sound like you doubt the readings. This is Netflix; I assume they have some pretty hefty kit backing their service.

For a massive server the CPU reading might not be unusual. Maybe it has 32+ CPU cores and a multi-threaded java app is spinning most of them.

Also remember that on a heavily loaded system the task that is reading CPU use is itself competing for time. Timing issues and other monitoring vagaries can make such readings noticeably imprecise (though for CPU time the error is usually downward, by missing tasks that started, worked, and ended between readings).
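To make the sampling issue concrete, here is a rough sketch (my own, Linux-only) of how a tool like top derives %CPU: it reads the process's cumulative utime+stime from /proc/&lt;pid&gt;/stat at two instants and divides the delta by wall-clock time. Any delay in taking the second sample skews the result:

```python
import os, time

def proc_cpu_ticks(pid):
    # Fields 14 and 15 of /proc/<pid>/stat are utime and stime, in clock ticks.
    # Split after the ")" that closes the comm field, so spaces in the
    # process name can't shift the field offsets.
    with open(f"/proc/{pid}/stat") as f:
        rest = f.read().rsplit(")", 1)[1].split()
    return int(rest[11]) + int(rest[12])  # utime + stime

hz = os.sysconf("SC_CLK_TCK")  # ticks per second, typically 100
pid = os.getpid()

t0, c0 = time.time(), proc_cpu_ticks(pid)
x = sum(i * i for i in range(2_000_000))  # burn some CPU between samples
t1, c1 = time.time(), proc_cpu_ticks(pid)

# %CPU over the interval; a late second sample dilutes the figure.
pct = 100.0 * (c1 - c0) / hz / (t1 - t0)
print(f"{pct:.0f}% CPU over the interval")
```

With the coarse tick granularity (often 100 Hz) and scheduling delays on a busy box, short-lived work between samples is easy to miss entirely.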

The 233GB of memory seems high but there are possible explanations for this. A server with 32+ cores is not unlikely to have a lot of RAM too; the motherboard in my home server supports up to 128GB, so perhaps that large java process genuinely does use more than 200. Also, all that memory might not really be in use: it could have been allocated but never accessed, so it isn't yet holding pages in RAM or swap for all of it.



The 233GB is VIRT, meaning virtual memory. It need not all be backed by physical memory. For example, if you mmap a file, and then access only a small portion.

The RES column shows how much memory is resident (i.e., currently backed by physical memory), and that is a much more reasonable 12GB.
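This is easy to demonstrate. A quick Linux-only sketch (mine, not from the article): create a large anonymous mapping and compare VmSize (VIRT) and VmRSS (RES) from /proc/self/status — VIRT jumps by the full mapping size while RES barely moves until pages are actually touched:

```python
import mmap

def mem_kb(field):
    # Read VmSize or VmRSS (reported in kB) from /proc/self/status.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

GIB = 1024 * 1024 * 1024

virt_before = mem_kb("VmSize")
m = mmap.mmap(-1, 2 * GIB)     # 2 GiB anonymous mapping, nothing touched yet
virt_after = mem_kb("VmSize")
rss = mem_kb("VmRSS")

print((virt_after - virt_before) // 1024, "MiB added to VIRT")
print(rss // 1024, "MiB resident")

m[0:4096] = b"x" * 4096        # touching a page is what makes it resident
```

The same effect shows up with mmap'd files and with allocators that reserve large arenas up front, which is a common reason JVM processes show huge VIRT figures.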


The r3.8xlarge instance type has 32 vCPUs and 244GiB RAM


  For a massive server the CPU reading might not be unusual. Maybe it has 
  32+ CPU cores and a multi-threaded java app is spinning most of them.
Yes, the article states that

  The %CPU column is the total across all CPUs;
  1591% shows that that java process is consuming almost 16 CPUs.
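The arithmetic checks out against the r3.8xlarge guess elsewhere in the thread (32 vCPUs is my assumption here, taken from that guess):

```python
pct = 1591   # top's %CPU, summed across all CPUs
vcpus = 32   # assumed: r3.8xlarge

print(pct / 100)    # cores' worth of CPU time being consumed
print(pct / vcpus)  # average utilization per vCPU, in percent
```

So the process is using about 15.9 cores' worth of time, or roughly 50% of each of 32 vCPUs on average.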


Probably r3.8xlarge.


jinx!




