Hacker News | new | past | comments | ask | show | jobs | submit | login

Your comment might win you the argument on a random non-tech forum, but not here.

Much more efficient at what? Mops the floor by what measure? Which year's i7?

Don't get me wrong, I 100% believe what happened, but if you mean "my MacBook is faster than my i7 ThinkPad," use those exact words and don't bring RAM into the discussion. If you want to make a point about RAM, you need to be clear about what workflow you were measuring, what methodology you used, and what the exact result was. Otherwise your words have no meaning.



Repeating what I just commented elsewhere, but macOS uses several advanced memory-management features: apps can share read-only memory for common frameworks, it compresses memory instead of paging it out, and it has better memory allocation and less fragmentation.
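The compression point is easy to illustrate with a toy Python sketch (zlib here is just a stand-in for the kernel's actual page compressor, which trades compression ratio for speed):

```python
import zlib

# A deliberately redundant 4 KiB "page" -- anonymous memory is often
# zero-filled or repetitive, which is exactly when compression pays off.
page = b"\x00" * 4096
compressed = zlib.compress(page)

# The compressed copy is a tiny fraction of the page, so holding it in
# RAM is far cheaper than writing the full page out to swap.
print(len(page), len(compressed))
```

Of course a page full of already-compressed data (JPEGs, encrypted buffers) won't shrink at all, which is why compression complements paging rather than replacing it.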

Memory bandwidth is also vastly higher than what you get on Intel/AMD: on the Max chips you get 800GB/s, which is the rough equivalent of 16 channels of DDR5-6400, something simply not available in consumer hardware. You can get 8 channels with AMD Epyc, but the motherboard for that alone will cost more than a Mac mini.
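Taking the 800GB/s figure at face value, the channel arithmetic does check out; a quick back-of-the-envelope in Python (assuming the conventional 64-bit-wide channel):

```python
# DDR5-6400 moves 6400 MT/s across a 64-bit (8-byte) bus per channel.
per_channel_gb_s = 6400 * 8 / 1000       # = 51.2 GB/s per channel

# Channels of DDR5-6400 needed to reach 800 GB/s aggregate bandwidth.
channels_needed = 800 / per_channel_gb_s  # = 15.625, i.e. ~16 channels

print(per_channel_gb_s, channels_needed)
```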


Sharing read-only/executable memory and memory compression are also done on Windows 10+ and modern Linux distributions. No idea what "better memory allocation" and "less fragmentation" are supposed to mean.

800GB/s is a theoretical maximum but you wouldn't be able to use all of it from the CPU or even the GPU.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...


>apps can share read-only memory for common frameworks

How is that different from plain shared libraries?


System design and stability. On macOS a lot more is shared between applications than with the average Linux app. Dynamic linking has fallen out of favor on Linux recently [1], and the fragmentation in the ecosystem means apps have to deal with different GUI libraries, system library versions, etc., whereas on a Mac you can safely target a minimum OS version when using system frameworks. Apps will also rarely use third-party replacements, as the provided libraries cover everything [2], from audio to image manipulation and ML.

[1] https://lore.kernel.org/lkml/CAHk-=whs8QZf3YnifdLv57+FhBi5_W...

[2] https://developer.apple.com/library/archive/documentation/Ma...


People who need 64GB+ of RAM are not running 1000 instances of native Apple apps. They run Docker and VMs, they run AI models, compile huge projects, and they run demanding graphics applications or IntelliJ on huge projects. Rich system libraries are irrelevant in these cases.


This thread started as a question about how macOS is more efficient, not about the usefulness of more RAM. In any case, you might still benefit from the substantial increase in bandwidth and the lower memory usage of the system and built-in apps, plus memory compression, making 16GB on a Mac more useful than it seems.


I can run apps with 4 distinct toolkits on Linux and memory usage will barely go past that of opening a single Facebook or Instagram tab in a browser. Compare that to compiling a single semi-large source file with -fsanitize=address, which can easily push one instance of GCC or Clang past 5GB of memory usage, no matter the operating system...


I'm talking about memory bandwidth - maybe your workloads don't take advantage of it, but most do, and that's why Apple designed their new chips around it.


Bandwidth doesn't replace size; they are orthogonal parameters.


I never said it did.


So why are you commenting in a thread about size?



