
It really depends on what you are doing. If you’re writing an application running on top of an operating system, you don’t need out-of-memory handling; it will even make programming harder.


Like I said, if your allocator is actually the system allocator then yes, maybe you're right. If instead you're doing something like using an arena allocator, then OOM isn't a huge deal, because all you've done is exhaust a fixed buffer rather than system RAM; that's totally recoverable. There are huge performance gains to be had from using custom allocators where appropriate.


Sure, and you can do that with Rust today. There's nothing stopping you from writing a custom data structure with its own custom allocator. The "abort on OOM" policy is not a property of the language, it's a property of certain collections in libstd that use the global allocator.
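(For the stable side of that: Vec::try_reserve already gives you a Result instead of an abort when the global allocator refuses. A rough sketch, with the sizes made up for the example:)

  use std::collections::TryReserveError;

  fn load_into_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
      let mut buf = Vec::new();
      // try_reserve reports allocation failure as an Err instead of
      // calling the global OOM handler and aborting the process.
      buf.try_reserve(len)?;
      buf.resize(len, 0);
      Ok(buf)
  }

  fn main() {
      // An absurd size so the request fails at the allocator level,
      // before any page is ever touched.
      match load_into_buffer(usize::MAX / 2) {
          Ok(buf) => println!("got {} bytes", buf.len()),
          Err(e) => eprintln!("allocation failed, degrading gracefully: {e}"),
      }
  }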


I think the point here is that users would like to use Vec, HashMap, etc. with that arena allocator and handle OOM manually, instead of having to write their own collection types.


Then their problem is not a lack of try_* methods; it's a lack of custom allocators.


I think you missed "Add support for custom allocators in Vec": https://github.com/rust-lang/rust/pull/78461
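(That PR builds on the nightly allocator_api, which is still unstable, so the exact shape may shift. Roughly what using it looks like, with a deliberately tiny fixed arena so exhaustion is easy to hit; the arena size and types are made up for the example:)

  #![feature(allocator_api)]

  use std::alloc::{AllocError, Allocator, Layout};
  use std::cell::{Cell, UnsafeCell};
  use std::ptr::NonNull;

  /// A fixed-size bump arena. Exhausting it returns AllocError, which the
  /// fallible Vec methods surface to the caller instead of aborting.
  struct FixedArena<const N: usize> {
      buf: UnsafeCell<[u8; N]>,
      used: Cell<usize>, // offset of the next free byte
  }

  impl<const N: usize> FixedArena<N> {
      fn new() -> Self {
          FixedArena { buf: UnsafeCell::new([0; N]), used: Cell::new(0) }
      }
  }

  unsafe impl<const N: usize> Allocator for FixedArena<N> {
      fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
          let base = self.buf.get() as usize;
          let start = base + self.used.get();
          // Round up to the requested alignment (align is a power of two).
          let aligned = start.checked_add(layout.align() - 1).ok_or(AllocError)?
              & !(layout.align() - 1);
          let end = aligned.checked_add(layout.size()).ok_or(AllocError)?;
          if end > base + N {
              return Err(AllocError); // arena exhausted -- recoverable
          }
          self.used.set(end - base);
          let ptr = NonNull::new(aligned as *mut u8).ok_or(AllocError)?;
          Ok(NonNull::slice_from_raw_parts(ptr, layout.size()))
      }

      unsafe fn deallocate(&self, _ptr: NonNull<u8>, _layout: Layout) {
          // Bump allocation: individual frees are no-ops; the whole arena
          // is reclaimed when it is dropped.
      }
  }

  fn main() {
      let arena = FixedArena::<256>::new();
      // Vec::new_in ties the collection to the arena.
      let mut v: Vec<u64, &FixedArena<256>> = Vec::new_in(&arena);
      // try_reserve works on allocator-parameterised Vecs too, so running
      // out of arena space is an Err, not a process abort.
      match v.try_reserve(1_000) {
          Ok(()) => println!("reserved"),
          Err(e) => eprintln!("arena exhausted: {e}"),
      }
  }

(Not thread-safe because of the Cell, and individual deallocations leak until the arena is dropped; that's the usual bump-allocator trade-off for the speed win.)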


If that's the first time you touch the last bits of your private arena, you can trigger the OOM killer.


True, but it's also worse than that: if an unrelated application starts consuming slightly more memory, it can trigger an OOM kill of your application.
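
(One mitigation people use is to pre-touch the arena at startup so the pages are physically committed at a predictable point rather than deep inside a request; under system-wide pressure the OOM killer can of course still pick the process later. A sketch, with the page size hard-coded as an assumption:)

  /// Pre-fault an arena by writing one byte per page, so physical memory is
  /// committed up front instead of at some arbitrary first-touch later on.
  /// (4096 is an assumption; the real page size comes from the OS.)
  const PAGE_SIZE: usize = 4096;

  fn prefault(buf: &mut [u8]) {
      for offset in (0..buf.len()).step_by(PAGE_SIZE) {
          // Volatile so the writes aren't optimised away.
          unsafe { std::ptr::write_volatile(buf.as_mut_ptr().add(offset), 0) };
      }
  }

  fn main() {
      // Reserve the arena once at startup...
      let mut arena = vec![0u8; 64 * 1024 * 1024];
      // ...and touch it immediately, so any memory shortage shows up here,
      // not at an unpredictable point mid-request. (mlock(2) is the stronger
      // option if the pages must also never be swapped out.)
      prefault(&mut arena);
      println!("arena committed: {} MiB", arena.len() / (1024 * 1024));
  }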


That's not necessarily the case, though. It may be worth it for, say, a document viewer to catch the OOM condition and show a user-friendly error instead of dying. Of course, Linux with memory overcommit can't do this. But on Windows, that's totally a thing that can happen.


I was curious so I did a Brave search to find out if that behavior can be changed. You can supposedly (I haven't tried it) echo 2 into /proc/sys/vm/overcommit_memory, and the kernel will refuse to commit more memory than the available swap space plus a fraction of physical memory (the fraction is also configurable). See https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6 for more details.

I usually write my programs to only grab a little more memory than is actually needed, so I might play around with this at home. I wonder if this has led to a culture of grabbing more memory than is actually needed, since malloc only fails for very large requests when everything is set up the traditional way.

Defaulting to overcommit seems risky. I'd much rather the system tell me no more memory is available than just have something get killed. I could always wait a bit and try again, or at the very least shut the program down in a controlled manner.
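
(If you do experiment with mode 2, the kernel publishes the resulting limit and the current commitment in /proc/meminfo as CommitLimit and Committed_AS; a small sketch that just watches those two fields:)

  use std::fs;

  fn main() -> std::io::Result<()> {
      // Under vm.overcommit_memory=2 the kernel enforces CommitLimit
      // (swap plus a configurable fraction of RAM); Committed_AS is how
      // much address space is currently committed against it.
      let meminfo = fs::read_to_string("/proc/meminfo")?;
      for line in meminfo.lines() {
          if line.starts_with("CommitLimit") || line.starts_with("Committed_AS") {
              println!("{line}");
          }
      }
      Ok(())
  }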


Disabling overcommit on a general-purpose Linux system is a terrible idea because:

* fork+exec (forking a large process briefly doubles its commit charge)

* some tools and even libraries map gigantic areas of anonymous memory but only touch a few bits of it.


You can add enough swap space that fork+execve always works in practice (although vfork or vfork-style clone is obviously better if the goal is to execve pretty much immediately anyway). Linux allows reserving address space with PROT_NONE, populating it later with mprotect or MAP_FIXED, and many programs do it like that.

However, I stopped using vm.overcommit_memory=2 because the i915 driver has something called the GEM shrinker, and that never runs in that mode. That means all memory ends up going to the driver over time, and other allocations fail eventually. Other parts of the graphics stack do not handle malloc failures gracefully, either. In my experience, that meant I got many more desktop crashes in mode 2 than in the default mode with the usual kernel OOM handler and its forced process termination.
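
(The reserve-then-commit pattern described above, as a sketch using the libc crate on Linux: mmap a large PROT_NONE region, which carries no commit charge, then mprotect just the parts you actually need. The sizes are arbitrary for the example.)

  // Cargo.toml dependency assumed: libc = "0.2"
  use std::io::Error;
  use std::ptr;

  fn main() {
      const RESERVE: usize = 1 << 30; // reserve 1 GiB of address space
      const COMMIT: usize = 1 << 20;  // commit the first 1 MiB of it

      unsafe {
          // PROT_NONE: address space only, no commit charge yet.
          let base = libc::mmap(
              ptr::null_mut(),
              RESERVE,
              libc::PROT_NONE,
              libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
              -1,
              0,
          );
          if base == libc::MAP_FAILED {
              panic!("mmap failed: {}", Error::last_os_error());
          }

          // Commit (and charge) only the part we actually intend to use.
          if libc::mprotect(base, COMMIT, libc::PROT_READ | libc::PROT_WRITE) != 0 {
              panic!("mprotect failed: {}", Error::last_os_error());
          }

          // The committed prefix is now usable like any other memory.
          ptr::write_bytes(base as *mut u8, 0, COMMIT);

          libc::munmap(base, RESERVE);
      }
  }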



