
No, this is still a misunderstanding.

Overcommit means that the act of memory allocation will not report failure, even when the system is out of memory.

Instead, failure will come at an arbitrary point later, when the program actually attempts to use the aforementioned memory that the system falsely claimed had been allocated.

Allocating all at once on startup doesn't help, because the program can still fail later when it tries to actually access that memory.
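
To make the failure mode concrete, here's a minimal sketch for Linux with the default overcommit heuristic (the 64 GiB figure is illustrative and assumes a 64-bit build; whether the OOM killer actually fires depends on your RAM and swap):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t len = (size_t)64 << 30;   /* 64 GiB; pick something > RAM + swap */
        char *p = malloc(len);
        if (p == NULL) {                 /* with overcommit on, rarely triggers */
            perror("malloc");
            return 1;
        }
        puts("malloc reported success");
        memset(p, 1, len);               /* faulting the pages in is what can get
                                            the process OOM-killed, long after the
                                            allocation "succeeded" */
        puts("survived touching every page");
        free(p);
        return 0;
    }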

Which is why I said "allocate and pin". POSIX systems have mlock()/mlockall() to prefault allocated memory and prevent it from being paged out.
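
Something like this sketch, for example (real code would also check or raise RLIMIT_MEMLOCK first, since mlock is limited for unprivileged processes):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = (size_t)1 << 30;    /* 1 GiB working set; illustrative */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* prefaults the whole range and pins it: if the memory isn't really
           available, this fails here at startup, not at some later access */
        if (mlock(p, len) != 0) {
            perror("mlock");
            return 1;
        }
        puts("working set resident and locked");
        return 0;
    }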

Aha, my apologies, I overlooked that.

Random curious person here: does mlock() itself cause the pre-fault? Or do you have to scribble over that memory yourself, too?

(I understand that mlock prevents paging-out, but in my mind that's a separate concern from pre-faulting?)


FreeBSD and OpenBSD explicitly mention the prefaulting behavior in the mlock(2) manpage. The Linux manpage implies it: you have to explicitly pass the MLOCK_ONFAULT flag to the mlock2() variant of the syscall in order to disable the prefaulting behavior.
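
So on Linux the two behaviors look roughly like this (a sketch; the mlock2() wrapper needs _GNU_SOURCE and a reasonably recent glibc):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    int lock_now(void *p, size_t len) {
        return mlock(p, len);                 /* prefaults and locks every page */
    }

    int lock_on_fault(void *p, size_t len) {
        return mlock2(p, len, MLOCK_ONFAULT); /* no prefault: pages are locked
                                                 only as they fault in */
    }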

To be fair, you can force this yourself just by filling all the allocated memory with zeroes, so it's possible to fail at startup.
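
A sketch of that startup-time touch (one write per page is enough, and it has to be a write; a read can be satisfied by the shared zero page without committing anything):

    #include <stddef.h>
    #include <unistd.h>

    /* one write per page forces the kernel to back it with a real frame */
    void touch_pages(volatile char *p, size_t len) {
        size_t pg = (size_t)sysconf(_SC_PAGESIZE);
        for (size_t i = 0; i < len; i += pg)
            p[i] = 0;
    }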

Or, even simpler, just turn off overcommit.
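
(On Linux that's the vm.overcommit_memory sysctl: mode 2 switches to strict commit accounting, so allocations beyond swap plus overcommit_ratio percent of RAM fail upfront with ENOMEM instead of succeeding and blowing up later.)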

But if swap comes into the mix, or if the OS later decides it needs that memory for something critical, you can still get killed.


I wouldn't be surprised if some OS detected a page of zeros and reclaimed that allocation until you actually need it. This seems like a common enough case to make it worthwhile when memory is low. I'm not aware of any that do it, but it wouldn't be that hard, so it seems like someone would have tried it.

There's also KSM, kernel same-page merging.
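
For reference, KSM on Linux is opt-in per mapping via madvise(), roughly like this (a sketch; the ksmd daemon also has to be enabled system-wide for any merging to happen):

    #include <stddef.h>
    #include <sys/mman.h>

    /* opt a mapping in to same-page merging; ksmd must also be running
       (controlled via /sys/kernel/mm/ksm/run) for pages to be merged */
    int make_mergeable(void *p, size_t len) {
        return madvise(p, len, MADV_MERGEABLE);
    }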
