Memory management goop in Windows & Linux
Regardless of the operating system you use, you are going to get roughly the same services from each of them: process and memory isolation, managing the hardware, etc. It can sometimes be really interesting to see the difference between the operating systems' approaches to solving the same problem. Case in point, how both Windows and Linux manage memory. Both of them run on the same hardware and do roughly the same thing, but they have very different styles, and this ends up having profound implications for the applications using them.
Consider what appears to be a very simple question: what do I actually have in RAM? Linux keeps track of the Resident Set Size on a per mapping basis, which means that we are able to figure out how much of a mmap file is actually in memory. Furthermore, we can figure out how much of the mmap data is clean, which means that it is easily discardable, and how much is dirty and needs to be written to disk. Linux exposes this information via the /proc/[pid]/smaps file.
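To make that concrete, here is a minimal C sketch that sums the resident, clean and dirty figures across all of a process' mappings by parsing /proc/self/smaps. The field names (Rss, Shared_Clean, Private_Clean, Shared_Dirty, Private_Dirty) are the standard smaps fields; the summing logic and the bare-bones error handling are just an illustration, not the code we actually run.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    long rss = 0, clean = 0, dirty = 0, value;
    char line[256];

    /* Each mapping in smaps reports its own Rss / clean / dirty lines,
       so a single pass over the file gives us per-process totals. */
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "Rss: %ld kB", &value) == 1)
            rss += value;
        else if (sscanf(line, "Shared_Clean: %ld kB", &value) == 1 ||
                 sscanf(line, "Private_Clean: %ld kB", &value) == 1)
            clean += value;
        else if (sscanf(line, "Shared_Dirty: %ld kB", &value) == 1 ||
                 sscanf(line, "Private_Dirty: %ld kB", &value) == 1)
            dirty += value;
    }
    fclose(f);

    printf("Resident: %ld kB, clean: %ld kB, dirty: %ld kB\n",
           rss, clean, dirty);
    return 0;
}
```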
On the other hand, Windows doesn't seem to bother to do this tracking. You can get this information, but you need to ask for it one page at a time, which means that it isn't feasible to check what percentage of the system's memory is clean (mmap pages that haven't been modified and can be cheaply discarded). Windows exposes this via the QueryWorkingSetEx method.
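For comparison, here is a rough C sketch of what that page-by-page dance looks like with QueryWorkingSetEx. The base address and size are assumed to come from an existing memory-mapped region, and treating Shared resident pages as the cheaply discardable ones is my simplification rather than an exact clean/dirty accounting; link against psapi.lib.

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <stdlib.h>

static void report_region(void *base, size_t size)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    size_t pages = (size + si.dwPageSize - 1) / si.dwPageSize;
    PSAPI_WORKING_SET_EX_INFORMATION *info = calloc(pages, sizeof(*info));
    if (!info)
        return;

    /* One query entry per page - there is no per-mapping summary to ask for. */
    for (size_t i = 0; i < pages; i++)
        info[i].VirtualAddress = (char *)base + i * si.dwPageSize;

    if (QueryWorkingSetEx(GetCurrentProcess(), info,
                          (DWORD)(pages * sizeof(*info)))) {
        size_t resident = 0, shared = 0;
        for (size_t i = 0; i < pages; i++) {
            if (info[i].VirtualAttributes.Valid) {
                resident++;
                /* Shared, file-backed pages are the ones the OS can
                   discard cheaply (the "clean" pages discussed above). */
                if (info[i].VirtualAttributes.Shared)
                    shared++;
            }
        }
        printf("%zu pages, %zu resident, %zu shared\n",
               pages, resident, shared);
    }

    free(info);
}
```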
As a result, we have to be more conservative on Windows when the system reports high memory usage. We know that, given our usage pattern, a high amount of memory in use (coming from clean mmap pages) is fine. It is a small detail, but it has caused us to jump through several hoops when we are running under load. I guess that Windows doesn't need this information internally, so it isn't exposed, while on Linux it seems to be used by plenty of callers.
Comments
Could it be because Linux tends to kill off processes when low on RAM, whereas Windows tends to struggle on? I can see that difference in mindset causing developers on each platform to be more or less interested in the amount of RAM that can be discarded at a given point in time.
Ian, Windows doesn't do overcommit; it has a hard limit that you can't go beyond. We certainly got a lot more interested in memory management because of the OOM killer. The key here is that we also need to know how much of the memory in use the OS should consider free to discard at need, which is common for our scenario. Windows makes that hard to figure out.