Kelly Sommers posted this tweet, which caught my attention.
While a half-minute pause isn't something we have actually seen, we have seen production systems that spend a large portion of their cycles on GC. That is better than a long pause, but only in the way that stubbing your toe is better than breaking your leg.
As the title of this post suggests, the most challenging task in a database engine isn't actually how to fetch the data, or how it is stored, but the management of the various resources that are available: in particular, CPU utilization, memory usage, and I/O. Most databases handle I/O pretty much from the get-go, because it is such a painful topic, and it is relatively easy and obvious to see how you need to handle CPU allocation.
But memory is tricky. It is tricky because you typically want to share as much of it as possible to reduce memory usage, except you don't, because of thread safety and "is someone still using that?" issues. The GC is a really great invention in this regard, because you can offload most of this behavior (not the thread safety part, but deciding when to throw stuff out, which can also be hard) to a well-tested component and pretty much forget about it.
That would be a mistake. See Kelly’s post above for the reasons why. By not managing memory yourself, you are letting a standard component, which you have no control over, take over a critical part of your system. With potentially dramatic results.
But we need to distinguish between different types of memory here. Modern GCs typically use some notion of generations (Gen0, Gen1, Gen2) and are really good at collecting memory that isn't held for long. The major problem is memory that is retained for a long period of time (and thus promoted to Gen1 or Gen2), which is what causes the trouble.
The good thing about a managed language and the GC is that you can write trivially simple code, and it will typically run faster than similarly trivial native code. A good experiment on that is Raymond Chen's dictionary series; note Rico Mariani's analysis there.
The solution is to take active control of memory and make clear distinctions between its different types. Being able to allocate short-term objects cheaply (such as when handling a request) is a great productivity boost, but the database needs to know that anything that is either large or potentially long-lived cannot be treated as a standard managed object. The database needs to allocate such memory itself, preferably on the native heap, and manage it directly.
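To make that concrete, here is a minimal sketch (in Java, since the idea applies to any managed runtime; this is not RavenDB's actual code, and the `NativeBuffer` name is hypothetical) of keeping a large, long-lived buffer outside the GC-managed heap. A direct `ByteBuffer` lives in native memory, so the collector never has to scan, copy, or promote its contents; only the small wrapper object is visible to the GC.

```java
import java.nio.ByteBuffer;

// Hypothetical wrapper around a native-memory buffer. The bytes
// themselves live outside the managed heap; the GC only ever sees
// this small handle object.
public class NativeBuffer {
    private final ByteBuffer buffer;

    public NativeBuffer(int capacity) {
        // allocateDirect reserves memory outside the GC heap, so
        // long-lived data here never inflates Gen2 collections.
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    public void writeLong(int index, long value) {
        buffer.putLong(index, value);
    }

    public long readLong(int index) {
        return buffer.getLong(index);
    }
}
```

The trade-off is that you now own the lifetime of that memory yourself, which is exactly the point: a critical resource is under the database's control rather than the GC's.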
Once you do that, almost all the managed memory you use is in the context of handling requests and other short-term operations, and any GC collection will only impact that request (typically at its end).
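One common way to structure those short-term allocations is a per-request arena: all scratch memory for one request comes out of a single native block, and finishing the request resets the arena in O(1), with no per-object bookkeeping and nothing for the GC to trace. This is a hedged sketch of the pattern, not any particular database's implementation; the `RequestArena` name and sizes are made up for illustration.

```java
import java.nio.ByteBuffer;

// Hypothetical per-request arena: hand out slices of one native
// block, then reclaim everything at once when the request ends.
public class RequestArena {
    private final ByteBuffer block;
    private int offset = 0;

    public RequestArena(int capacity) {
        this.block = ByteBuffer.allocateDirect(capacity);
    }

    // Hand out a slice of the block; callers never free slices
    // individually.
    public ByteBuffer allocate(int size) {
        if (offset + size > block.capacity()) {
            throw new IllegalStateException("arena exhausted");
        }
        ByteBuffer dup = block.duplicate();
        dup.position(offset);
        dup.limit(offset + size);
        offset += size;
        return dup.slice();
    }

    // End of the request: every slice handed out above is
    // reclaimed in a single step.
    public void reset() {
        offset = 0;
    }
}
```

Because the arena's memory is native and its lifetime matches the request's, a collection that happens to run mid-request has almost nothing of yours to look at.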
Most managed databases are headed in that direction, and for a good reason.
As for RavenDB, see my recent posts on the subject.