The design of RavenDB 4.0: Separation of indexes and documents
In my last post on the topic, I discussed physically separating documents of different collections. This post is about the same concept, applied at a much higher level. In RavenDB, alongside the actual indexing data, we also need to keep track of quite a few details: what we last indexed, the index's current state, any errors that happened, which documents it references, etc. For map/reduce indexes, there is quite a bit more data to manage: all the intermediate results of the map/reduce process, along with bookkeeping information about how to efficiently reduce additional values.
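To make that bookkeeping concrete, here is a minimal sketch of the kind of per-index state described above. The type and member names are hypothetical, not RavenDB's actual classes; they only show the sort of data that has to live somewhere in storage.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of per-index bookkeeping state; not RavenDB's real types.
public class IndexStateSketch
{
    // Up to which etag this index has already processed documents.
    public long LastIndexedEtag;

    // Current status of the index (normal, paused, errored, ...).
    public string State;

    // Indexing errors that occurred, kept so the user can inspect them later.
    public List<IndexingErrorSketch> Errors = new List<IndexingErrorSketch>();

    // Documents loaded from other documents during indexing, so we can
    // re-index when a referenced document changes.
    public Dictionary<string, HashSet<string>> ReferencedDocuments =
        new Dictionary<string, HashSet<string>>();

    // For map/reduce indexes only: intermediate map output grouped by reduce key,
    // so that changing a single document only requires re-reducing the affected group.
    public Dictionary<string, List<object>> MapResultsByReduceKey =
        new Dictionary<string, List<object>>();
}

public class IndexingErrorSketch
{
    public DateTime Timestamp;
    public string DocumentId;
    public string Error;
}
```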
All of that information is stored in the same set of files as the documents themselves. As far as the user is concerned, this mostly matters when we need to delete an index: on large databases, deleting a big index can take a while, which made it an operational issue. In RavenDB 3.0 we changed things so that index deletion would be async, which improved matters significantly. But on large databases with many indexes, that still got us into problems.
Because all the indexes were using the same underlying storage, the number of values that we had to track was high, and it was proportional to the number of indexes and the number of documents they indexed. That means that in a particular database with a hundred million documents and three map/reduce indexes, we had to keep track of over half a billion entries. B+Trees are really amazing creatures, but one of their downsides is that once they get to a certain size, they slow down, because the cost of traversing the tree becomes very high.
In relational terms, we put all the indexing data into a single table and used an IndexId column to distinguish between the records of different indexes. Once that table got big enough, we had issues.
One of the design decisions we made in the build-up to RavenDB 4.0 was to remove multi-threaded behavior inside Voron (I'll have another post about this design decision). That led to an interesting problem with having everything in the same Voron storage: we wouldn't be able to index and accept new documents at the same time.
The single-threaded nature of Voron and the problems with index deletion led us to an interesting decision: a RavenDB database isn't actually composed of a single Voron storage. It is composed of multiple Voron storages, each operating independently of the others.
The first one, obviously, is for the documents. But each index now has its own Voron storage. That makes the indexes totally independent of one another, which leads to a few interesting implications (there is a sketch of this layout after the list below):
- Deleting an index is as simple as shutting down the indexing for that index and then deleting its Voron directory from the file system.
- Each index has its own independent data structures, so having multiple big indexes isn’t going to cause us to pay the price of all of them together.
- Because each index has a dedicated thread, we aren’t going to see any complex coordination between multiple actors needing to use the same Voron storage.
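Here is a minimal sketch of what this split could look like in code. The type names (VoronEnvironment, IndexHost, DocumentDatabase) are invented for illustration and are not the actual RavenDB 4.0 classes; the point is one storage environment for documents, one per index, each index with its own thread, and index deletion reduced to stopping that thread and removing a directory.

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Stand-in for a single Voron storage environment rooted at a directory.
public sealed class VoronEnvironment : IDisposable
{
    public string Path { get; }
    public VoronEnvironment(string path)
    {
        Path = path;
        Directory.CreateDirectory(path);
    }
    public void Dispose() { /* flush journals, unmap files, etc. */ }
}

// Each index owns its own storage environment and its own indexing thread.
public sealed class IndexHost : IDisposable
{
    private readonly Thread _indexingThread;
    private volatile bool _running = true;
    public VoronEnvironment Environment { get; }

    public IndexHost(string path)
    {
        Environment = new VoronEnvironment(path);
        _indexingThread = new Thread(IndexingLoop) { IsBackground = true };
        _indexingThread.Start();
    }

    private void IndexingLoop()
    {
        while (_running)
        {
            // Read new documents (in ascending etag order) from the documents
            // storage and write the results into this index's own environment.
            Thread.Sleep(100); // placeholder for "wait for work"
        }
    }

    public void Dispose()
    {
        _running = false;
        _indexingThread.Join();
        Environment.Dispose();
    }
}

public sealed class DocumentDatabase : IDisposable
{
    private readonly string _rootPath;
    // One environment for the documents themselves...
    public VoronEnvironment DocumentsStorage { get; }
    // ...and one independent environment per index.
    private readonly ConcurrentDictionary<string, IndexHost> _indexes =
        new ConcurrentDictionary<string, IndexHost>();

    public DocumentDatabase(string rootPath)
    {
        _rootPath = rootPath;
        DocumentsStorage = new VoronEnvironment(Path.Combine(rootPath, "Documents"));
    }

    public void CreateIndex(string name) =>
        _indexes.TryAdd(name, new IndexHost(Path.Combine(_rootPath, "Indexes", name)));

    // Deleting an index: stop its thread, then delete its directory.
    // There are no shared trees to clean up; the only work left is
    // removing this index's own files.
    public void DeleteIndex(string name)
    {
        if (_indexes.TryRemove(name, out var index))
        {
            var path = index.Environment.Path;
            index.Dispose();
            Directory.Delete(path, recursive: true);
        }
    }

    public void Dispose()
    {
        foreach (var index in _indexes.Values)
            index.Dispose();
        DocumentsStorage.Dispose();
    }
}
```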
This is important because, in RavenDB 4.0, we are also storing the actual Lucene index inside the Voron storage, so the amount of work we now require each storage to deal with is much higher. By splitting it along index lines, we have saved ourselves a whole bunch of headaches about how to manage it all properly.
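As a rough illustration of what "the Lucene index inside the Voron storage" means, the sketch below keeps the contents of Lucene's index files (segments, term dictionaries, stored fields, and so on) as values inside the index's own storage rather than as loose files on disk. The real integration is more involved and presumably goes through Lucene.Net's directory abstraction; the name VoronBackedFileStore and its methods are invented for this sketch.

```csharp
using System.Collections.Generic;

// Conceptual sketch only: Lucene's index "files" stored as values inside the
// index's own storage environment. A dictionary stands in for a Voron tree.
public sealed class VoronBackedFileStore
{
    private readonly Dictionary<string, byte[]> _files = new Dictionary<string, byte[]>();

    public void WriteFile(string name, byte[] content) => _files[name] = content;

    public byte[] ReadFile(string name) => _files[name];

    public bool DeleteFile(string name) => _files.Remove(name);

    public IEnumerable<string> ListFiles() => _files.Keys;
}
```

Because every index already has its own environment, the extra write load from Lucene's files is isolated to that index, instead of competing with the documents and with every other index in a single shared storage.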
As a reminder, the RavenDB Conference in Texas is coming up shortly, which would be an excellent opportunity to discuss RavenDB 4.0 and see what we have already done.
More posts in "The design of RavenDB 4.0" series:
- (26 May 2016) The client side
- (24 May 2016) Replication from server side
- (20 May 2016) Getting RavenDB running on Linux
- (18 May 2016) The cost of Load Document in indexing
- (16 May 2016) You can’t see the map/reduce from all the trees
- (12 May 2016) Separation of indexes and documents
- (10 May 2016) Voron has a one track mind
- (05 May 2016) Physically segregating collections
- (03 May 2016) Making Lucene reliable
- (28 Apr 2016) The implications of the blittable format
- (26 Apr 2016) Voron takes flight
- (22 Apr 2016) Over the wire protocol
- (20 Apr 2016) We already got stuff out there
- (18 Apr 2016) The general idea
Comments
Hi Oren, does this new storage design - specifically different Voron storages for each collection - have any implications for transactions spanning multiple collections? In general, I'm thinking about the coordination of multiple operations across multiple different physical storages (for example, in scenarios like a transaction rollback or recovery after a crash).
How much mapped memory does each index use by having its own Voron storage? Would this work out well with many indexes?
What about splitting the busiest indexes onto separate disk(s), now that they use separate storages?
@njy Not really, because the storage for each collection all resides in the same "Voron context" (Documents). The indexing process is not ACID (it is BASE), so there is no coordination whatsoever necessary (besides the usual -- read the documents in ascending etag order).
@Stan The memory-mapping mechanism in the OS is actually quite clever (I've been bitten by it while doing some performance analysis on IO myself). Your OS is handling many memory-mapped files as you write this comment, without much issue. The biggest issue is locality of page access (how far apart the pages you intend to access are, and whether you can prefetch ranges in particular), which is a very intensive area of work for 4.0 (even though some improvements have also hit the 3.5 branch).
@Marko I, for one, hadn't thought about that level of customization, but it is certainly a possibility (if there is a use case that justifies dealing with that level of complexity).
njy, No, all collections share the same transactional boundary. Modifying documents from multiple collections is a single ACID transaction, with all that it implies.
Stan, Even now, we actually use a memory-mapped implementation for the indexes. Each index is going to have an overhead of a few MB because of Voron, so I don't expect that to be a problem.
Marko, That is actually possible right now, and it will be possible in 4.0 as well.