The design of RavenDB 4.0: Separation of indexes and documents


In my last post on the topic, I discussed physically separating documents of different collections. This post is about the same concept, applied at a much higher level. In RavenDB, along with the actual indexing data, we also need to keep track of quite a few details: what we last indexed, what our current state is, any errors that happened, which documents were referenced, etc. For map/reduce indexes we have quite a bit more data to work with: all the intermediate results of the map/reduce process, along with bookkeeping information about how to efficiently reduce additional values.

All of that information was stored in the same set of files as the documents themselves. As far as the user is concerned, this mostly mattered when an index had to be deleted: on a large database, deleting a big index could take a while, which made it an operational issue. In RavenDB 3.0 we made index deletion async, which improved matters significantly, but on large databases with many indexes we still ran into problems.

Because all the indexes were using the same underlying storage, the number of values we had to track was high, proportional to the number of indexes times the number of documents they indexed. In a database with a hundred million documents and three map/reduce indexes, that meant keeping track of over half a billion entries. B+Trees are really amazing creatures, but one of their downsides is that once they get to a certain size, they slow down, because the cost of traversing the tree becomes very high.
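To put that in perspective, here is a back-of-the-envelope sketch in Python. The fanout of 100 keys per page is a made-up figure for illustration, not Voron's actual number:

```python
def btree_height(entries, fanout=100):
    """Approximate B+Tree height: each level multiplies the capacity
    by the fanout, and a lookup touches one page per level."""
    height, capacity = 1, fanout
    while capacity < entries:
        height += 1
        capacity *= fanout
    return height

print(btree_height(1_000_000))    # 3 - a million entries: 3 page hops per lookup
print(btree_height(500_000_000))  # 5 - half a billion entries: 5 page hops
```

Every extra level is another page that has to be touched (and possibly loaded from disk) on every single lookup, multiplied across hundreds of millions of operations.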

In relational terms, we put all the indexing data into a single table and used an IndexId column to distinguish between the records of the different indexes. Once the table got big enough, we had issues.
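To make the analogy concrete, here is a small sketch using SQLite from Python; the table and column names are invented for illustration and are not RavenDB's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The old layout, in relational terms: one shared table holding the
# indexing state of *every* index, distinguished by an IndexId column.
conn.execute("""
    CREATE TABLE IndexingState (
        IndexId  INTEGER NOT NULL,
        DocKey   TEXT    NOT NULL,
        Etag     INTEGER NOT NULL,
        PRIMARY KEY (IndexId, DocKey)
    )""")
# Deleting one index means deleting its rows out of one huge shared table:
conn.execute("DELETE FROM IndexingState WHERE IndexId = ?", (3,))

# The new layout, in the same terms: a table per index, so removing an
# index is a single cheap drop rather than a massive row-by-row delete.
conn.execute("CREATE TABLE IndexingState_Index3 (DocKey TEXT PRIMARY KEY, Etag INTEGER)")
conn.execute("DROP TABLE IndexingState_Index3")
```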

One of the design decisions we made in the build-up to RavenDB 4.0 was to remove multi-threaded behavior inside Voron (I'll have another post about this design decision). That led to an interesting problem with having everything in the same Voron storage: we wouldn't be able to index and accept new documents at the same time.
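To see why, here is a toy illustration in Python, assuming (as that upcoming post discusses) that a single-writer storage allows only one write transaction at a time. The function names are invented for illustration:

```python
import threading

# A single-writer storage: only one write transaction at a time.
write_tx = threading.Lock()

def accept_new_document(doc):
    with write_tx:
        pass  # write the document into the storage

def index_documents(batch):
    with write_tx:
        pass  # update indexing state in the *same* storage

# With everything in one storage, these two always serialize; give each
# index its own storage (and its own lock) and they can run in parallel.
```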

The single-threaded nature and the problems with index deletion led us to an interesting decision: a RavenDB database isn't actually composed of a single Voron storage. It is composed of multiple storages, each operating independently of the others.

The first one, obviously, is for the documents. But each index now has its own Voron storage. That means the indexes are totally independent of one another, which leads to a few interesting implications:

  • Deleting an index is as simple as shutting down the indexing for that index and then deleting its Voron directory from the file system (there's a sketch of this after the list).
  • Each index has its own independent data structures, so having multiple big indexes isn’t going to cause us to pay the price of all of them together.
  • Because each index has a dedicated thread, we aren’t going to see any complex coordination between multiple actors needing to use the same Voron storage.
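Here is a minimal sketch of that arrangement in Python. The class, directory layout, and loop are made up for illustration (RavenDB itself is a .NET codebase, and the real indexing loop does far more), but the lifecycle is the point:

```python
import shutil
import threading
from pathlib import Path

class IndexEnvironment:
    """One index: its own storage directory and its own dedicated
    worker thread, fully independent of every other index."""

    def __init__(self, db_path: Path, name: str):
        self.path = db_path / "Indexes" / name
        self.path.mkdir(parents=True, exist_ok=True)
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._indexing_loop)
        self._thread.start()

    def _indexing_loop(self):
        # This index keeps its own cursor over the document storage and
        # writes only to its own files - no coordination with other indexes.
        while not self._stop.wait(timeout=0.5):
            pass  # read newly arrived documents, update this index's storage

    def delete(self):
        # Deleting the index: stop its thread, then remove its directory.
        self._stop.set()
        self._thread.join()
        shutil.rmtree(self.path)

db = Path("/tmp/northwind")
index = IndexEnvironment(db, "Orders_ByCompany")
index.delete()  # no scanning of a shared tree - just an rmtree
```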

This is important, because in RavenDB 4.0 we are also storing the actual Lucene index inside the Voron storage, so the amount of work we now require it to handle is much higher. By splitting things along index lines, we have saved ourselves a whole bunch of headaches about how to manage them properly.

As a reminder, the RavenDB Conference in Texas is coming up shortly, which would be an excellent opportunity to discuss RavenDB 4.0 and see what we have already done.


More posts in "The design of RavenDB 4.0" series:

  1. (26 May 2016) The client side
  2. (24 May 2016) Replication from server side
  3. (20 May 2016) Getting RavenDB running on Linux
  4. (18 May 2016) The cost of Load Document in indexing
  5. (16 May 2016) You can’t see the map/reduce from all the trees
  6. (12 May 2016) Separation of indexes and documents
  7. (10 May 2016) Voron has a one track mind
  8. (05 May 2016) Physically segregating collections
  9. (03 May 2016) Making Lucene reliable
  10. (28 Apr 2016) The implications of the blittable format
  11. (26 Apr 2016) Voron takes flight
  12. (22 Apr 2016) Over the wire protocol
  13. (20 Apr 2016) We already got stuff out there
  14. (18 Apr 2016) The general idea