The design of RavenDB 4.0: The general idea

time to read 5 min | 825 words

This series of posts is made up of excerpts from the design work we have done for RavenDB 4.0. It isn't everything, and some of it has been edited so it makes sense to people outside the company, without the benefit of the internal discussions. This is a high level design that outlines things broadly; lower level features and practical design decisions were left out and handled on a just-in-time basis. Don't get too hung up on what this contains, the idea is that we'll use this as the basis for the initial implementation, not as a set of concrete decisions.

The overall idea is that we have now been working on RavenDB for the past 8 years or so, and we have amassed over half a decade of experience running RavenDB nodes in production. We are running on literally millions of machines. That is not a typo, mind.

One of our customers has deployed to hundreds of thousands of machines, and we have multiple customers that have deployed to tens of thousands of machines. In fact, Rodrigo is going to talk about running RavenDB at Massive Scale at the RavenDB Conference in Texas in a few months. And obviously we'll be speaking about RavenDB 4.0 there quite heavily as well.


What this means is that we have quite a bit of experience running code in a variety of environments, from the "24/7 on-site operations team for extremely powerful servers" to "maybe sees a user once a day, mostly sitting on a crappy 8 year old machine running Windows XP in some warehouse, doing useful work in the dark."

That experience has taught us quite a bit. You might have heard me talk before about all the taxes that you need to pay to actually release a feature. Those taxes are paid because if you don't pay them, someone will give you a call at 2 AM, and the newest engineer on the rotation (somehow it always happens this way) is going to have to gulp a cup of coffee and try to figure out what is wrong with a server on the other side of the world, over the crappiest VPN connection that money can buy.

But what is more important, it also taught us what doesn't work. Over the past couple of years we have been analyzing the pain points in our software, the areas where we always seem to be finding bugs, typically the nasty, hard-to-figure-out, happens-only-in-production, call-us-at-2-AM kind of bugs. You have seen some of the post mortem analyses on this blog, where we look at a production problem and try to get to the root cause.

Behind the scenes, there has been a lot of work done to find the rootest cause (not a word, I know, but it perfectly expresses my point). Basically, trying to find the common weak points. In many cases, it came back to certain places in our code where we had to do some really complex things to get things to perform well.

Indexing is a huge part of RavenDB, obviously, and we had several issues there, but when it boiled down, we have a complex system that knows how to make sure that indexes are running as fast as we can possibly make them, and it involves prefetching, parallel processing, heuristics, adaptive behavior and a lot of other really cool stuff. And those things work, in almost all cases. But when they don't… the complexity of all those interlaced features means that we need to first figure out what happened, realize that there is some faulty assumption along the way, fix that, then verify that the fix didn't have any unintended consequences.

We have been doing that almost from day one, and by this point, the system is really smart, and it is able to take into account multiple resource types, unexpected behavior from hardware, unpredictable user behavior, surprising documents and some truly nasty indexes. So this is a very mature piece of code, with all that implies.

The RavenDB 4.0 design is based on brainstorming: "if we had to do it all over again, rewind the clock to 2008, and design RavenDB from the ground up with all the knowledge that we currently have, what would we do differently?"

This post is getting too long, and I have a lot to talk about, but I think that this sets the stage well enough. In the next post I'll deal with the fact that I haven't invented a time machine yet, and given that I would surely have told my past self if I had, it is a safe bet that I will not invent one in the future. Which is sad, and funny. My first thought about using a time machine is to improve software architecture, not to get stock tips.

More posts in "The design of RavenDB 4.0" series:

  1. (26 May 2016) The client side
  2. (24 May 2016) Replication from server side
  3. (20 May 2016) Getting RavenDB running on Linux
  4. (18 May 2016) The cost of Load Document in indexing
  5. (16 May 2016) You can’t see the map/reduce from all the trees
  6. (12 May 2016) Separation of indexes and documents
  7. (10 May 2016) Voron has a one track mind
  8. (05 May 2016) Physically segregating collections
  9. (03 May 2016) Making Lucene reliable
  10. (28 Apr 2016) The implications of the blittable format
  11. (26 Apr 2016) Voron takes flight
  12. (22 Apr 2016) Over the wire protocol
  13. (20 Apr 2016) We already got stuff out there
  14. (18 Apr 2016) The general idea