The design of RavenDB 4.0: We already got stuff out there


I mentioned in my previous post that one of the things that pushed the design of RavenDB 4.0 is retrospective analysis. In particular: "if we had to do it all over again, rewind the clock to 2008, and design RavenDB from the ground up with all the knowledge that we currently have, what would we do differently?" The problem with this approach is that we have shipping software, software that customers have been using for over six years, so it isn't as if we can start from scratch; and as tempting as that would be, it is almost always the wrong decision. RavenDB 4.0 isn't a rewrite, it is a major architectural change, driven by our own experience of what is painful.

That is a good guideline, but what does this mean? We had a few rounds of discussion around this, and we ended up with the following decisions.

As a major version release, we aren't bound by backward compatibility, and we are going to take full advantage of that. That means that a 4.0 server cannot be accessed by a 3.0 client, and a 3.0 server can't replicate to a 4.0 server. On the wire protocol level, we have taken the chance to fix some long-standing issues, but I'll have another post about that. Internally, an upgrade from 3.0 to 4.0 will probably be done by a dedicated migration tool, rather than the "in place" upgrade procedure we previously had. The reason for those decisions is that this gives us a lot of flexibility in fixing our implementation and changing how we do things.

At the same time, from an external point of view, users of RavenDB should see as little change as we can get away with. Ideally, the process of upgrading an existing application to RavenDB 4.0 should be:

  • Update the NuGet package
  • Recompile

And that would be it. In practice, I'm sure that we'll have edge cases and things that will require a bit more work from the user, but the goal is that, as far as users are concerned, upgrading doesn't require a lot of extra work.
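The two steps above could look something like the following from the command line. This is only a sketch of the intended experience, not a documented procedure: the solution name is hypothetical, and the exact 4.0 package id may differ from the illustrative `RavenDB.Client` used here.

```shell
# Hypothetical upgrade path; "MyApp.sln" and the package id are illustrative.

# 1. Update the RavenDB client package to the new major version.
nuget update MyApp.sln -Id RavenDB.Client

# 2. Recompile against the updated client.
msbuild MyApp.sln /t:Rebuild
```

Whether this really is all it takes depends on how few breaking client-API changes survive into the release, which is exactly the constraint described above.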

We’ll probably have a lot of discussions around what exactly we can change, and what we must absolutely keep. In some of the discussions we have had with our customers, we have already learned that running on 32 bits is a hard requirement for some of them, which means that RavenDB 4.0 will support it, even if that makes our life quite a bit more complex.

As a reminder, we have the RavenDB Conference in Texas in a few months, which would be an excellent opportunity to learn about RavenDB 4.0 and the direction in which we are going.


More posts in "The design of RavenDB 4.0" series:

  1. (26 May 2016) The client side
  2. (24 May 2016) Replication from server side
  3. (20 May 2016) Getting RavenDB running on Linux
  4. (18 May 2016) The cost of Load Document in indexing
  5. (16 May 2016) You can’t see the map/reduce from all the trees
  6. (12 May 2016) Separation of indexes and documents
  7. (10 May 2016) Voron has a one track mind
  8. (05 May 2016) Physically segregating collections
  9. (03 May 2016) Making Lucene reliable
  10. (28 Apr 2016) The implications of the blittable format
  11. (26 Apr 2016) Voron takes flight
  12. (22 Apr 2016) Over the wire protocol
  13. (20 Apr 2016) We already got stuff out there
  14. (18 Apr 2016) The general idea