A map to Reviewing RavenDB
Afif asked me a very interesting question:
And then followed up with a list of detailed questions. I couldn’t be happier. Well, maybe if I found this place, but until then…
First question:
I am curious about how many Lucene concepts one needs to be aware of to grasp the RavenDB code base. For example, will not knowing Lucene concept X limit you from understanding module Y, or is concept X intertwined in such a manner that you simply won't get why the code is doing what it's doing, in dribs and drabs over many places?
In theory, you don’t need to know any Lucene to make use of RavenDB. We do a pretty good job of hiding it when you are using the client API. And we have a lot of code (tens of thousands of lines, easy) dedicated to making sure that you needn’t be aware of that. The Linq query providers, in particular, do a lot of that work. You can just think in C#, and we’ll take care of everything.
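For illustration, here is a minimal sketch of what that looks like from the client side (the Order class, the URL and the numbers are made up for the example):

using System.Linq;
using Raven.Client;
using Raven.Client.Document;

public class Order
{
    public string Company { get; set; }
    public decimal Total { get; set; }
}

public static class Example
{
    public static void Main()
    {
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            // Plain LINQ; the query provider translates this into a Lucene
            // query behind the scenes, no Lucene types ever show up here.
            var bigOrders = session.Query<Order>()
                                   .Where(o => o.Total > 1000)
                                   .ToList();
        }
    }
}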
In practice, however, you need to know some concepts if you want to be able to really make full use of RavenDB. Probably the most important and visible concept is the notion of the transformations we do from the index output to the Lucene entries, and the impact of analyzers on that. This is important for complex searching, full text search, etc. A lot of that happens in the AnonymousObjectToLuceneDocumentConverter class, and then you have the Lucene analyzers, which allow you to do full text searches. The Lucene query syntax is also good to know, because this is how we actually process queries. And understanding how the actual queries work can be helpful as well, but usually that isn't required.
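To make the analyzer point concrete, here is a minimal sketch of an index definition that opts a field into full text search (the Company class and the index name are hypothetical):

using System.Linq;
using Raven.Abstractions.Indexing;
using Raven.Client.Indexes;

public class Company
{
    public string Name { get; set; }
}

public class Companies_Search : AbstractIndexCreationTask<Company>
{
    public Companies_Search()
    {
        Map = companies => from company in companies
                           select new { company.Name };

        // Marking the field as analyzed runs its value through a Lucene
        // analyzer at indexing time, which is what enables full text
        // search; an unanalyzed field only matches on the exact term.
        Index(x => x.Name, FieldIndexing.Analyzed);
    }
}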
Some advanced features (Suggestions, More Like This, Facets, Dynamic Aggregation) are built directly on top of Lucene features, and understanding how they work is helpful, but not mandatory for making real use of them.
Second question:
Oren has often referred to RavenDB as an ACID store on top of which an eventually consistent indexing store is built. Is this a mere conceptual separation, or is it clearly manifested in code? If so, how tight is the coupling? Can you understand and learn one without caring much about the other?
Yes, there is quite a clear separation in the code between the two. We have ITransactionalStorage, which is how the ACID store is accessed. Note that we use the concept of a wrapper method, Batch(Action<IStorageActionsAccessor>), to actually handle transactions. In general, the DocumentDatabase class is responsible for coordinating a lot of that work, but it isn't actually doing most of it. Indexing is handled by the IndexStorage, which is mostly about maintaining the Lucene indexes properly. Then you have the IndexingExecuter, which is responsible for actually indexing all the documents. The eventual consistency aspect of RavenDB comes into play because we aren't processing the indexes in the same transaction as the writes; we are doing that in a background thread.
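In sketch form, the wrapper method pattern looks something like the following (a simplification; the real interfaces carry far more members than shown here):

using System;

public interface IStorageActionsAccessor
{
    // A stand-in member; the real accessor exposes the full set of
    // storage operations (documents, indexing work, tasks, etc.).
    void AddDocument(string key, string document);
}

public interface ITransactionalStorage
{
    // Everything inside the action runs in a single ACID transaction:
    // begin, invoke the action, commit, or roll back if it throws.
    void Batch(Action<IStorageActionsAccessor> action);
}

// Callers never manage the transaction themselves:
// storage.Batch(accessor => accessor.AddDocument("orders/1", json));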
In general, anything that comes from the transactional store is ACID, and anything that comes from the indexes is BASE.
Third question:
Often operational concerns add a lot of complexity (think disaster recovery, optimizations for low end machines). When looking at feature code, will I know that constructs a, b and c are intermingled here to satisfy operational feature y, so that I can easily separate the wheat from the chaff?
Wow, that is really hard to answer. It is also one of our recurring pain points. Because .NET is a managed language, it is much harder to manage some things with it. I would love to be able to just tell the indexing to use a size limited heap, instead of having to worry about it using too much memory. Because of that, we often have to do second order stuff and guesstimate.
A lot of the code for that is in auto tuners, like this one. And we have a lot of code in the indexing related to handling that, for example, catching an OutOfMemoryException and trying to handle it. Disaster recovery is mostly handled by the transactional storage, but we do have a lot of stuff that is meant to help us with indexing. Commit points in indexes are a case where we try to be prepared for crashing, and store enough information to recover more quickly. At startup we also check all indexes, to make sure that they aren't corrupted.
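The gist of that auto tuning, as a sketch (the names and numbers here are illustrative, not RavenDB's actual ones):

using System;

public class IndexBatchSizeTuner
{
    private int batchSize = 512;

    public int CurrentBatchSize
    {
        get { return batchSize; }
    }

    public void RecordSuccessfulBatch()
    {
        // Second order guesswork: we cannot cap the heap directly,
        // so we grow cautiously while things keep working.
        batchSize = Math.Min(batchSize * 2, 128 * 1024);
    }

    public void RecordOutOfMemory()
    {
        // An OutOfMemoryException is the signal to cut the batch size
        // drastically before retrying the indexing run.
        batchSize = Math.Max(batchSize / 4, 64);
    }
}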
A lot of other stuff is already there and exposed: the page size limits, the number of requests per session, etc. We also have a lot of configuration options that allow users on low end machines to instruct us how to behave, but we usually want that handled automatically. You can also see us taking the size & count of documents into account when loading them. Usually we try to move this out of the mainline code, but we can't always do so. So it is hard for me to point at a piece of feature code and say: this is there to support operational concern X.
That said, metrics are an operational concern, an important one, and you can see how we use them throughout the code. By trying to measure how certain things are going, we get a major benefit down the road when we need to figure out what is actually going on. Another aspect of operational concern is the debug endpoints, which expose a lot of information about RavenDB's internal behavior. This is also the case for debugging how an index is built, for which we have a dedicated endpoint as well.
In the replication code, you can see a lot of error handling, since we expect the other side to be down a lot, and handle that. A lot of thought and experience has gone into the replication code, and you can see a lot of that there: the notion of batches, back off strategies, startup notifications, etc.
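As an illustration of the back off idea (a sketch, not RavenDB's actual replication code): each failure to reach a destination pushes the next attempt further out, and a success resets the delay.

using System;

public class DestinationBackoff
{
    private TimeSpan delay = TimeSpan.Zero;

    public void RecordFailure()
    {
        // Double the delay on every failure, capped at five minutes.
        delay = delay == TimeSpan.Zero
            ? TimeSpan.FromSeconds(5)
            : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2,
                                          TimeSpan.FromMinutes(5).Ticks));
    }

    public void RecordSuccess()
    {
        delay = TimeSpan.Zero; // destination is back, replicate eagerly
    }

    public DateTime NextAttempt(DateTime now)
    {
        return now + delay;
    }
}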
One thing you'll notice that matches your question is that we have a lot of stuff like this:
using (LogManager.OpenMappedContext("database", Name ?? Constants.SystemDatabase))
This is there to provide us with context for the logs, and it is usually required when we do things in the background.
Fourth question:
Is there a consistent approach to handling side effects, e.g. when x is saved, update y? Pub/sub, task queue, something else? I am hoping that if I am made aware of these patterns, I will more easily be able to discover/decipher such interactions.
Yes, there is! It is an interesting one, too. The way it works, every transaction has a way of notifying the other pieces of RavenDB that something happened. This is done via the WorkContext's ShouldNotifyAboutWork method, which will eventually raise the work notification when the transaction is completed. The other side of that is the waiting for work, obviously.
That means that a lot of the code in RavenDB is actually sitting in a loop, like so:
while (context.DoWork)
{
    DoWork();
    var isIdle = context.WaitForWork();
    if (context.DoWork == false)
        break;
    if (isIdle)
        DoIdleWork();
}
There are variants, obviously, but this is how we handle everything, from indexing to replication.
Note that we don't bother to distinguish between work types. Work can be a new document, a deleted index, or indexing completing a run; any of those would generate a work notification. There is some code there to optimize for the case where we have a lot of work, and we won't take a lock if we can avoid it, but the concept is pretty simple.
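A stripped down sketch of the context that the loop above talks to might look like this (illustrative only; the real WorkContext does considerably more, including the avoid-the-lock fast path just mentioned):

using System.Threading;

public class WorkContext
{
    private readonly object waitForWork = new object();
    private volatile bool doWork = true;

    public bool DoWork
    {
        get { return doWork; }
    }

    // Called when a transaction completes and reported that it did work.
    public void NotifyAboutWork()
    {
        lock (waitForWork)
        {
            Monitor.PulseAll(waitForWork); // wake every waiting loop
        }
    }

    // Returns true if we timed out without any work arriving (idle).
    public bool WaitForWork()
    {
        lock (waitForWork)
        {
            return Monitor.Wait(waitForWork, 1000) == false;
        }
    }
}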
Fifth question:
Should I expect to see a lot of threading/synchronization code for fulfilling multi core processing concerns, or, true to .NET's more recent promises, are these concerns mostly addressed by good usage of Tasks, async await, reactive extensions, etc.?
Well, this is a tough one. In general, I want to answer no, we don't. But it is a complex answer. Client side, ever since 3.0, we are pretty much all async. All our API calls are purely running via the async stuff. We have support for reactive extensions, in the sense of our Changes() API, but we don't make any use of them internally. Server side, we have very little real async code, mostly because we found that it made it very hard to debug the system under certain conditions. Instead, we do a lot of producer consumer kind of stuff (see transaction merging), and we try to avoid doing any explicit multi threading. That said, however, we do have a lot of work done in parallel: indexing, both per index and inside each index. A lot of that work is done through our own parallel primitives, because we need to be aware of what is actually going on in the system.
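As a sketch of that producer/consumer shape (illustrative, not the actual transaction merger): many request threads enqueue writes, one dedicated consumer thread executes them, so the interesting work happens in a single, easily debuggable place.

using System.Collections.Concurrent;
using System.Threading.Tasks;

public class WriteMerger
{
    public class PendingWrite
    {
        public string Key;
        public string Document;
        public TaskCompletionSource<object> Done = new TaskCompletionSource<object>();
    }

    private readonly BlockingCollection<PendingWrite> pending =
        new BlockingCollection<PendingWrite>();

    public Task Enqueue(PendingWrite write)
    {
        pending.Add(write);     // producers: the request threads
        return write.Done.Task; // the caller awaits completion
    }

    public void ConsumerLoop()  // runs on one dedicated thread
    {
        foreach (var write in pending.GetConsumingEnumerable())
        {
            // ... execute (or merge several writes into one transaction) ...
            write.Done.SetResult(null);
        }
    }
}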
We have some places where we have to be aware of multi threaded concerns, sometimes in very funny ways (how we structure the on disk data to avoid concurrency issues) and sometimes explicitly (see how we handle transport state).
For the most part, we tend to have a dedicated manager thread per task, such as replication, indexing, etc. Then that actual work is done in parallel (for each index, destination, etc).
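In sketch form (illustrative names, with Parallel.ForEach standing in for the custom parallel primitives mentioned above):

using System.Threading;
using System.Threading.Tasks;

public class IndexingManager
{
    private readonly Thread managerThread;

    public IndexingManager()
    {
        managerThread = new Thread(Run) { IsBackground = true, Name = "Indexing" };
        managerThread.Start();
    }

    private void Run()
    {
        // One long lived manager thread for the indexing concern.
        while (true)
        {
            Thread.Sleep(1000); // stand-in for the WaitForWork() call

            var indexesWithWork = new[] { "Orders/ByCompany", "Companies/Search" };

            // The manager decides what to do; the doing happens in parallel.
            Parallel.ForEach(indexesWithWork, index => IndexDocumentsFor(index));
        }
    }

    private void IndexDocumentsFor(string index)
    {
        // ... index the pending documents for this particular index ...
    }
}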
I hope that this gives you some guidance, and I'll be very happy to answer any additional questions, either from Afif or from anyone else.
Comments
Thank you for your detailed answers to my questions earlier, Oren. I believe this can help immensely in starting a two way relationship with RavenDB for all of us consumers, wherein we actively seek to benefit from the fact that it's Open Source. Just two more questions, if I may, that came to mind recently.
First, would it be possible to outline the core abstractions in RavenDB, similar to the post below, where Jimmy outlines the different concepts that are at play in his app?
http://lostechies.com/jimmybogard/2014/03/20/successful-ioc-container-usage/
I believe this can help in understanding the core touch points and the different contracts that are at play, from a 5,000 foot overview. This doesn't have to be exhaustive, as I can imagine that with a database there can be too many to enumerate.
Second, I was also wondering: when you have new starters on the RavenDB code base, is there a pattern to how you introduce the code base to them, or to the questions they ask on their first trip into the wild? Perhaps sharing some of that wisdom will give us a head start, or perhaps even some insight into the approach a new member would take to get productive on the code base.
Afif, the major concepts you'll see in the RavenDB codebase are the notions of Controllers, Action Classes and Background Tasks. Controllers represent the edges of the system; they sometimes do the actual work, but in many cases they delegate to classes that actually do it. For example, the IndexController.IndexGet action will select what you want to do with the index. If you want to see the stats for an index, that sits directly in the controller, but if you want to actually execute a query, that goes to the QueryActions.Query method, which in turn ends up being implemented by DatabaseQueryOperation.
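A schematic of that layering, with simplified, made-up signatures (the real classes obviously carry a lot more):

using System;

public class IndexController
{
    private readonly QueryActions queryActions = new QueryActions();

    public object IndexGet(string indexName, string query)
    {
        // Edge of the system: choose the behavior, delegate the work.
        return queryActions.Query(indexName, query);
    }
}

public class QueryActions
{
    public object Query(string indexName, string query)
    {
        using (var operation = new DatabaseQueryOperation(indexName, query))
        {
            return operation.Execute();
        }
    }
}

public class DatabaseQueryOperation : IDisposable
{
    public DatabaseQueryOperation(string indexName, string query) { }

    public object Execute()
    {
        return null; // where the actual query execution would happen
    }

    public void Dispose() { }
}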
The architecture used to be Http Endpoints (responders, at the time) and the Database (which was one big class). Whenever that was too complex to sustain, we broke out a sub class for just that particular behavior.
And then we have the concept of the background tasks themselves, which is how we handle things like replication, indexing, etc.
A major concept that is widely used is the notion of bundle triggers. Those allow us to demarcate parts of the system as extension points and then implement functionality that way.
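The trigger idea in miniature (illustrative, with simplified signatures): the core pipeline calls every registered trigger at a demarcated point, and a bundle implements functionality by subclassing.

using System.Collections.Generic;

public abstract class AbstractPutTrigger
{
    // Extension point: invoked before a document is stored.
    public virtual void OnPut(string key, string document) { }
}

public class AuditTrigger : AbstractPutTrigger
{
    public override void OnPut(string key, string document)
    {
        // A bundle adds behavior here, e.g. stamping audit metadata,
        // without the core pipeline knowing anything about auditing.
    }
}

public class DocumentPutPipeline
{
    private readonly List<AbstractPutTrigger> triggers = new List<AbstractPutTrigger>();

    public void Put(string key, string document)
    {
        foreach (var trigger in triggers)
            trigger.OnPut(key, document);

        // ... actually store the document ...
    }
}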
When we have new people come in, we usually assign them issues at various places in the codebase, and they learn through that. We also have a rotating support engineer position, which means that someone is responsible for watching the mailing list and support channels, and should be able to answer them. That usually gives people better familiarity with the common areas where we work. That is in addition to the actual training that we do for new guys, of course.