time to read 3 min | 530 words

This question on Stack Overflow is a fairly common one. Here is the data:

[Image: the raw documents – a car, a person, and a unit]
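Plausibly, the documents look something like this – the exact shape is an assumption, reverse-engineered from the desired output below (the Unit document's own fields aren't recoverable, so it is omitted):

{
   Id: "cars/1",
   PersonId: "people/1235",
   UnitId: "units/4321",
   Make: "Toyota",
   Model: "Prius"
}
{
   Id: "people/1235",
   FirstName: "Ayende",
   LastName: "Rahien",
   Address: "Komba 10, Hadera"
}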

And the question was about how to get RavenDB to create an index that would have the following results:

{
   CarId: "cars/1",
   PersonId: "people/1235",
   UnitId: "units/4321",
   Make: "Toyota",
   Model: "Prius",
   FirstName: "Ayende",
   LastName: "Rahien",
   Address: "Komba 10, Hadera"
}
{
   CarId: "cars/2",
   PersonId: "people/1236",
   UnitId: "units/4321",
   Make: "Toyota",
   Model: "4runner",
   FirstName: "test",
   LastName: "test",
   Address: "blah blah"
}
 
Same unit, different person, owning a different car.

Now, if you try really hard, you can probably get something like that, but that is the wrong way to go about this in RavenDB.

Instead, we can write the following index:

[Image: the multi map index definition]
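A minimal sketch of what such an index can look like – the index name, the entity classes, and their fields (Unit.Name in particular) are assumptions:

public class CarsPeopleUnits_Search : AbstractMultiMapIndexCreationTask<CarsPeopleUnits_Search.Result>
{
    public class Result
    {
        public object[] Content { get; set; }
    }

    public CarsPeopleUnits_Search()
    {
        // Each map emits the same output shape, so all three document
        // types end up in a single index.
        AddMap<Car>(cars =>
            from car in cars
            select new { Content = new object[] { car.Make, car.Model } });

        AddMap<Person>(people =>
            from person in people
            select new { Content = new object[] { person.FirstName, person.LastName, person.Address } });

        AddMap<Unit>(units =>
            from unit in units
            select new { Content = new object[] { unit.Name } });

        Index(x => x.Content, FieldIndexing.Analyzed);
    }
}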

Note that this is a simple multi map index, not a multi map/reduce index. There is no need for that here.

This index can return one of three types.

  • Car – just show the car to the user.
  • Person – now that we have a person, we have the id, and we can query for the cars belonging to that person (see the sketch below).
  • Unit – now that we have a unit, we have the id, and we can query for the cars belonging to that unit (see the sketch below).
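Here is a minimal sketch of such a follow-up query, assuming a session and the person / unit we just got back from the search, and assuming that Car documents hold PersonId and UnitId properties (as the desired output above suggests):

// Assumed: Car documents carry PersonId and UnitId, per the output above
var carsForPerson = session.Query<Car>()
    .Where(car => car.PersonId == person.Id)
    .ToList();

// And the same approach for a unit:
var carsForUnit = session.Query<Car>()
    .Where(car => car.UnitId == unit.Id)
    .ToList();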

This method means that we have to generate an additional query in some cases, but it has a lot of advantages. It is simple. It requires very little work from both client and server, and it doesn’t suffer from the usual issues that you run into when you attempt to query over multiple disjoint data sets.

Now, the bad thing about this is that it won’t allow me to query on cross-entity values, so it would be hard for me to query for the cars in Hadera owned by Ayende. But in most cases, that isn’t really a requirement. We just want to be able to search by any one of those, not all of them.

time to read 3 min | 540 words

I have been blogging for a long time now, and I am quite comfortable in expressing myself, but I was still blown away by this post to the RavenDB mailing list. Mostly because this thread sums up a lot of the core points that led me to design RavenDB the way it is today.

Rasmus Schultz has been able to put a lot of the thought processes behind the RavenDB design into words.

Back when I took my education in systems development, basically, I was taught to build aggregates as large, as complete and as connected as possible. But that was 14 years ago, and I'm starting to think, what they taught me back then was based on the kind of thinking that works for single-user, typically desktop applications, where the entire model was assumed to be in-memory, and therefore had to be traversable, since there was no "engine" you could go back to and ask for another piece of the model.

I can see now why that doesn't make sense for concurrent applications with large models persisted in the background. It just never occurred to me, and looked extremely wrong to me, because that's not how I was taught to think.

Yes. That is the exact problem that I see people run into over and over. They create highly connected object models, without regard to how they are persisted, and then they run into problems using them. And the assumption that everything is equally cheap to read, as if it were all in memory, turns out to be hugely expensive.

Furthermore, I'm starting to see why NHibernate doesn't really work well for me. So here's the main thing that's starting to dawn on me, and please confirm or correct me on this:

It seems that the idea behind NH is to configure the expected data-access strategies for the model itself. You write configuration-files that define the expected data-access strategies, but potentially, you're doing this based on assumptions about how you might access the data in this or that scenario.

The problem I'm starting to see is that you're defining these assumptions statically - and while it is possible to deviate from these defined patterns, it's easy to think that once you've defined your access strategies, you're "done", and the model "just works" and you can focus on writing business logic, which too frequently turns out to be untrue in practice.

To be fair, you can specify those things in place, with full context. And I have been recommending doing just that for years, but yeah, that is a very common issue.

This contrasts with RavenDB, where you formally define the access strategies for specific scenarios - rather than for the model itself. And of course the same access strategy may work in different scenarios, but you're not tempted to assume that a single access strategy is going to work for all scenarios.

You're encouraged to think and make choices about what you're accessing and updating in each scenario, rather than just defining one overriding strategy and charging ahead blindly on the assumption that it'll always just work, or always perform well, or always make updates that are sufficiently small to not cause concurrency problems.

Am I catching on?

Precisely.

time to read 4 min | 653 words

RavenDB supports many types of searches, and in this case, I want to show something that belongs in the cool parts of the pile, but also in the “you probably don’t really want to do this” category.

First, let me explain why this is cool, then we will talk about why you probably don’t want to do that (and finally, about scenarios where you actually do want this).

Here is an index that will allow you to search over all of the values of all of the properties in the user entity:

public class Users_AllProperties : AbstractIndexCreationTask<User, Users_AllProperties.Result>
{
    public class Result
    {
        public string Query { get; set; }
    }

    public Users_AllProperties()
    {
        Map = users =>
              from user in users
              select new
              {
                  // Pull every property value on the document into a single searchable field
                  Query = AsDocument(user).Select(x => x.Value)
              };
        Index(x => x.Query, FieldIndexing.Analyzed);
    }
}

This can then be easily queried for things like:

s.Query<Users_AllProperties.Result, Users_AllProperties>()
    .Where(x => x.Query == "Ayende") // search first name
    .As<User>()
    .ToList();


s.Query<Users_AllProperties.Result, Users_AllProperties>()
    .Where(x => x.Query == "Rahien") // search last name
    .As<User>()
    .ToList();

The fun part is that because we index all of the property values into the Query field, we can then easily query for any one of those values without any trouble.

The problem is that this is also quite wasteful and likely to lead to bad results down the road. Why?

For two major reasons. First, because this is going to index everything, which would result in a larger index, more IO, etc. The second reason is that it is going to lead to bad results, because you are now searching over everything, including the “last login date” and the “password hint”. That means that your search result relevancy is going to be poor.

So why would you ever want to do something like that if it is bad?

Well, there are a few scenarios where this is applicable. You need to do this if you want to be able to search over completely or mostly dynamic entities. And you want to do this if you have entities that are generated specifically for the purpose of being searched.

Both cases are fairly rare (the first case is usually covered by dynamic indexing, anyway), so I wanted to point this out, and also point out that it is usually far better to just specify the fields that actually matter to you.
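For contrast, a minimal sketch of the usual approach – indexing only the fields you care about (the index name and field choice here are illustrative):

public class Users_ByName : AbstractIndexCreationTask<User>
{
    public Users_ByName()
    {
        // Index only the fields we actually want to search on.
        Map = users => from user in users
                       select new { user.FirstName, user.LastName };
    }
}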

time to read 4 min | 755 words

The final aspect of RavenDB’s x7 jump in indexing performance is the fact that we made it freakishly smart.

During standard operation, most indexes only update when new information comes in; we are usually talking about a small number of documents for every indexing run. The problem is what happens when you have a sudden outpouring of documents into RavenDB? For example, during a nightly ETL batch, or just if you suddenly have a flood of users doing write operations.

The problem here is that we actually have to balance a lot of variables at the same time:

  • The number of documents that we have to index*.
  • The current memory utilization**.
  • How many cores do I have available to do the indexing work with?
  • How much time do I have to do this?

Basically, the idea goes like this: if I have a small batch size, I am able to index more quickly, ensuring that we have fresher results. If I have a big batch size, I am able to index more documents per run, and my overall indexing time goes down.

There is a non-trivial cost associated with every indexing run, so reducing the number of indexing runs is good, but the more documents I shove into a single run, the more memory I will use, and the more time it will take before the results are visible to the users.

* It is non-trivial because there is no easy way for us to even know how many documents we have left to index (finding out is costly).

** Memory utilization is hard to figure out in a managed world. I don’t actually have a way to know how much memory I am using for indexing and how much for other stuff, and there is no real way to say “free the memory from the last indexing run”, or even to estimate how much memory that took.

What we have decided to do is to start from a very small (low hundreds) indexing batch size, and see what is actually going on live. If we see that we have more documents to index than the current batch size, we will slowly double the size of the batch. Slowly, because bigger batches require more memory, and we also have to take into account current utilization, memory usage, and a bunch of other factors as well. We can also go the other way, reducing the indexing batch size on demand based on how much work we have to do right now.
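To make the idea concrete, here is a minimal sketch of that feedback loop – the thresholds, names, and the memory pressure check are illustrative assumptions, not RavenDB’s actual code:

using System;

public class IndexingBatchSizeTuner
{
    private const int InitialBatchSize = 256;    // start small (low hundreds)
    private const int MaxBatchSize = 64 * 1024;  // hard upper limit (see the next paragraph)

    public int CurrentBatchSize { get; private set; } = InitialBatchSize;

    public void RecordIndexingRun(int docsStillPending, bool memoryPressure)
    {
        if (memoryPressure)
        {
            // Bigger batches require more memory; back off under pressure.
            CurrentBatchSize = Math.Max(InitialBatchSize, CurrentBatchSize / 2);
        }
        else if (docsStillPending > CurrentBatchSize)
        {
            // We are falling behind; slowly double the batch size.
            CurrentBatchSize = Math.Min(MaxBatchSize, CurrentBatchSize * 2);
        }
        else
        {
            // Caught up; shrink back toward small, fresh batches.
            CurrentBatchSize = Math.Max(InitialBatchSize, CurrentBatchSize / 2);
        }
    }
}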

We also provide an upper limit, because at some point it makes more sense to just do a big batch and make the indexing results visible than to try to do everything all at once.

The fun part in all of this is that once we found the appropriate algorithm, RavenDB will automatically adjust itself based on real production load. If you have a low update rate, it will favor small indexing batches and immediately execute indexing on the new documents. However, if you suddenly have a spike in traffic and the update rate goes up, RavenDB will adjust the indexing batch size so it will be able to keep up with your rate.

We have done some (read: a huge amount of) testing with regards to this new optimization, and it turns out that under a slow update frequency, we are seeing an average of 15–25 ms between a document update and it showing up in the indexes. That is pretty good, but what happens when we have data just pouring in?

We tested this with 3 million documents and 3 indexes. It turns out that under this scenario, where we are trying to shove data into RavenDB as fast as it can accept it, we do see an increase in index latency. Under those conditions, latency rose all the way to 1.5 seconds.

This is actually something that I am very happy about, because we were able to automatically adjust to the changing conditions, and were still able to index things at a reasonable rate (note that under this scenario, the batch size was usually 8–16 thousand documents, vs. the 128–256 that it is normally).

Because we were able to adjust the batch size on the fly, we could handle sustained writes at this rate with no interruption in service and no real need to think about it from the user's perspective. Exactly what the RavenDB philosophy calls for.

time to read 2 min | 327 words

As I noted in my previous post, we have done major optimizations for RavenDB. One of the areas where we improved the performance was reading the documents from the disk for indexing.

In Pseudo Code, it looks like this:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)
  

As it turned out, we had a major optimization option here, because of the way the data is actually structured on disk. In simple terms, we have an on-disk index that lists the documents in the order in which they were updated, and then we have the actual documents themselves, which may be anywhere on the disk.

Instead of loading the documents in the order in which they were modified, we decided to try something different. We first query the on-disk index for the information we need to find each document on disk, then we sort those entries based on the optimal access pattern, to reduce disk movement and ensure that our reads are as sequential as possible. Then we take those results in memory and sort them based on their last update time again.
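Here is a minimal sketch of that read pattern – the types and names are illustrative, not RavenDB’s internals:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative types, standing in for the real storage structures.
public record DocEntry(long Etag, long PositionOnDisk);
public record Document(long Etag, byte[] Data);

public static class BatchReader
{
    public static List<Document> LoadBatch(
        IEnumerable<DocEntry> entriesInEtagOrder,  // cheap to read from the on-disk index
        Func<long, Document> readDocumentAt)       // the expensive disk read
    {
        return entriesInEtagOrder
            .OrderBy(e => e.PositionOnDisk)               // make the reads as sequential as possible
            .Select(e => readDocumentAt(e.PositionOnDisk))
            .OrderBy(doc => doc.Etag)                     // back to last-update order for indexing
            .ToList();
    }
}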

This seems like a perfectly obvious thing to do, assuming that you are aware of such things, but it is actually something that is very easy not to notice. The end result is quite promising, and it contributed to the more than 7x improvement in indexing performance.

But surprisingly, it wasn’t the major factor; I’ll discuss a huge perf boost in this area tomorrow.

time to read 2 min | 348 words

Two years after the launch of RavenDB 1.0 (preceded by several years of working on 1.0, of course), we are now starting to actually plan and work on RavenDB 1.2.

You can read the planned roadmap here. RavenDB 1.2 is a big release, for several reasons.

  • We are going to break RavenDB into several distinct editions, from RavenDB Basic, suitable for small apps, through RavenDB Standard, which is the current version, all the way up to RavenDB Enterprise, which is going to get some awesome features (Windows clustering, index encryption, etc.). We are also going to have plans for ISVs, which will allow them royalty-free distribution of RavenDB to their customers.
  • We are going to update our pricing structure. You’ll hear more about this when we have finalized pricing.

Because I am well aware of the possible questions, I suggest reading the mailing list thread discussing both editions and pricing.

I will repeat again that we haven’t yet made final pricing decisions, so don’t take the numbers thrown around in those threads as gospel, but they are pretty close to what we will have.

This is the boring commercial stuff, but I am much more interested in talking about the new RavenDB roadmap. In fact, you can actually read all of our plans here. The major components for RavenDB 1.2 are:

  • Better integration with C# 5.0 – much better support for async in general: async replication, async sharding, etc.
  • Enterprise level features – Windows Clustering, Full Database Encryption, Indexing Priorities, Compression, etc.
  • Installer and server console - so you can manage your RavenDB installation more easily.
  • Better Admin support – scheduled backups, S3 Backups, live restores, etc.
  • Internalizing commonly used bundles – you shouldn’t have to take additional steps to make use of common functionality.

There is other stuff, of course, but those are the main pillars.

As mentioned, you can read all of that yourself, and we would welcome feedback on our current plans and suggestions for the new version.

time to read 2 min | 336 words

One of the major dangers in doing perf work is that you have a scenario, and you optimize the hell out of that scenario. It is actually pretty easy to do without even noticing it. The problem is that when you do things like that, you are likely to be optimizing a single scenario to perform really well, but you are hurting the overall system performance.

In this example, we had moved heaven and earth to make sure that we were indexing things as fast as possible, and we tested with 3 indexes on a 4-core machine. As it turned out, we had actually improved things, for that particular scenario.

Using the same test case on a single-core machine was suddenly far more heavyweight, because we were pushing a lot of work at the same time. More than the machine could process. The end result was that it actually got there, but much more slowly than if we had run things sequentially.

Of course, I am giving you the outliers, but those are good indicators of what we found. Initially, we thought that we could resolve this by using the TPL’s MaxDegreeOfParallelism, but it turned out to be more complex than that. We have IO-bound and CPU-bound tasks that we need to execute, and trying to run the IO-heavy tasks this way would actually cause issues in this scenario.

We had to manually throttle things ourselves, both to ensure a limited amount of parallel work and because we have a lot more information about the actual tasks than the TPL has. We can schedule them in a way that is far more efficient, because we can tell what is actually going on.
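As a rough sketch of what manual throttling can look like – the slot counts and names are assumptions, not RavenDB’s actual scheduler:

using System;
using System.Threading;
using System.Threading.Tasks;

public class WorkThrottler
{
    // Separate, manually chosen limits for CPU-bound and IO-bound work,
    // instead of a single MaxDegreeOfParallelism for everything.
    private readonly SemaphoreSlim cpuSlots = new SemaphoreSlim(Environment.ProcessorCount);
    private readonly SemaphoreSlim ioSlots = new SemaphoreSlim(4); // assumed IO limit

    public async Task RunCpuBoundAsync(Action work)
    {
        await cpuSlots.WaitAsync();
        try { await Task.Run(work); }
        finally { cpuSlots.Release(); }
    }

    public async Task RunIoBoundAsync(Func<Task> work)
    {
        await ioSlots.WaitAsync();
        try { await work(); }
        finally { ioSlots.Release(); }
    }
}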

The end result is that we are actually using less parallelism, overall, but in a more efficient manner.

In my next post, I’ll discuss the auto batch tuning support, which allows us to do some really amazing things from the point of view of system performance.

time to read 2 min | 258 words

One of the things that we do during the indexing process in RavenDB is apply triggers and decide if, and how, a document will be indexed. The actual process is a bit more involved, because we have to do additional things (like figure out which indexes have already indexed those particular documents).

At any rate, the interesting thing is that this is a process which is pretty basic:

for doc in docs:
    matchingIndexes = FindIndexesFor(doc)
    if matchingIndexes.Count > 0:
       doc = ExecuteTriggers(doc) 
       if doc != null:
          yield doc

The interesting thing about this is that it is a set of operations that works on a single document at a time, and the result is the modified documents.

We were able to gain a significant perf boost by simply moving to a Parallel.ForEach call. This seems simple enough, right? Parallelize the work, get better performance.
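A minimal sketch of that move, with findMatchingIndexCount and executeTriggers standing in as assumed delegates for the real RavenDB code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ParallelDocumentFilter
{
    public static ICollection<TDoc> Run<TDoc>(
        IEnumerable<TDoc> docs,
        Func<TDoc, int> findMatchingIndexCount,
        Func<TDoc, TDoc> executeTriggers) where TDoc : class
    {
        var results = new ConcurrentBag<TDoc>();
        // Each document is processed independently, so the body parallelizes cleanly.
        Parallel.ForEach(docs, doc =>
        {
            if (findMatchingIndexCount(doc) == 0)
                return;
            var processed = executeTriggers(doc);
            if (processed != null)
                results.Add(processed);
        });
        return results;
    }
}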

Except that there are issues with this as well, which I’ll touch on in my next post.

time to read 3 min | 422 words

The actual process used by RavenDB to index documents is a fairly complex one. In order to understand what exactly happens, I decided to break it apart into pseudo code.

It looks something like this:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)
  
  filtered_docs = execute_read_filters(docs_to_index)
  
  indexing_work = []
  
  for index in stale:
    
    index_docs = select_matching_docs(index, filtered_docs)
    
    if index_docs.empty:
      set_indexed(index, lastIndexedEtag)
    else:
      indexing_work.add(index, index_docs)
      
  for work in indexing_work:
  
     work.index(work.index_docs)

And now let me show you the areas in which we did some perf work, marked with comments below:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)
  # ^ perf work: sorted, sequential document reads + auto tuned batch_size
  
  filtered_docs = execute_read_filters(docs_to_index)
  # ^ perf work: parallelized with Parallel.ForEach
  
  indexing_work = []
  
  for index in stale:
    
    index_docs = select_matching_docs(index, filtered_docs)
    
    if index_docs.empty:
      set_indexed(index, lastIndexedEtag)
    else:
      indexing_work.add(index, index_docs)
      
  for work in indexing_work:
  
     work.index(work.index_docs)
     # ^ perf work: executed in parallel, with manual throttling

All of which gives us a major boost in system performance. I’ll discuss each part of that work in detail, don’t worry. ;)
