RavenDB’s Query Streaming: Unbounded results

By default, RavenDB makes it pretty hard to shoot yourself in the foot with unbounded result sets. Pretty much every single feature caps the number of results it will return, and that is a good thing.

However, there are times when you actually do want to get all, where all really means everything, damn it, all of them. That has been somewhat tough to do, because it requires you to do paging, and if you are doing that on a running system, incoming data can affect the export, causing you to get duplicates or to miss items.

We got several suggestions about how to handle that, but most of those were pretty complex. Instead, we decided to go with the following approach:

  • We will utilize our existing infrastructure to handle exports.
  • We don’t want to do that in multiple requests, because that means that state has to be kept on both client & server.
  • The model has to be a streaming model, because otherwise we might run out of memory when loading millions of records.
  • The stream you get is frozen: what you read (both indexes and data) is a snapshot of the data as it was when you started reading it.

And now, let me show you the API for that:

    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active);
        var enumerator = session.Advanced.Stream(query);
        int count = 0;
        while (enumerator.MoveNext())
        {
            Assert.IsType<User>(enumerator.Current.Document);
            count++;
        }

        Assert.Equal(1500, count);
    }

As you can see, we use standard LINQ to limit our search, and the new method is Stream(), which gives us an IEnumerator that scans through the data.

You can see that we are able to get more than RavenDB’s default limit of 1,024 items. There are also overloads for getting additional information about the query (total results, timestamps, etags, etc.).
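
For illustration, here is a sketch of what using such an overload looks like. I am assuming the Stream() overload that hands query metadata back through an out parameter; the exact property names (TotalResults, for example) may differ between client versions:

    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active);

        // Assumed overload: the out parameter carries metadata about
        // the query, such as the total number of matching results.
        QueryHeaderInformation queryHeaderInfo;
        var enumerator = session.Advanced.Stream(query, out queryHeaderInfo);

        Console.WriteLine("Expecting {0} results", queryHeaderInfo.TotalResults);

        while (enumerator.MoveNext())
        {
            // enumerator.Current also exposes the document key,
            // etag and metadata alongside the document itself.
            var user = enumerator.Current.Document;
        }
    }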

Note that the values returned from Stream() are not tracked by the session. And if we had a user #1231 that was deleted 2 ms after the export began, you would still get it, since the data is frozen at the time the export started.
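
To make the non-tracking behavior concrete, here is a sketch using the same query as above: modifying a streamed document and calling SaveChanges() persists nothing, because the session never registered the document.

    using (var session = store.OpenSession())
    {
        var enumerator = session.Advanced.Stream(
            session.Query<User>("Users/ByActive").Where(x => x.Active));

        while (enumerator.MoveNext())
        {
            // This change is never sent to the server - streamed
            // documents are not part of the session's unit of work.
            enumerator.Current.Document.Active = false;
        }

        session.SaveChanges(); // a no-op as far as the streamed users go
    }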

If you want, you can still specify paging, and all the other query options are available as well (transforming results, for example).
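
For example, paging composes with Stream() in the obvious way. A sketch, assuming Skip() and Take() on the query are honored by the streaming endpoint:

    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active)
                           .Skip(5000)   // standard paging still applies
                           .Take(2500);  // and may exceed the 1,024 default cap

        var enumerator = session.Advanced.Stream(query);
        while (enumerator.MoveNext())
        {
            // process enumerator.Current.Document as usual
        }
    }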