Ayende @ Rahien


RavenDB’s Querying Streaming: Unbounded results

By default, RavenDB makes it pretty hard to shoot yourself in the foot with unbounded result sets. Pretty much every single feature has rate limits on it, and that is a good thing.

However, there are times when you actually do want to get all, where all really means everything, damn you, really all of them. That has been somewhat tough to do, because it requires you to do paging, and if you are trying to do that on a running system, incoming data can impact the export, causing you to get duplicates or to miss items.

We got several suggestions about how to handle that, but most of those were pretty complex. Instead, we decided to go with the following approach:

  • We will utilize our existing infrastructure to handle exports.
  • We don’t want to do that in multiple requests, because that means that state has to be kept on both client & server.
  • The model has to be streaming based, because otherwise we might get memory errors if you are trying to load millions of records.
  • The stream you get is frozen: what you read (both indexes and data) is a snapshot of the data as it was when you started reading it.

And now, let me show you the API for that:

    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active);
        var enumerator = session.Advanced.Stream(query);
        int count = 0;
        while (enumerator.MoveNext())
        {
            Assert.IsType<User>(enumerator.Current.Document);
            count++;
        }

        Assert.Equal(1500, count);
    }

As you can see, we use standard LINQ to limit our search, and the new method is Stream(), which gives us an IEnumerator that will scan through the data.

You can see that we are able to get more than the default 1,024-item limit from RavenDB. There are overloads for getting additional information about the query as well (total results, timestamps, etags, etc.).
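The "additional information" overload can be sketched roughly like this; `QueryHeaderInformation` and its property names (`Index`, `TotalResults`, `IsStale`) are assumptions based on the 2.5 client and may differ in the build you are running:

    // Sketch: streaming while also reading query statistics.
    // The out-parameter overload and the stats property names are
    // assumptions; check them against your client version.
    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active);

        QueryHeaderInformation stats;
        using (var enumerator = session.Advanced.Stream(query, out stats))
        {
            Console.WriteLine("Index: {0}, total results: {1}, stale: {2}",
                              stats.Index, stats.TotalResults, stats.IsStale);

            while (enumerator.MoveNext())
            {
                Process(enumerator.Current.Document); // Process is a placeholder
            }
        }
    }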

Note that the values returned from Stream() are not tracked by the session. And if we had a user #1231 that was deleted 2 ms after the export began, you would still get it, since the data is frozen at the time the export started.

You can also specify paging if you want, and all the other query options are available as well (transform results, for example).
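A hypothetical paged stream might look like the following; the Skip/Take values are made up, and `Process` is a placeholder for whatever you do with each record:

    // Sketch: paging still composes with streaming when you ask for it.
    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive")
                           .Where(x => x.Active)
                           .Skip(1000)  // start after the first thousand
                           .Take(500);  // and stream at most five hundred

        using (var enumerator = session.Advanced.Stream(query))
        {
            while (enumerator.MoveNext())
                Process(enumerator.Current.Document);
        }
    }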


Posted By: Ayende Rahien


Comments

03/12/2013 01:34 PM by njy

Pretty darn good approach, very well done

03/12/2013 03:30 PM by Brian Vallelunga

Thanks. This will be very useful for the times when people want me to "export everything to a spreadsheet."
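Brian's spreadsheet scenario is exactly the shape this API is for: push each record out as it arrives, never holding more than one in memory. A minimal sketch, assuming a hypothetical `User` type with `Id`, `Name`, and `Active` properties:

    // Sketch: streaming an "export everything" request straight to CSV.
    // The User properties and index name are assumptions from the post.
    using (var session = store.OpenSession())
    using (var writer = new StreamWriter("users.csv"))
    {
        writer.WriteLine("Id,Name,Active");

        var query = session.Query<User>("Users/ByActive");
        using (var enumerator = session.Advanced.Stream(query))
        {
            while (enumerator.MoveNext())
            {
                var user = enumerator.Current.Document;
                writer.WriteLine("{0},{1},{2}", user.Id, user.Name, user.Active);
            }
        }
    }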

03/12/2013 04:04 PM by Yuriy

Just out of curiosity - why not return IEnumerable?

03/12/2013 04:54 PM by Dan

Will this lock the database or cause writes to be queued as long as the export is running?

03/12/2013 06:07 PM by Ayende Rahien

Yuriy, Because it is very easy to do a .ToList() on IEnumerable, which will force the entire thing to go into memory.

03/12/2013 06:07 PM by Ayende Rahien

Dan, This will NOT lock the database in any way.

03/12/2013 10:05 PM by Daniel Lang

I find it kind of sad that I can now replace my TakeAllAsList() extension method with another one that runs an iterator to populate a list. I would have preferred to have IEnumerable on which I can just call ToList().

There are power users who know what they're doing and who are just fine having all of the documents in memory. Now that we aren't limited by the size of http messages, I think we should really have a simple IEnumerable and let the users decide what to do with them.

03/13/2013 05:30 AM by Mouhong

I agree with Daniel. If one doesn't know what he is doing, he won't use the Stream API; he will use Query<>().ToList(), and that is safe by default. If one knows there is a Stream() API, he must know what he wants to do, so there is no need to limit him so much.

03/13/2013 06:30 AM by Ayende Rahien

Daniel, One of the things that we are trying to do in the API is to have no easy pitfalls that will kill you. This API is really easy if you want to do streaming, as in push them to a file / network / process all of them one at a time. It is slightly hard if you want to get them all in memory. This is by design.

03/13/2013 06:31 AM by Ayende Rahien

Mouhong, Or, what is likely to happen. A user will search for this. And change:

session.Query().ToList()

To:

session.Advanced.Stream(session.Query()).ToList(); /// Fix RavenDB Bug

without ever doing any actual thinking whatsoever.

03/13/2013 06:52 AM by Mouhong

@Ayende, If he use this to fix the "bug": session.Advanced.Stream(session.Query()).ToList(); /// Fix RavenDB Bug

That's his problem; he should learn from the server crashing (if it crashed). There's no ideal solution. Just like Daniel said, if he found that he can't use Stream().ToList(), he would search Google and finally find Daniel's TakeAllAsList(). So I think limiting the Stream API will mostly make "power users" feel inconvenienced. It can't stop newbies from using TakeAllAsList().

03/13/2013 06:56 AM by Ayende Rahien

Mouhong, In theory, that is a very true statement. In practice, it isn't really true. Sure, it would be the user's fault, but the problem would typically still be blamed on RavenDB. If he goes and uses TakeAllAsList(), that is not just blindly calling ToList() without noticing the implications; it is an explicit decision. We like to think of our API as a safety net, not a straitjacket.

But you can go and check the number of times Linq delayed eval caused problems, and the number of times a ToList() caused even more problems...

03/13/2013 07:13 AM by Mouhong

@Ayende, Yeah, you're right. But anyway, I can accept that Stream() returns an enumerator. In my usage, I often don't need to do in-memory filtering (it can be done before calling Stream) or processing in this case. If I have a way to retrieve all records, that's already very good and enough. :p

03/13/2013 01:44 PM by Daniel Schilling

Power users will realize it is trivial to use the yield keyword to turn this IEnumerator into an IEnumerable, if that's what they need.
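A minimal sketch of the wrapper Daniel describes; `StreamResult<T>` is assumed to be the item type the streaming enumerator yields:

    // Sketch: wrap the streaming IEnumerator in an IEnumerable via yield.
    // The caller now explicitly opts in to whatever ToList() does with it.
    public static IEnumerable<StreamResult<T>> AsEnumerable<T>(
        this IEnumerator<StreamResult<T>> enumerator)
    {
        using (enumerator) // dispose even if the caller stops early
        {
            while (enumerator.MoveNext())
                yield return enumerator.Current;
        }
    }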

03/13/2013 10:25 PM by Lev Gorodinski

Is there a way to "freeze" results with the 2261 Client API if paging is used to retrieve very large subsets?

03/14/2013 10:34 PM by configurator

Why session.Advanced.Stream(query) and not query.Advanced.Stream()?

Also, how is this actually implemented? Specifically, I'm interested in the finer details such as what happens if I stop reading halfway through, and whether or not the IEnumerator needs to be disposed when enumeration is finished. And whether this sort of query has a large impact on server performance because it takes a snapshot of the data.

03/15/2013 04:09 AM by Ayende Rahien

Configurator,

Because there is no query.Advanced.

This is implemented as a single stream request from the server. The enumerator has to be disposed, yes. If you stop reading, you need to dispose the connection, which will stop it on the server. It doesn't have a heavy impact, beyond the usual cost of having to access all of that data.
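Based on Ayende's answer, a defensive pattern is to put the enumerator in a `using` block, so abandoning the stream partway through also tears down the server-side request; `ShouldStop` here is a hypothetical predicate:

    // Sketch: dispose the streaming enumerator even when stopping early,
    // so the underlying connection (and the server-side work) is closed.
    using (var session = store.OpenSession())
    {
        var query = session.Query<User>("Users/ByActive");
        using (var enumerator = session.Advanced.Stream(query))
        {
            while (enumerator.MoveNext())
            {
                if (ShouldStop(enumerator.Current.Document))
                    break; // Dispose() below still closes the connection
            }
        } // enumerator disposed here
    }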

03/20/2013 07:27 AM by Richard

When will this feature be released? I can't find an actual build containing it.

03/20/2013 07:58 AM by Ayende Rahien

Richard, This is part of the 2.5 branch, and it is available here under the unstable builds: http://hibernatingrhinos.com/builds/ravendb-unstable-v2.5

Comments have been closed on this topic.