Ayende @ Rahien


Production postmortem: The case of the Unicode Poo

time to read 4 min | 789 words

We got an error report from a customer about a migration issue from 2.5 to 3.0. A particular document appeared to have been corrupted, and caused issues.

We have an explicit endpoint that exposes the raw database bytes to the client, so we can troubleshoot exactly those kinds of errors. For fun, this is a compressed database, so the error was hiding beneath two levels of indirection, but that is beside the point.

When looking at the raw document’s bytes, we saw:


Which was… suspicious. It took a while to track down, but the end result was that this error would occur when you have:

  • A large string (over 1KB), and it is the first large string in the document.
  • At byte position 1023 of the string, there is a value that spans multiple bytes and multiple characters.

In those cases, we wouldn’t be able to read the document.

The underlying reason was an optimization we made in 3.0 to reduce buffer allocations during deserialization of documents. In order to handle that properly, we used the Encoding’s Decoder directly, without any intermediate buffers. This works great, except in this scenario, because of the way JSON.Net calls us.

When JSON.Net finds a large string, it will repeatedly read characters from the stream until it reaches the end of the string, and only then process it. If the string size is more than the buffer size, it will increase the buffer.

Let us imagine the following string: “ABC💩” (the characters A, B and C, followed by U+1F4A9, PILE OF POO).


When we serialize it, it looks like this:

var bytes = new byte[] { 65, 66, 67, 0xF0, 0x9F, 0x92, 0xA9 }; // "ABC" + U+1F4A9 as UTF-8

And let us say that we want to read that into a buffer of 4 characters. We’ll use it like so:

var decoder = Encoding.UTF8.GetDecoder(); // using System.Text
int bytesPos = 0;
int charsPos = 0;
var chars = new char[4];
while (bytesPos < bytes.Length) // read whole buffer
{
    while (charsPos < chars.Length) // process chars in chunks
    {
        int bytesUsed;
        int charsUsed;
        bool completed;
        // This call throws when there is a single char of space left, but the
        // next character is a surrogate pair that needs two chars to be written.
        decoder.Convert(bytes, bytesPos, bytes.Length - bytesPos,
            chars, charsPos, chars.Length - charsPos, false,
            out bytesUsed, out charsUsed, out completed);
        bytesPos += bytesUsed;
        charsPos += charsUsed;
    }
    charsPos = 0; // start filling the buffer from the beginning again
    Console.WriteLine(new string(chars));
}

On the first call, Convert will turn the first three bytes into three characters, and stop. The JSON.Net code will then ask it to fill to the end of the buffer (simulated by the inner loop), but at that point the Convert method will throw, because it has just one character of space left in the buffer, and it cannot write the next character into it.

Why is that? Look at the poo string above. How many characters does it take?

If you answered four, you are correct visually, and wrong in the buffer sense. This string actually takes 5 characters to represent. As I mentioned, in order to hit this error, we had to have a particular set of things align just right (or wrong). Even a single space difference would shift things so that no multi-byte character would span the 1KB boundary.
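If you want to see it for yourself, here is a quick sketch that counts the UTF-16 code units using the standard .NET APIs:

using System;

class CharCount
{
    static void Main()
    {
        // U+1F4A9 (PILE OF POO) is outside the BMP, so UTF-16 encodes it
        // as a surrogate pair - two chars, not one.
        var poo = char.ConvertFromUtf32(0x1F4A9);
        Console.WriteLine(poo.Length);           // 2
        Console.WriteLine(("ABC" + poo).Length); // 5, not 4
    }
}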

The solution, by the way, was to drop the optimization, sadly. We’ll probably revisit it at a later time, but now we’ll have a way to confirm that this scenario is also covered.

Unsafe operations: terminology and operational hazards, the end result

time to read 2 min | 244 words

Dave had an interesting comment about the previous post in this topic.

Actually I would have kept the original terms. Clarity is way more important than 'protecting' an click and point administrator. If an administrator is so incredible stupid to experiment with an production cluster, than it is his right!

To protect against accidental hits on the very big 'leave cluster' button, you can ask the admin to enter 3 digit number that is displayed to confirm the action. But leaving and joining an cluster are defacto industry terms which makes it easier for admins coming from other data storage solutions to get an handle on RavenDB.

I think that there is some confusion regarding the actual terms. Here is the current UI, after the changes I discussed in the previous post:


As you can see, we have “Add another server to cluster” and “Leave cluster”, which are standard and common operations; they are what you’ll use in pretty much all cases.

The advanced cluster operations are unsafe; they are there to enable the operator to recover from a disaster that took the majority of the cluster down. Those aren’t standard operations, so they are hidden by default under “advanced”, and even then we want to make sure that users think about them.

Unsafe operations: terminology and operational hazards

time to read 2 min | 327 words

One of the features we are working on has the notion of a consensus cluster, as well as the ability to force a new cluster if a majority of the nodes in the cluster are down. The details aren’t important, but the first iteration of the UI went something like this:


Initialize new cluster is an unsafe operation; it makes the current node into a single node cluster (which obviously has its own majority), and Take over a node will force a node that is part of an existing cluster to join the current cluster, bypassing the usual safety measures. The Leave cluster command is the usual behavior, for when you want to safely remove a node from the cluster.

We had a few problems with this UI (note that it was there simply to make it easy to test the behavior of the system, so don’t get too hung up on the first draft).

One problem we had is that this is shown front and center. It isn’t an operation that we want to make easy for the admin to run accidentally (maybe through just exploring the interface).

That is easy, just drop it into an “Advanced” section, right? But I also had an issue with the terminology. It is too… bland.

Instead, we are going to rename the buttons as follows:

  • Go AWOL from cluster – step down into a single node cluster.
  • Kidnap node into cluster – force a node to the current cluster.

The idea with this terminology is that it is obvious (hopefully) that those aren’t standard operations, and that you should consider them carefully.

I’m not sure about Go AWOL, because that might be a very US centric term. Other options we considered are:

  • Abrogate cluster
  • Repudiate cluster

Following the same logic.


Critique this code: The memory mapped file

time to read 4 min | 654 words

We got an error in the following code, in production. We are trying hard to make sure that we have good errors, which allows us to troubleshoot things easily.

In this case, the code… wasn’t very helpful about it. Why? Take a look at the code, I’ll explain why below…

public CodecIndexInput(FileInfo file, Func<Stream, Stream> applyCodecs)
{
    try
    {
        this.file = file;
        this.applyCodecs = applyCodecs;

        fileHandle = Win32NativeFileMethods.CreateFile(file.FullName,
            Win32NativeFileShare.Read | Win32NativeFileShare.Write | Win32NativeFileShare.Delete,
            /* remaining arguments elided in the original post */);

        if (fileHandle.IsInvalid)
        {
            const int ERROR_FILE_NOT_FOUND = 2;
            if (Marshal.GetLastWin32Error() == ERROR_FILE_NOT_FOUND)
                throw new FileNotFoundException(file.FullName);
            throw new Win32Exception();
        }

        mmf = Win32MemoryMapNativeMethods.CreateFileMapping(fileHandle.DangerousGetHandle(),
            IntPtr.Zero, Win32MemoryMapNativeMethods.FileMapProtection.PageReadonly,
            0, 0, null);
        if (mmf == IntPtr.Zero)
            throw new Win32Exception();

        basePtr = Win32MemoryMapNativeMethods.MapViewOfFileEx(mmf,
            /* access argument elided in the original post */
            0, 0, UIntPtr.Zero, null);
        if (basePtr == null)
            throw new Win32Exception();

        stream = applyCodecs(new MmapStream(basePtr, file.Length));
    }
    catch (Exception)
    {
        /* cleanup elided in the original post */
        throw;
    }
}

Did you see it?

This code has multiple locations in which it can throw Win32Exception. The problem with that is that a Win32Exception in this mode is pretty much just an error code, and we have multiple locations inside this method that can throw it.

When that happens, if we don’t have the PDB files deployed, we have no way of knowing, just from the stack trace (without line numbers), which of the method calls caused the error. That is going to lead to some confusion after the fact.

We solved this by adding descriptive text for each of the failure cases, including additional information that will let us know what is going on. In particular, we included not only the operation that failed but, even more important, the file that failed.
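For illustration, here is the direction we took, sketched on the CreateFile failure path from the code above (illustrative wording, not the exact production code):

if (fileHandle.IsInvalid)
{
    const int ERROR_FILE_NOT_FOUND = 2;
    var lastError = Marshal.GetLastWin32Error();
    if (lastError == ERROR_FILE_NOT_FOUND)
        throw new FileNotFoundException("Could not find file", file.FullName);
    // Win32Exception(int, string) keeps the error code and also records
    // which operation failed, on which file.
    throw new Win32Exception(lastError,
        "Could not open file " + file.FullName + " using CreateFile");
}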

RavenDB 3.0 New Stable Release

time to read 4 min | 782 words

We have just released build 3785 of RavenDB 3.0. This build has quite a few changes (for the full gory details, see the pull request).

This release includes about 3 months of bug fixes, performance improvements and the like. We have been testing this on our own systems for a few weeks now, as well as on multiple live production sites, and the results have been nothing but encouraging.

Major changes

  • Voron & map/reduce optimizations. We have done major work to optimize how RavenDB uses map/reduce on Voron. As a result, map/reduce performance on Voron has improved tremendously. However, this requires a migration step during the first startup. If you have a large RavenDB database using Voron, and you are making heavy use of map/reduce, take into account that on first start, RavenDB will need to perform an internal migration, which can take a while.
  • Lucene & memory allocation reduction on queries. We have drastically reduced the amount of memory that is allocated per query, and improved the performance of queries substantially.


  • Many small perf optimizations, memory allocation reductions, object pooling, etc. Drastic reduction in memory allocations on common code paths.
  • Better handling of buffer allocations in websockets, reduces memory fragmentation.
  • Better handling of Take() / Skip() inside an index.
  • Allow only a single index to use the fast precomputation optimization at a time (reduces memory usage when multiple medium sized indexes are changed concurrently).
  • Better handling of concurrent addition of multiple indexes to large databases, will now run in the same set of indexing batches, instead of each having their own.
  • Re-implemented memory statistics checks using native calls to avoid expensive allocations.
  • Provide more detailed information when an index is corrupted.
  • Adding endpoint for stopping / starting just reduce work.
  • Less aggressive changes to the batch size at scale; being more cautious gives us slightly slower perf but a more stable system under load.
  • Optimized Voron recovery code heavily to support slow I/O systems on large databases.
  • Allow to mark individual databases as development / staging /production.
  • Better handling of Lucene file usage, using mmap to avoid all allocations when querying the indexes. Significant improvements to both memory usage and querying speeds.
  • Subscriptions can now start from a given etag.
  • Subscriptions now offer more robust handling for attempting to open an existing subscription.
  • Subscriptions will now send a “no results found” so we won’t time out for mostly idle subscriptions.
  • Allow to manage scripted index scripts from code using AbstractScriptedIndexCreationTask.
  • Fixing replication issue with RavenFS with large number of files being modified all the time.
  • Track the query parse time as well, for certain queries the expensive part is parsing the query, rather than executing it.
  • Can lock transformers for modifications now as well as indexes.
  • Better heuristics for calculating how much memory (native & managed) we are actually using.
  • Add debug endpoint to track how much map/reduce work we still have to do.

Bug Fixes:

  • Don’t update a side by side index if it already exists.
  • Allow to update a side by side index while it is still running.
  • Fixing index compilation error on .NET 4.6 using “new string[0]”.
  • Fixed an NRE when the index definition was removed forcibly when using dynamic queries.
  • Fixed error handling during disposal causing an exception to escape thread boundary and crashing.
  • Fixed FIPS licensing issue on embedded dbs.
  • Fixed admin logs not capturing log statements guarded by IsDebugEnabled.
  • Fixed a finalizer usage bug causing us to try to read from a closed handle.
  • Prevent corrupted index warning when creating a map-reduce index and indexing is disabled.
  • Preventing code from trying to use disposed internal transactions.
  • Installer fix - check and revoke URL reservation options when “Use existing website” is selected.
  • Properly dispose of timer instance when shutting down a database using expiration bundle.
  • Prevent an error loading ICSharpCode.NRefactory from killing RavenDB client startup.
  • When disk space is very low, stop indexing and warn about it, rather than indexing until a full disk error (and probable index corruption).
  • Fixing stack trace generation in generate debug info when we have spaces in the temp path.
  • Moved default db locations outside of the IIS directory to avoid IIS bug causing restarts.
  • When out of memory, replication will back off and retry, rather than fail continuously.
  • Better handling of deleted then created indexes and transformer replicating to sibling nodes.
  • Fixing case sensitivity issue when returning document ids.
  • Fixing an issue with file names in RavenFS containing #.

Production postmortem: The case of the memory eater and high load

time to read 9 min | 1628 words

This is a recent case. One of our customers complained that every now and then they started to see very high memory utilization, escalating quickly until the system would bog down and die. They were able to deploy a mitigation strategy of a sort: when they detected this behavioral pattern, they would force RavenDB to reject client requests for a short while, which would fix the issue.

This went on for a while, because the behavior was utterly random. It didn’t seem to relate to load; peak usage time on the system didn’t correlate to it in any way. Eventually the customer raised another issue: a certain spatial query was showing up as very slow in the logs.

We tested that, and found that the customer was correct. More precisely, the query executed just fine when run independently. But when we ran this query tens or hundreds of times concurrently, we saw very high response times (getting progressively worse), and the server memory blowing up very quickly. So we have a memory leak, we figured; let us see what is going on… We dumped the data, and tried to figure out what exactly we were leaking.

But there wasn’t anything there!

In fact, looking at the problem, it became even curiouser.  Take a look at what we saw during one of the test runs:


Note that this is all running with no other work, just a lot of queries hitting the server.

Somehow, we had a lot of data going into the Gen2 heap. But when we checked Gen2, it was pretty much empty. In fact, we had 100% fragmentation. Something was very strange here. We enabled memory allocation tracking and started to look into what was going on. We found this very suspicious (note that this is from a different run than the one above):


So FileStream.Read is allocating GBs over GBs of memory? What is going on?! It took a while to figure out what was going on. The underlying issue was within Lucene. Actually, an intersection of a few things inside Lucene.

Here is how Lucene reads from a file on disk:
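What follows is paraphrased from Lucene.Net’s FSIndexInput read path; an approximation, not the exact code:

// Paraphrase of Lucene.Net's FSIndexInput read path - approximate:
public override void ReadInternal(byte[] b, int offset, int len)
{
    lock (file) // a single FileStream shared by all readers; everyone serializes here
    {
        file.Seek(GetFilePointer(), SeekOrigin.Begin);
        var total = 0;
        while (total < len)
        {
            var read = file.Read(b, offset + total, len - total);
            if (read <= 0)
                throw new IOException("Read past EOF");
            total += read;
        }
    }
}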


What you’ll notice is that Lucene is holding a lock on the file, and then issuing I/O. In other words, it is going to hold that lock for a while. This is a very strange thing to do; why is Lucene doing it?

It does this because of a strange decision on how to do concurrent I/O.


Basically, whenever Lucene needs to do concurrent I/O, it will clone the relevant input object, and then use it concurrently. The idea, I gather, is that Lucene didn’t want to have a separate file handle for each multi threaded operation; instead it created one file handle and used it concurrently. Since concurrent I/O takes careful usage, they slapped a lock on it and called it a day. I’m being unfair, I know, this is explicitly called out in the docs:


And in the Java version, there is an NIOFSDirectory that is presumably much better. No such thing exists in the Lucene.Net version we are using. In fact, I was curious and checked the upcoming version; they do have an NIOFSDirectory implementation, which has the following code in it:


This is a single global lock for everything. Thank you, I’ll take the lock per file. Now, to be fair again, it is work in progress, etc.

We noticed this issue a long while ago, and we solved it by using multiple FileStreams. It used more resources, but it meant that we were far more concurrent. Note that all of this actually happened years ago, and we have had no problems in this area. Note also that Linux programs typically worry a lot more about the number of open file handles than Windows programs do.

The problem was definitely related to the use of multiple FileStreams. But it didn’t have anything to do with holding multiple handles to the same file. Instead, the issue was in the usage pattern that the query exhibited.

In particular, and I’m going to get deep into Lucene here, the problem was inside the SegmentReader.Terms() method:


This seems innocuous, right? Here is how it is implemented:


And so on, all the way down until we get to the input.Clone() method. Now, in a standard Lucene system, using concurrent queries, this would result in a fair amount of locking. In RavenDB, it just meant that we were creating new FileStream objects.

The problem was that this particular query had a list of terms that it needed to check, and it called the Terms() method many times. How much is many times? 12,000 times!

Still not a problem, except that it called FileStream.Read on each and every one of those. And FileStream.Read does the following:
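Paraphrased from the .NET reference source (approximate, not the exact code), the relevant part is that each FileStream lazily allocates its own private buffer on the first read:

// Inside FileStream.Read (paraphrased, approximate):
if (_buffer == null)
    _buffer = new byte[_bufferSize]; // one buffer per FileStream instance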


And the _bufferSize is set to the default of 4KB.

In other words, processing a single instance of this particular query will result in the system allocating about 48MB of memory (12,000 calls × a 4KB buffer each)!

And when we have concurrent queries of this type? Each of them is allocating 48MB of memory, and because they allocate so much, we have GC runs, which cause the memory (which is still in use) to be promoted to Gen1 and eventually park in Gen2, where it languishes (because it is temporary memory, but we don’t collect Gen2 very often).

We changed the implementation to use overlapped I/O and tested that; memory consumption dropped significantly. But we still saw more allocations than we liked. We ended up tracking that down to this call (in BufferedIndexInput, in the Lucene codebase):


The buffer size in this case is 1KB. So the allocation for this query was actually 60MB (another 12,000 × 1KB on top of the 48MB), and we only managed to drop it by 48MB.

After fighting with this for a long while, we ended up scrapping the whole buffered index input idea. It is just not sustainable in terms of allocations. Instead, we created a memory mapped input class that maps the input data once and doesn’t use a buffer (so no allocations). With that option, our cost to process this query was drastically lower.

We profiled the change, to see whether there were additional issues, and we found that the newly optimized code was much better, but still had an issue. The memory map code used the UnmanagedMemoryStream class to expose the file to the rest of the application. Unfortunately, this class appears to be intended for concurrent usage, which is a rarity for Streams. Here is the ReadByte method from that class:

As you can see, this method is doing quite a lot. And it showed up as a hot spot in our profiling. The rest of the class is pretty complex as well, and does significantly more than what we actually need it to do.

We replaced this with a MmapStream class, and here is the comparable implementation.
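Or rather, here is a minimal sketch of what its ReadByte can look like, assuming the class wraps a raw byte pointer, a position and a length (not the exact code):

// Reads straight from the mapped memory: no locks, no buffers, no extra layers.
public override int ReadByte()
{
    if (pos == len)
        return -1; // end of the mapped region
    return ptr[pos++];
}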


You can safely assume that this is much faster :-)

We have tested this using 5,000 concurrent requests, without caching. Memory consumption is steady, and doesn’t increase. We saw marked improvement across the board, in memory utilization, CPU usage and I/O rates. Note that while this issue was caused by a particular query whose pattern of operation caused a tremendous number of allocations, this change has wider reaching implications.

We now allocate less memory for all queries, and previous experience has shown us that reducing a single 4KB allocation in query processing can improve overall performance by 30%. We haven’t run those tests yet, but I’ll be surprised if we see negative results.

Optimizing I/O throughput

time to read 7 min | 1277 words

We got a customer request about performance issues they were seeing on startup on a particular set of machines.

Those machines run in a cloud environment, and they have… peculiar, one might say deviant, I/O characteristics. In particular, the I/O pipeline on those machines is wide, but very slow. What do I mean by that? I mean that any particular I/O operation on them is likely to be slow, but you can get much better performance if you issue concurrent I/O. The system is supposed to handle that much better, and overall you’ll see the same relative performance as elsewhere.

This is a pretty big issue for us, because for many things we really do care about serial I/O performance. For example, if we are committing a transaction, we really have no other way to handle it except to wait until the I/O has fully completed.

That said, the particular scenario where we had the problem was startup. If the database was under heavy load at the time it shut down, the recovery logs would be full, and the database would need to replay the recent actions that happened. Note that shutdown performance is important, because in many cases we are running in an environment where shutdown comes with a ticking clock (in IIS or as a Windows Service).

At startup, we usually have more time, and it is expected that we’ll take a while to get up to speed. If nothing else, just bringing enough of the database to memory is going to take time, so on large databases, startup time is expected to be non trivial.

That said, the startup time on that set of machines was utterly atrocious. To figure out what was going on, I pulled out Process Monitor and looked at the File I/O. We got this:


We are reading from a journal, and that is pretty serial I/O (in the image, I’m running off a remote network drive, to simulate slow responses). Note that we need to read the log in a serial fashion, and the way the OS reads things, we read 32KB at a time.

Remember, we are reading things in a serial fashion, and that means that we have a lot of page faults, and we have a slow I/O system, and we execute them serially.

Yes, that is a killer for perf. By the way, when I’m talking about a slow I/O system, I’m talking about > 0.5ms per disk read for most requests (ideally, we would see latencies of 0.05 – 0.15ms). And we have quite a few of those reads, as you can imagine.

Since I knew that we were going to be reading the whole journal, I used the PrefetchVirtualMemory() method and passed it the entire file (it is a maximum of 64MB, and we are going to need to read it all anyway). This lets the OS have the maximum amount of freedom when reading the data, and it generates big, concurrent I/O. Here is how that looks:


This also gives the wide I/O bandwidth a chance to play. We load the I/O subsystem with a lot of work that it can try to do in an optimized fashion.
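For reference, a minimal sketch of issuing that prefetch via P/Invoke; the Win32 signature is the real one (Windows 8 / Server 2012 and later), while the pointer and size variables are placeholders:

using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class Prefetcher
{
    [StructLayout(LayoutKind.Sequential)]
    struct Win32MemoryRangeEntry
    {
        public IntPtr VirtualAddress;
        public UIntPtr NumberOfBytes;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool PrefetchVirtualMemory(IntPtr hProcess,
        UIntPtr numberOfEntries, [In] Win32MemoryRangeEntry[] entries, uint flags);

    public static void PrefetchWholeFile(IntPtr mapBase, long size)
    {
        var entries = new[]
        {
            new Win32MemoryRangeEntry
            {
                VirtualAddress = mapBase,            // placeholder: base of the mapped journal
                NumberOfBytes = (UIntPtr)(ulong)size // placeholder: the whole file, up to 64MB
            }
        };
        // One call; the OS is free to issue the reads as big, concurrent I/O.
        if (PrefetchVirtualMemory(Process.GetCurrentProcess().Handle,
                (UIntPtr)entries.Length, entries, 0) == false)
            throw new Win32Exception();
    }
}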

The next expensive part was that we needed to apply the data from the journal files to the data file, and sync it.

The performance of syncing a file is related to the size of the file, unfortunately. And the file in question was large, over 45GB. Especially on such a system, we saw a lot of latency here, as in multiple minutes. One obvious optimization was to not sync per journal file, but to sync once for the whole recovery process. That helped, but it was still too expensive.

Next, we tried pretty much everything we could think about.

  • Switching to WriteFile (from using mmap and then calling FlushViewOfFile)
  • Using async I/O (WriteFileEx)
  • Using scatter / gather I/O with no buffering (saves the need to do sync in the end)
  • Completion ports
  • Asking a 4 months old baby girl what she think about it (she threw up on the keyboard, which is what I wanted to do at the time, then she cried, and I joined her)

Nothing seemed to work. The major issue was that in this workload we have a large file (45GB, as I said) and we are writing 4KB pages into it in effectively random places. In the workload we were trying to support, there were roughly 256,000 individual 4KB writes (most of them weren’t consecutive, so we couldn’t benefit from merging them). That is about 1GB of writing to do.

And nothing we could do would get us beyond 3MB/sec or so. Saturating the I/O subsystem with hundreds of thousands of small writes wouldn’t work, and we were at a loss. Note that in a small test we made, just copying data around manually resulted in roughly 10MB/sec peak performance on those machines. That is a very lame number, so there isn’t much that we can do.

Then I thought to ask, why are we seeing this only during startup? Surely this happens also on a regular basis. Why didn’t we notice?

The reason for that is pretty simple: we didn’t notice because we amortize the cost. Only on startup did we have to actually sit and wait for it to complete. So we dropped that requirement. We used to read all the journals, apply them to the data file, sync the data file and then delete the journals. Now we read the journals, apply them (via a memory map) to the data file, and only remember in memory which journal file we applied last.

There is a background process running that takes care of syncing the data file (and deleting the old journals). If we crash again, we’ll just have to replay the logs that we aren’t sure were synced before. This saves even more time.

But we still had another issue. Writing to a memory mapped file requires the OS to page the relevant pages into memory. And again, we are on slow I/O, and the OS will only page the stuff that we touch, so this is again a serial process, which this time requires us to load about 1GB of data into memory at 3MB/sec. That is… not a good place to be. So the next step was to figure out all the addresses we’ll be writing to, and to let the OS know that we’ll be fetching them. We do some work to make sure that we load those values (and neighboring pages) into memory, and then we can write to them without paging for each page individually.

A nice side effect of this is that because it runs on the latest changes in the system, it has the effect of preloading into memory the pages that are likely to be in use after the database has started.

That is a lot of work, but to be perfectly frank, this is mostly optimizing in a bad environment. The customer can’t walk away from their current machines easily, but the I/O rates those machines have would make any database sit in a corner and cry.

Dedicated operations road bypasses

time to read 2 min | 398 words

We care very deeply about the operations side of RavenDB. Support calls are almost never about “where are you? I want to send you some wine & roses”, and they tend to come at unpleasant times. One of the things that we have learnt is that when stuff breaks, it tends to do so in ways that are… interesting.

Let me tell you a story…

A long time ago, a customer was using an index definition that relied on the presence of a custom assembly to enrich the API available for indexing. During the upgrade process from one major version of RavenDB to the next, they didn’t take into account that they needed to also update the custom assembly.

When they tried to start RavenDB, it failed because of the version mismatch. Since they weren’t actually using that index anyway, the customer removed the assembly and started RavenDB again. At this point, the following sequence of events happened:

  • The database started, saw that it was using an old version of the storage format, and converted it to the current version of the storage format.
  • The database started to load the indexes, but the index definition was invalid without the custom assembly, so it failed. (Index definitions are validated at save time, so the code didn’t double check that at load time.)

The customer was now stuck: the database format was already converted, so in order to roll back, they would need to restore from backup. They also could not remove the index from the database, because the database wouldn’t start to let them do so. Catch 22.

At this point, the admin went into the IndexDefinitions directory, deleted the BadIndex.index-definition file, and restarted RavenDB again. The database then recognized that the index definition was missing but the index existed, deleted the index from the server, and ran happily ever after.

Operations road bypass is our terminology for giving administrators a path to changing the internal state of our system using standard tools, without requiring the system to be up and functioning. The example with the index definition is a good one, because the sole reason we keep the index definition on disk is to allow administrators the ability to touch it without needing RavenDB in a healthy state.

What do you do in your system to make it possible for the admin to recover from impossible situations?

Presenting, Highly Available & Scalable Solutions at GOTO Copenhagen

time to read 1 min | 168 words

I’ll be presenting at the GOTO Copenhagen conference on Oct 7 – 8 this year. The full session summary is:

Presentation: Highly Available & Scalable Solutions with RavenDB

Track: Solutions Track 1 / Time: Monday 13:20 - 14:10 / Location: Rosenborg

RavenDB is a 2nd generation document database, with built-in load distribution, seamless replication, disaster recovery and data-driven sharding.

In this session, we are going to explore how RavenDB deals with scaling under load and remains highly available even under failure conditions.

We'll see how RavenDB's data-driven sharding allows us to increase the amount of data in our cluster without giving up the benefits of data locality.

We are going to execute complex distributed map-reduce queries on a sharded cluster, giving you lightning-fast responses over very large data volumes.

Hibernating Rhinos will also be present at a booth, and we’ll have a few members of the core team there to talk about RavenDB and the cool things that you can do with it.

Production postmortem: The case of the man in the middle

time to read 3 min | 553 words

One of the most frustrating things when you are dealing with production issues is when the problem is not in our product, but elsewhere. In particular, this post is dedicated to the hard work done by many anti virus products to make our lives harder.

Let us take a look at the following quote, taken from the ESET NOD32 Anti Virus knowledge base (emphasis mine):

By default, your ESET product automatically detects programs that are used as web browsers and email clients, and adds them to the list of programs that the internal proxy scans. This can cause loss of internet connectivity or other undesired results with applications that use network features but are not web browsers/email clients.

Yes, it can. In fact, it very often does.

Previously, we looked at a similar issue with Anti Virus slowing down I/O enough to cause us to slowly die. But in this case, the issue is a lot more subtle.

Because it is doing content filtering, it tends to put a much higher overhead on system resources, which means that as far as the user is concerned, RavenDB is slow. We actually developed features specifically to handle this scenario. The traffic watch mode will tell you how much time you spend on the server side, and we have added a feature that makes RavenDB account for the internal work each query does, so we can tell where the actual cost is.

You can enable that by issuing:

GET databases/Northwind/debug/enable-query-timing
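For instance, a hypothetical way to issue that request from C# (the host and port here are assumptions; point it at your own server):

using System;
using System.Net;

class EnableQueryTiming
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Placeholder URL - substitute the address of your RavenDB server.
            client.DownloadString(
                "http://localhost:8080/databases/Northwind/debug/enable-query-timing");
        }
    }
}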

And once that is set up, you can get a good idea about what is costly in the query, as far as RavenDB is concerned. Here is an example of a very slow query:


You can see that the issue is that we are issuing a very wide range query, so most of the time is spent inside Lucene. Other examples might be ridiculously complex queries, which result in high parsing time (we have seen queries in the hundreds of KB range). Or loading a lot of big documents, or… you get the drift. If we see that the server thinks a query is fast, but the overall time is slow, we know to blame the network.

But an even more insidious issue is that this would drop requests, consistently and randomly (and yes, I know that those are contradictions; it was consistently dropping requests in a random pattern that seemed explicitly designed to thwart figuring out what is going on). That led to things breaking, and escalated support calls. “RavenDB is broken” leads to a lot of headaches, and a burning desire to hit something when you figure out that not only isn’t it your fault, but the underlying cause is actively trying to prevent you from figuring it out (I assume that is to deal with viruses that try to shut it off), which leads to really complex fact finding sessions.

That is all the more annoying because it seems the issue was a bug in respecting keep alive sessions for authenticated requests under some scenarios, in the AV product in question! Absolutely not fun!

