Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Frank Quednau

A Comparator having a side-effect while being called a gazillion times? Reminds me of the guys who did Database queries in Command.CanExecute implementations...

Ayende Rahien

Frank, How much time does it take to call it once?

Frank Quednau

Never mind the number of times it's called, but still wondering why a comparison would have to do a write - it does sound like something that shouldn't have a side-effect.

Dalibor Carapic

You have some sort of a list implementation with a method to find a value greater than or equal to a given value, and this FindGreaterOrEqual method performs some sort of an insert which then gets written to storage/disk. Since I have no idea how RavenDB works, I can only say that you should remove this insert and write from the method in order to speed it up :)

Ayende Rahien

Frank, Where do you see comparison making a write?


All the time is spent in internal methods


68 million calls in 5s? I'd say it's pretty fast.


FindGreaterOrEqual is called by all 'subroutines'; can this be optimized so that it is called only once?

Frank Quednau

@Ayende D'oh, sorry, was reading the stack traces backwards (><)


Your Raven.Storage.Impl.StorageWriter+ is the bottleneck within the scope of your code.

The calls to this method chain: FindGreaterOrEqual() -> Insert() -> Apply() -> MoveNext() are common to all three levels of the trace stack that you printed there. The middle method chain, with 11,690 calls, is the "expensive" one.

RunInternal is where the time is being eaten - I assume that is the background worker thread executing the delegate/lambda constructed by Insert/Apply/MoveNext, while the implementation sets up to receive the next write in parallel?

If the times are CPU time, I would say you have thread sync / context switching overhead in there somewhere - it depends on how many threads you threw at your test.

If the times shown are wall-clock time, then you might have a locking / blocking issue... - not enough threads in the thread pool?

Ayende Rahien

Neil, I think you are reading the stack trace backward.


Maybe it's context switching?

André Minelli

I think the problem is in ExecutionContext.RunInternal, which is restoring the context from the original thread; this is probably unnecessary in this scenario (hence using ConfigureAwait(false) could help)?

Ayende Rahien

Gleb, You can assume no multi threading actually taking place.


D'oh - same mistake as Frank. (I'm used to assuming a call tree starts from the caller.)

Ok - assuming no multi-threading gives a good clue...

Why is Compare called 68,000,000 times? Is the implementation of the list comparator checking every value in the list (like a table scan)? Maybe that needs the equivalent of an index, or a more efficient algorithm (binary chop or something faster)?


Scratch that last comment - I just saw that we are trying to find GreaterThanOrEqual to...

A sorted list would probably help more than an index or algorithm.


I give up :) I was typing before I checked. A skip list is already sorted :P


Last try...

The comparer being used is a CaseInsensitiveComparer. Assuming your keys have to be strings, would an OrdinalIgnoreCase comparison be faster (not sure of your key constraints)?

That would approximately halve the cost
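To make the ordinal-versus-culture point concrete, here is a sketch of a byte-wise case-insensitive comparer (Python, ASCII-only folding, purely illustrative - .NET's OrdinalIgnoreCase roughly case-folds per code unit without consulting culture tables): the inner loop is trivial arithmetic, which is why ordinal comparisons are usually far cheaper than culture-aware ones.

```python
def ordinal_ignore_case_cmp(a: bytes, b: bytes) -> int:
    """Byte-wise, case-insensitive compare (ASCII folding only).

    No culture tables, no allocations -- just a tight loop, which is
    the essential difference from a culture-aware comparer.
    """
    for ca, cb in zip(a, b):
        # Fold ASCII 'A'-'Z' (65-90) down to 'a'-'z'; leave other bytes raw.
        if 65 <= ca <= 90:
            ca += 32
        if 65 <= cb <= 90:
            cb += 32
        if ca != cb:
            return -1 if ca < cb else 1
    # Equal prefixes: the shorter key sorts first.
    return (len(a) > len(b)) - (len(a) < len(b))

print(ordinal_ignore_case_cmp(b"Users/Ayende", b"users/ayende"))  # 0 (equal)
```

A culture-aware comparer has to consult per-culture casing and weighting tables for every character, so even when it returns the same answer, each call costs considerably more.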

Jim T

I'd assume there's some sort of O(n^2) algorithm going on here. FindGreaterOrEqual is called 11,691 times, while the other calls are made ~68 million times ~ (11692^2)/2 ~ the handshake equation. Every value is being compared against every other value.
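Jim T's back-of-envelope checks out; a throwaway Python calculation makes it explicit: if inserting the k-th item means comparing it against every item already present, n inserts cost n(n-1)/2 comparisons in total.

```python
# Handshake number: the k-th insert scans all k-1 earlier items,
# so n inserts cost 1 + 2 + ... + (n-1) = n*(n-1)/2 comparisons.
n = 11691                    # FindGreaterOrEqual call count from the profile
pairwise = n * (n - 1) // 2
print(f"{pairwise:,}")       # 68,333,895 -- right around the 68M Compare calls
```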

Matt Warren

I think that even though it's being called 68 million times, 5 secs spent in CaseInsensitiveComparator(..) is high. Plus it's always best to try and optimise the most expensive thing first.

Assuming it's just comparing two arrays of chars, there might be a faster way?


So basically, you're not going to get anywhere fixing any other performance issues until you use a sorting algorithm that's more efficient than bubble sort. That 68 million number needs to come way, way down. How about a nice Shell sort?


Totally agree with the rest; the number of comparisons needs to come down.

Patrick Huizinga

Following up on what Jim T found:

A proper skip list would only need log(n) comparisons to insert an item. So I would conclude your skip list is actually a linked list. (And you're importing something that's already in order.)
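Patrick's diagnosis is easy to demonstrate with a toy skip list (a Python sketch assuming a textbook p = 0.5 skip list - RavenDB's C# implementation may differ in detail): inserting already-sorted keys takes O(log n) comparisons per insert when the levels are working, versus the n(n-1)/2 total that a plain linked-list walk racks up.

```python
import random

class Node:
    __slots__ = ("key", "next")
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height

class SkipList:
    """Minimal skip list that counts key comparisons made during inserts."""
    MAX_LEVEL = 16

    def __init__(self, rng):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 1            # number of populated levels so far
        self.comparisons = 0
        self.rng = rng

    def _random_height(self):
        h = 1
        while h < self.MAX_LEVEL and self.rng.random() < 0.5:
            h += 1
        return h

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):   # walk top level down
            while node.next[i] is not None:
                self.comparisons += 1             # one key comparison
                if node.next[i].key < key:
                    node = node.next[i]
                else:
                    break
            update[i] = node
        h = self._random_height()
        self.level = max(self.level, h)
        new = Node(key, h)
        for i in range(h):                        # splice into each level
            new.next[i] = update[i].next[i]
            update[i].next[i] = new

n = 10_000
sl = SkipList(random.Random(42))
for k in range(n):                # sorted import, like loading pre-ordered keys
    sl.insert(k)
linked_list_cost = n * (n - 1) // 2   # what a front-to-back scan would cost
print(sl.comparisons, linked_list_cost)
```

For 10,000 sorted inserts the healthy skip list needs orders of magnitude fewer comparisons than the ~50 million a degenerate front-to-back scan would cost - the same gap separating the 11K FindGreaterOrEqual calls from the 68M Compare calls in the profile.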

Michael Yeaney

Aside from the other suggestions... better sort algorithms, rewriting to reduce calls to a method, etc.... I had another thought:

One thing that sticks out to me: the "SkipList+Next" method is called almost the same number of times as Compare (which makes sense if it's a naïve O(n^2) comparison), yet the time taken accounts for ~50% of the comparison time. What this could mean is that the comparison itself may not necessarily be the slow part (aside from perhaps a less-than-optimal implementation, as others have pointed out).

Initial thoughts: "+Next" is doing standard linked-list style "pointer chasing", which is possibly killing any chance the processor has of utilizing the L1/L2/L3 cache lines effectively (each node is at a random point in memory - in other words, not contiguous and more than likely not pre-loaded into the CPU cache). So, 68+ million calls that are mostly cache misses are not going to run quickly at all.

I haven't gone back through the posts to see the details of the managed LevelDB SkipList implementation, but if each "level" is not stored as a contiguous array, that may be one spot to fix. That way, the CPU could load a much larger range into cache at a time and - depending on the "level" - perhaps get very high cache hit rates.

Or... none of that is right, and I've bumped my head somewhere!

Alois Kraus

5s for 68 million calls, 4s for 11K calls, 2s for 68 million calls. The time is spent in ExecutionContext.RunInternal, which looks like the write was underway but not yet completed - hence, I guess, wait time for the writer thread(s) to complete the write operation.

You do sort on stuff which is about to be written, which could lead to wrong or unexpected results, but you certainly wait much longer for your search results while stuff is being written.

To solve the issue, you should exclude from the sort any writes which are not yet completed, to get your speed back.

Luigi Berrettini

Skip lists are sorted and (hence) have logarithmic complexity, so I think there could be a problem in the implementation of FindGreaterOrEqual.

Alois Kraus

Oops, I misread the call stack as well. Seems to be the issue you posted a few days ago, where the degenerate binary search is causing the 68M compares.

Asher Kohn

There are three outcomes from calling FindGreaterOrEqual:

  1. Comparer is called 68 million times in 5s
  2. Next is called 68 million times in 2s
  3. Nothing else is called 11 million times in 4s

What could #3 be doing that performs worse than the first two?

If what makes #3 slow is inserting at the front or back, can you keep track of the first and last entries and run that super-fast comparer at the beginning of FindGreaterOrEqual?
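Asher's first/last shortcut can be sketched like this (Python, a hypothetical structure - not RavenDB's code): cache the smallest and largest keys so that appends and prepends never pay for the full search.

```python
import bisect

class TrackedList:
    """Sorted inserts that short-circuit on the cached first/last entry."""
    def __init__(self):
        self.items = []
        self.full_searches = 0    # how often we fell back to a real search

    def insert(self, key):
        # Fast paths: one comparison against each cached end.
        if not self.items or key >= self.items[-1]:
            self.items.append(key)              # goes at the back
        elif key <= self.items[0]:
            self.items.insert(0, key)           # goes at the front
        else:
            # Slow path: a genuine FindGreaterOrEqual-style search.
            self.full_searches += 1
            self.items.insert(bisect.bisect_left(self.items, key), key)

t = TrackedList()
for k in range(1000):     # sorted import: every insert takes the append fast path
    t.insert(k)
print(t.full_searches)    # 0 -- the full search never ran
```

On a sorted bulk import like the one being profiled, every insert lands at the back, so the expensive search path is skipped entirely.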
