Ayende @ Rahien

Refunds available at head office

Optimize this, dude!

So, we have a perf problem in the managed leveldb implementation. I pulled out dotTrace and looked at the numbers:

[Image: dotTrace profiler screenshot showing the call tree with call counts and timings]

Do you see where the problem is?

Posted By: Ayende Rahien


Comments

07/04/2013 09:22 AM by Frank Quednau

A Comparator having a side-effect while being called a gazillion times? Reminds me of the guys who did Database queries in Command.CanExecute implementations...

07/04/2013 09:25 AM by Ayende Rahien

Frank, How much time does it take to call it once?

07/04/2013 09:27 AM by Frank Quednau

Never mind the number of times it's called, but still wondering why a comparison would have to do a write - it does sound like something that shouldn't have a side-effect.

07/04/2013 09:28 AM by Dalibor Carapic

You have some sort of a list implementation which has a method to find a value greater than or equal to a given value, and this FindGreaterOrEqual method performs some sort of an insert which then gets written to storage/disk. Since I have no idea how RavenDB works I can only say that you should remove this insert and write from this method in order to speed it up :)

07/04/2013 09:29 AM by Ayende Rahien

Frank, Where do you see comparison making a write?

07/04/2013 09:31 AM by Sander

All time is spent in internal methods.

07/04/2013 09:45 AM by Sérgio

68 million calls in 5s? I'd say it's pretty fast.

07/04/2013 09:49 AM by Pieter

FindGreaterOrEqual is called by all 'subroutines', this can probably be optimized so that it is called only once?

07/04/2013 09:56 AM by Frank Quednau

@Ayende D'oh, sorry, was reading the stack traces backwards (><)

07/04/2013 11:19 AM by Neil

Your Raven.Storage.Impl.StorageWriter+ is the bottleneck within the scope of your code.

The calls to this method chain, FindGreaterOrEqual() -> Insert() -> Apply() -> MoveNext(), are common to all three levels of the trace stack that you printed out there. The middle method chain, with 11,690 calls, is the "expensive" one.

The (RunInternal) is where the time is being eaten - I assume that is the background worker thread that is executing the delegate/lambda constructed by Insert/Apply/MoveNext, while the implementation is setting up for receiving the next write in parallel?

If the times are cpu-time, I would say you have a thread synch / context switching overhead in there somewhere - depends on how many threads you threw at your test.

If the times shown are wall-clock time, then you might have a locking / blocking issue... - not enough threads in the threadpool?

07/04/2013 11:21 AM by Ayende Rahien

Neil, I think you are reading the stack trace backward.

07/04/2013 11:30 AM by Gleb

Maybe it's context switching?

07/04/2013 11:31 AM by André Minelli

I think the problem is in ExecutionContext.RunInternal, which is restoring the context from the original thread; this is probably unnecessary in this scenario (hence using ConfigureAwait(false) could help)?

07/04/2013 11:31 AM by Ayende Rahien

Gleb, You can assume no multi-threading is actually taking place.

07/04/2013 11:52 AM by Neil

Doh - same mistake as Frank. (Used to assuming a call tree starts from the caller.)

Ok - assuming no multi-threading gives a good clue...

Why is Compare called 68,000,000 times? Is the implementation of the list comparator checking every value in the list (like a table scan)? Maybe that needs the equivalent of an index or a more efficient algorithm (binary chop or something faster)?

07/04/2013 11:54 AM by Neil

Scratch that last comment - I just saw that we are trying to find GreaterThanOrEqual to...

A sorted list would probably help more than an index / algorithm.

07/04/2013 11:55 AM by Neil

I give up :) Was typing before I checked. A skip list is already sorted :P

07/04/2013 12:00 PM by Neil

Last try...

The comparer being used is a CaseInsensitiveComparer. Assuming your keys have to be strings... would an OrdinalIgnoreCase comparison be faster (not sure on your key constraints...)?

That would approximately halve the cost.
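Neil's suggestion, roughly sketched in Python (CaseInsensitiveComparer and OrdinalIgnoreCase are .NET types; the function below is an invented stand-in meant only to illustrate "compare code points after simple case folding, with no culture/collation table lookups"):

```python
# Ordinal case-insensitive compare: per-code-point folding, no culture tables.
# A culture-aware comparer would additionally consult locale collation rules,
# which is the extra per-call cost Neil is suggesting can be avoided when the
# keys only need a stable ordering rather than a linguistic one.
def ordinal_ignore_case(a: str, b: str) -> int:
    a, b = a.casefold(), b.casefold()
    return (a > b) - (a < b)

print(ordinal_ignore_case("Users/10", "users/2"))  # -1: '1' < '2' ordinally
print(ordinal_ignore_case("KEY", "key"))           # 0: equal after folding
```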

07/04/2013 12:18 PM by Jim T

I'd assume there's some sort of O(n^2) algorithm going on here. FindGreaterOrEqual is called 11,691 times, while the other calls are made ~68 million times, which is roughly (11,692^2)/2, the handshake equation. Every value is being compared against every other value.
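Jim T's arithmetic can be checked directly (a quick sanity check; the call count is the one quoted in the comment):

```python
# n inserts, each compared against every earlier item on average:
n = 11_691                    # FindGreaterOrEqual call count from the trace
handshake = n * (n - 1) // 2  # n-choose-2, the "handshake equation"
print(handshake)              # 68333895, i.e. ~68 million compares
```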

07/04/2013 12:20 PM by Matt Warren

I think that even though it's being called 68 million times, 5 secs spent in CaseInsensitiveComparator(..) is high. Plus it's always best to try and optimise the most expensive thing first.

Assuming it's just comparing 2 arrays of chars, there might be a faster way?

07/04/2013 12:33 PM by JimT

So basically, you're not going to get anywhere fixing any other performance issues until you use a sorting algorithm that's more efficient than bubble sort. That 68 million number needs to come way, way down. How about a nice Shell sort?

07/04/2013 12:46 PM by Martin

Totally agree with the rest; the number of comparisons needs to be brought down.

07/04/2013 01:57 PM by Patrick Huizinga

Following up on what Jim T found:

A proper skip list would only need log(n) comparisons to insert an item. So I would conclude your skip list is actually a linked list. (And you're importing something that's already in order.)
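Patrick's point can be illustrated with a minimal skip list sketch (illustrative only, not the actual Raven.Storage code; the 0.25 promotion probability and the height cap are assumptions). With a healthy tower structure, even fully sorted input costs only ~log(n) comparisons per insert; a degenerate single-level list would cost ~n^2/2 in total:

```python
import random

class Node:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height  # one forward pointer per level

class SkipList:
    MAX_HEIGHT = 16  # assumed cap; LevelDB's own is different

    def __init__(self):
        self.head = Node(None, self.MAX_HEIGHT)
        self.compares = 0  # instrumentation: total key comparisons

    def _random_height(self):
        height = 1
        while height < self.MAX_HEIGHT and random.random() < 0.25:
            height += 1
        return height

    def find_greater_or_equal(self, key, prev=None):
        node = self.head
        for level in range(self.MAX_HEIGHT - 1, -1, -1):
            while node.next[level] is not None:
                self.compares += 1
                if node.next[level].key < key:
                    node = node.next[level]  # keep moving right
                else:
                    break  # overshoot: drop down a level
            if prev is not None:
                prev[level] = node  # remember splice point at this level
        return node.next[0]

    def insert(self, key):
        prev = [self.head] * self.MAX_HEIGHT
        self.find_greater_or_equal(key, prev)
        node = Node(key, self._random_height())
        for level in range(len(node.next)):
            node.next[level] = prev[level].next[level]
            prev[level].next[level] = node

sl = SkipList()
for i in range(10_000):
    sl.insert(i)  # already-sorted input, worst case for a degenerate list

# A single-level (linked-list) structure would need ~n^2/2 ≈ 50M compares;
# a real skip list stays well under a million here.
print(sl.compares)
```

If the observed count is instead on the order of n^2/2, as the 68M figure suggests, the towers are not doing their job, which is exactly Patrick's "your skip list is actually a linked list" diagnosis.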

07/04/2013 05:13 PM by Michael Yeaney

Aside from the other suggestions... better sort algorithms, rewriting to reduce calls to a method, etc.... I had another thought:

One thing that sticks out to me is that the "SkipList+Next" method is called almost the same number of times as Compare (which, if that's a naive O(n^2) comparison, makes sense), yet the time taken accounts for ~50% of the comparison time. What this could possibly mean is that the comparison itself may not necessarily be the slow part (aside from perhaps a less-than-optimal implementation, as others have pointed out).

Initial thoughts: "+Next" is doing standard linked-list style "pointer chasing", which is possibly killing any chance the processor has of utilizing the L1/L2/L3 cache lines effectively (each node is at a random location in memory... in other words, not contiguous and more than likely not pre-loaded into the CPU cache). So, 68+ million calls that are mostly cache misses are not going to run quickly at all.

I haven't gone back through the posts to see the details of the managed LevelDB SkipList implementation, but if each "level" is not stored as a contiguous array, that may be one spot to fix. That way, the CPU could load a much larger range into cache at a time, and - depending on the "level" - perhaps get very high cache hit rates.

Or... none of that is right, and I've bumped my head somewhere!

07/04/2013 05:49 PM by Alois Kraus

5s for 68 million calls, 4s for 11K calls, 2s for 68 million calls. The time is spent in ExecutionContext.RunInternal, which looks like the write was underway but not yet completed - hence, I guess, the wait time for the writer thread(s) to complete the write operation.

You do sort on stuff which is about to be written, which could lead to wrong or unexpected results, but you certainly do wait much longer for your search results while stuff is being written.

To solve the issue you should exclude from the sort any writes which are not yet completed, to get your speed back.

07/04/2013 06:01 PM by Luigi Berrettini

Skip lists are sorted and (hence) have logarithmic complexity, so I think there could be a problem in the implementation of FindGreaterOrEqual.

07/04/2013 06:48 PM by Alois Kraus

Oops, I misread the call stack as well. Seems to be the issue you posted about a few days ago, where the binary search was degenerate, which is causing the 68M compares.

07/05/2013 04:32 AM by Asher Kohn

There are three outcomes from calling FindGreaterOrEqual:

  1. Comparer is called 68 million times in 5s
  2. Next is called 68 million times in 2s
  3. Nothing else is called 11 million times in 4s

What could #3 be doing that performs worse than the first two?

If #3 is inserting at the front or back and that is slow, can you keep track of the first and last entries and run that super-fast comparer at the beginning of FindGreaterOrEqual?
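Asher's fast-path idea, sketched in Python (the class and field names are invented for illustration; a real implementation would splice into the skip list itself rather than use a Python list):

```python
import bisect

# Hypothetical sketch: cache the largest key seen so far, so inserts that
# arrive in sorted order cost one comparison and an append, never a search.
class FastAppendStore:
    def __init__(self):
        self.keys = []        # stand-in for the skip list's bottom level
        self.last_key = None  # cached maximum

    def insert(self, key):
        if self.last_key is None or key >= self.last_key:
            self.keys.append(key)  # fast path: one compare, no traversal
            self.last_key = key
        else:
            bisect.insort(self.keys, key)  # slow path: full ordered insert

store = FastAppendStore()
for k in [1, 2, 3, 5, 4]:
    store.insert(k)
print(store.keys)  # [1, 2, 3, 4, 5]
```

For a bulk import of already-sorted data (the scenario Patrick suspected above), almost every insert would take the fast path.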

Comments have been closed on this topic.