Ayende @ Rahien


Raven Storage, early perf numbers

time to read 1 min | 157 words

So, our managed implementation of leveldb is just about ready to go out and socialize. Note that these numbers come from relatively short runs, so they are only a rough indicator of where we are; we are still running longer-term perf tests now. Also note that these are early numbers: we did performance work, but it is not done yet.

The following tests were done on an HDD, and all include writing a million records (16 bytes key, 100 bytes value) to storage.

  • Writing 1 million sequential keys - 52,152 op/s
  • Writing 1 million random keys -  11,986 op/s
  • Writing 1 million sequential keys with fsync - 17,225 op/s

And now, for the reads portion:

  • Sequential reads - 104,620 op/s
  • Reverse sequential reads - 57,932 op/s
  • Random reads - 3,191 op/s

Note that I am pretty sure that the reason for the latter (random read) performance is that we are running on an HDD instead of an SSD.
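
For context, here is a rough sketch of the kind of measurement loop these numbers describe. This is not the Raven Storage benchmark code; it just appends 16-byte keys and 100-byte values to a plain file and reports op/s, with an optional fsync after every write. The file names and the smaller record count are made up for illustration.

    import os, random, time

    N = 10_000  # the post uses 1,000,000; smaller here so the per-write fsync run finishes quickly

    def write_benchmark(path, keys, fsync_each_write=False):
        # Append 16-byte keys with 100-byte values to a plain file and report op/s.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0))
        value = b"v" * 100
        start = time.perf_counter()
        for key in keys:
            os.write(fd, key + value)
            if fsync_each_write:
                os.fsync(fd)        # force each write to stable storage
        os.fsync(fd)
        elapsed = time.perf_counter() - start
        os.close(fd)
        return len(keys) / elapsed

    # Note: with a plain append-only file, "random" keys still produce sequential
    # I/O, so this stand-in will not reproduce the sequential/random gap of a real
    # storage engine; it only shows the shape of the measurement.
    sequential = [i.to_bytes(16, "big") for i in range(N)]
    shuffled = list(sequential)
    random.shuffle(shuffled)

    print(f"sequential:            {write_benchmark('seq.dat', sequential):,.0f} op/s")
    print(f"random:                {write_benchmark('rnd.dat', shuffled):,.0f} op/s")
    print(f"sequential with fsync: {write_benchmark('sync.dat', sequential, True):,.0f} op/s")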



Bob

How does that compare with the old Microsoft DB engine?

Ayende Rahien

Bob, we haven't tested that yet; right now we are just looking at the raw numbers.

Howard Chu

How much RAM did the test machine have, what percentage of data was in cache?

Khalid Abuhakmeh

I'm with Bob; I have no idea how to read these numbers. We need the benchmark of the old system to make sense of them as end users. Currently they just look like big numbers.

Kelly Sommers

I think a hint that something is wrong somewhere is the weird fact that your random writes are quite a bit faster than your random reads.

This might be due to disk caching. The missing test of random writes with fsync would probably clarify this.


Your RSS feed just dumped a fresh copy of all of your recent posts out again. The provided links seem to be invalid:


(From the Feed)

Resolves to:


Ayende Rahien

Howard, That was on a machine with ~4GB; I don't recall the exact memory numbers. Re-running the test on an 8GB laptop (about 3 GB free) with an SSD, using about 1.3GB, gives:

  • Writing 1 million sequential keys - 40,073 op/s
  • Writing 1 million random keys - 16,243 op/s
  • Writing 1 million sequential keys with fsync - 119,600 op/s

Greg Young

@ayende you are also leaving off another important item here: latency.

It's quite easy to get very high numbers, even with fsync, if you are, say, fsyncing 512kb at a time.

Also, on Windows fsyncing is meaningless with many SSDs, as they still cache internally. Power pulls can help find that.
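
A minimal sketch of the batching effect Greg describes, assuming a plain append-only file rather than the actual storage engine: fsyncing once per ~512 KB batch keeps the reported op/s very high, because individual writes never wait for the disk. The file name, record layout, and batch size here are illustrative.

    import os, time

    def batched_fsync_writes(path, n_writes, record, batch_bytes=512 * 1024):
        # Group commit: write records, but only fsync once per ~512 KB batch.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0))
        pending = 0
        start = time.perf_counter()
        for _ in range(n_writes):
            os.write(fd, record)
            pending += len(record)
            if pending >= batch_bytes:
                os.fsync(fd)        # one fsync covers thousands of writes
                pending = 0
        os.fsync(fd)                # flush the tail
        elapsed = time.perf_counter() - start
        os.close(fd)
        return n_writes / elapsed

    record = b"k" * 16 + b"v" * 100     # 116-byte record, as in the post
    print(f"batched fsync: {batched_fsync_writes('batch.dat', 100_000, record):,.0f} op/s")

Passing batch_bytes equal to the record size turns the same loop into a per-write fsync test, which is the figure that should collapse on a spinning disk.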


Howard Chu

I don't see how a write with fsync can be 3x faster than an async write. These numbers are not credible.


Tobi

How can you fsync faster than a disk can seek or rotate? I'm seeing the same numbers on my desktop machine, but I'm not sure how that is possible.

Greg Young

@howard @tobi the answer is simple. Try turning off disk caching.

Chris Wright

Cross-platform support was a goal of this, no? Looking forward to seeing RavenDB on Mono on Linux.


Tobi

@GregYoung "fsync"/FILE_FLAG_WRITE_THROUGH is not supposed to use any caches. Also, the disk is not configured to turn off "buffer flushing" (it is called something like that in the properties).
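
For reference, this Windows-only sketch shows how the flag tobi mentions is passed when opening a file; it is an illustration with a made-up file name, not code from the storage engine.

    import ctypes
    from ctypes import wintypes

    # FILE_FLAG_WRITE_THROUGH asks the OS to push each write past its own cache.
    # As Greg notes next, the drive/controller may still cache internally.
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR,   # file name
        wintypes.DWORD,     # desired access
        wintypes.DWORD,     # share mode
        wintypes.LPVOID,    # security attributes
        wintypes.DWORD,     # creation disposition
        wintypes.DWORD,     # flags and attributes
        wintypes.HANDLE,    # template file
    ]
    kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

    GENERIC_WRITE = 0x40000000
    CREATE_ALWAYS = 2
    FILE_FLAG_WRITE_THROUGH = 0x80000000

    handle = kernel32.CreateFileW("writethrough.dat", GENERIC_WRITE, 0, None,
                                  CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH, None)
    # ... write through the handle ...
    kernel32.CloseHandle(handle)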

Greg Young

@tobi disk-level caching (e.g. at the controller) is still enabled. And FILE_FLAG_WRITE_THROUGH very often does not write through.

I can basically tell you there is caching in between because of this result:

Writing 1 million random keys - 11,986 op/s

Let's imagine the random keys were really, really lucky and all picked the same key to write (so no seeks; the probability is astronomical), and that we were writing 1 byte (OK, 512 bytes at the lower level). Then the write would occur once per revolution of the disk, so 12k IOPS would mean that we have a 720,000 RPM drive. Just hope it doesn't escape your laptop; it would kill many people.

Once you put seeks in, however, this completely falls apart. On my low-grade spindle here a quick test put seeks at ±60ms. Given the same numbers it would mean the drive can do 12k seeks/second; that must be an awesome drive!

As such, yes: caching must be enabled at some level. I'd be happy to run the code here to show it, but I can't find any link to it.
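
Greg's back-of-envelope arithmetic, spelled out; the seek time below is the figure he quotes for his own drive, not a measurement of the test machine.

    # Best case: every write lands on the same track, so each write costs at most
    # one disk revolution. The implied spindle speed:
    writes_per_second = 11_986      # the "1 million random keys" result from the post
    implied_rpm = writes_per_second * 60        # = 719,160 RPM

    # More realistically every random write needs a seek. With the ~60ms per seek
    # Greg quotes, a spindle tops out at:
    seek_time_s = 0.060
    seek_bound_ops = 1 / seek_time_s            # ~17 op/s

    print(f"implied spindle speed: {implied_rpm:,} RPM")
    print(f"seek-bound ceiling:    {seek_bound_ops:,.0f} op/s")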

Greg Young

@tobi to be clear, "random keys" may still be just sequential writes (reading back through it).

Even 3k IOPS (4k reads) on a spindle won't happen. These numbers have caching in them.

Greg Young



Just to let you know, it would be nice if some explanation of these numbers were published here. Maybe the original leveldb has a sync write mode that could be tested and compared with Ayende's results?
