Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Raven Storage, early perf numbers

time to read 1 min | 157 words

So, our managed implementation of leveldb is just about ready to go out and socialize. Note that these numbers are from relatively short runs, but they give us a good indicator of where we are. We are still running longer-term perf tests now. Also note that these are early numbers; we have done performance work, but it is not done yet.

The following tests were done on an HDD, and all include writing a million records (16-byte keys, 100-byte values) to storage.

  • Writing 1 million sequential keys - 52,152 op/s
  • Writing 1 million random keys -  11,986 op/s
  • Writing 1 million sequential keys with fsync - 17,225 op/s
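For context on how such numbers are typically produced, here is a minimal, hypothetical harness in Python (not the actual Raven Storage code, which is managed/.NET): it times a loop of small key/value appends and reports op/s. The key-ordering switch is what separates the sequential and random write tests.

```python
import random
import time

def bench_writes(path, n=1_000_000, key_size=16, value_size=100, sequential=True):
    """Append n key/value records to a file and report operations per second."""
    keys = range(n) if sequential else random.sample(range(n), n)
    value = b"v" * value_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for k in keys:
            # 16-byte big-endian key followed by a 100-byte value
            f.write(k.to_bytes(key_size, "big") + value)
    elapsed = time.perf_counter() - start
    return n / elapsed
```

A real storage engine does much more per write (memtable, write-ahead log, compaction), so this only illustrates the shape of the test, not the cost.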

And now, for the reads portion:

  • Sequential reads - 104,620 op/s
  • Reverse sequential reads - 57,932 op/s
  • Random reads - 3,191 op/s

Note that I am pretty sure that the reason for the latter performance is that it is using an HDD instead of an SSD.



Bob

How does that compare with the old Microsoft DB engine?

Ayende Rahien

Bob, We haven't tested that yet; right now we are just looking at the raw numbers.

Howard Chu

How much RAM did the test machine have, what percentage of data was in cache?

Khalid Abuhakmeh

I'm with Bob, I have no idea how to read these numbers. We need the benchmark of the old system to make sense of it as end users. Currently they just look like big numbers.

Kelly Sommers

I think a hint that something is wrong somewhere is the weird occurrence that your random writes are quite a bit faster than your random reads.

This might be due to disk caching. The missing test of random writes with fsync would probably clarify this.
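Kelly's caching suspicion is easy to probe in isolation: time the same append loop with and without an fsync per write. This is a hypothetical Python sketch, not the benchmark under discussion; on most machines the fsync'd variant is dramatically slower, and if it isn't, something between the program and the platter is caching.

```python
import os
import time

def timed_appends(path, n=500, record=b"x" * 116, fsync_each=False):
    """Time n small appends; with fsync_each=True every write is flushed
    through the OS page cache before the next one starts."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(record)
            if fsync_each:
                f.flush()
                os.fsync(f.fileno())  # may still stop at the drive's own cache
    return time.perf_counter() - start
```

Comparing `timed_appends("probe.tmp")` against `timed_appends("probe.tmp", fsync_each=True)` is essentially the missing random-writes-with-fsync test in miniature.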


Your RSS feed just dumped a fresh copy of all of your recent posts out again. The provided links seem to be invalid:


(From the Feed)

Resolves to:


Ayende Rahien

Howard, That was on a machine with ~4GB; I don't recall the memory numbers. Re-running the test on an 8GB laptop (about 3 GB free) with an SSD gives (and used about 1.3GB):

  • Writing 1 million sequential keys - 40,073 op/s
  • Writing 1 million random keys - 16,243 op/s
  • Writing 1 million sequential keys with fsync - 119,600 op/s

Greg Young

@ayende you are also leaving off another important item here: latency.

It's quite easy to get very high numbers even with fsync if you are, say, fsyncing 512kb at a time.

Also, on Windows, fsyncing is meaningless with many SSDs, as they still cache internally. Power pulls can help find that.
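Greg's point about batch size can be made concrete with a group-commit sketch (hypothetical Python, not any real engine's code): buffer many logical writes behind a single fsync, and throughput looks great even though each individual write still waits for its whole batch to be flushed.

```python
import os
import time

def group_commit_writes(path, n=10_000, record=b"x" * 116, batch=512):
    """Buffer `batch` records, then fsync once per group.
    Returns (ops_per_sec, fsync_count): few fsyncs means high throughput,
    but a record's latency is the time until its whole group is flushed."""
    start = time.perf_counter()
    fsyncs = 0
    with open(path, "wb") as f:
        for i in range(1, n + 1):
            f.write(record)
            if i % batch == 0:
                f.flush()
                os.fsync(f.fileno())
                fsyncs += 1
        f.flush()               # flush the trailing partial batch, if any
        os.fsync(f.fileno())
        fsyncs += 1
    elapsed = time.perf_counter() - start
    return n / elapsed, fsyncs
```

With `batch=512` the harness pays for roughly 20 fsyncs per 10,000 writes, which is why "op/s with fsync" alone says little without the accompanying batch size and per-write latency.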


Howard Chu

I don't see how a write with fsync can be 3x faster than an async write. These numbers are not credible.


tobi

How can you fsync faster than a disk can seek or rotate? I'm seeing the same numbers on my desktop machine, but I'm not sure how that is possible.

Greg Young

@howard @tobi the answer is simple. Try turning off disk caching.

Chris Wright

Cross-platform support was a goal of this, no? Looking forward to seeing RavenDB on Mono on Linux.


tobi

@GregYoung "fsync"/FILE_FLAG_WRITE_THROUGH is not supposed to use any caches. Also, the disk is not configured to disable "buffer flushing" (it is called something like that in the properties).

Greg Young

@tobi disk-level caching (e.g. at the controller) is still enabled. And FILE_FLAG_WRITE_THROUGH very often does not write through.

I can basically tell you there is caching in between because of:

Writing 1 million random keys - 11,986 op/s

Let's imagine the random keys were really, really lucky and all picked the same key to write (so no seeks; the probability is astronomical) and we were writing 1 byte (ok, 512 at the lower level). Then the write would occur once per revolution of the disk, so 12k IOPS would mean we have a 720,000 RPM drive. Just hope it doesn't escape your laptop; it would kill many people.

Once you put seeks into this, however, it completely falls apart. On my low-grade spindle here, a quick test put seeks at +-60ms. Given the same numbers, it would mean the drive can do 12k seeks/second; that must be an awesome drive!

As such, yes, caching must be enabled at some level. I'd be happy to run the code here to show it, but I can't find any link to it.
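The back-of-envelope arithmetic above can be written out explicitly (using the reported 11,986 op/s figure and the ~60 ms seek time Greg quotes; both are his assumptions, not measurements of the engine):

```python
# One write per disk revolution at the reported random-write rate
writes_per_sec = 11_986
implied_rpm = writes_per_sec * 60   # revolutions per second -> per minute
print(implied_rpm)                  # 719,160 RPM, vs ~7,200 RPM for a laptop HDD

# If instead every random write pays a full seek:
seek_time_s = 0.060                 # the ~60 ms figure quoted above
seek_bound_iops = 1 / seek_time_s   # upper bound on seek-limited IOPS
print(round(seek_bound_iops, 1))    # ~16.7 IOPS, nowhere near ~12,000
```

Either way you slice it, the reported rate is orders of magnitude beyond what the spindle itself can deliver, which is the basis for the caching conclusion.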

Greg Young

@tobi to be clear, "random keys" may still end up as just sequential writes (reading back through it).

Even 3k IOPS (4k reads) on a spindle won't happen. These numbers have caching in them.

Greg Young



Just to let you know, it would be nice if some explanation of these numbers was published here. Maybe the original leveldb has a sync write mode that could be tested and compared with Ayende's results?

Comments have been closed on this topic.

