
Rob’s Sprint: The cost of getting data from LevelDB


We are currently investigating the use of LevelDB as a storage engine in RavenDB. Some of the things that we feel very strongly about are transactions (LevelDB doesn’t have them) and performance (by a different definition than the one usually bandied about).

LevelDB does have atomicity, and the rest of ACID can be built atop that without too much complexity (already done, in fact). But we ran into an issue when looking at the performance of reading. I am not sure if that is unique to us or not, but in our scenario we typically deal with relatively large values; documents of several MB are quite common. That means that we are pretty sensitive to memory allocations. It doesn’t help that we have very little control over the Large Object Heap, so it was with great interest that we looked at how LevelDB does things.
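
To make the allocation concern concrete, here is a minimal C# illustration (my own example, not from the original post): any array of 85,000 bytes or more is allocated directly on the Large Object Heap, so every multi-MB document value we read ends up there.

using System;

class LohDemo
{
    static void Main()
    {
        // A stand-in 4 MB document body, roughly the size of the documents we deal with.
        // Any array of 85,000 bytes or more goes straight to the Large Object Heap.
        var value = new byte[4 * 1024 * 1024];

        // LOH objects are only collected as part of a gen 2 collection,
        // so this prints 2 on the desktop CLR.
        Console.WriteLine(GC.GetGeneration(value));
    }
}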

Reading the actual code makes a lot of sense (more on that later, I will probably do a big review of it). But there was one scenario that really didn’t make any sense to us: reading a value by key.

We started out using LevelDB Sharp:

Database.Get("users/1");

This in turn results in the following getting called:

[Image: the LevelDB Sharp Get implementation]
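
The original screenshot is gone, but based on the points below, the wrapper follows roughly this pattern (a reconstruction for illustration, not the verbatim LevelDB Sharp source; the P/Invoke signature and the Throw helper are stand-ins):

// Stand-in P/Invoke declaration, not the exact LevelDB Sharp binding.
[DllImport("leveldb")]
static extern IntPtr leveldb_get(IntPtr db, IntPtr options, string key,
    IntPtr keyLength, out IntPtr valueLength, out IntPtr error);

public string Get(string key, ReadOptions options = null)
{
    IntPtr valueLength, error;
    // The native side hands back a malloc'ed buffer that we now own.
    var valuePtr = leveldb_get(Handle, (options ?? ReadOptions.Default).Handle,
                               key, (IntPtr)key.Length, out valueLength, out error);
    Throw(error); // stand-in for the wrapper's error handling
    if (valuePtr == IntPtr.Zero)
        return null;

    // Copies the entire native buffer into a managed string: the value now exists
    // twice in memory, we can only get it as a string, and valuePtr is never passed
    // back to leveldb_free(), so the native copy is leaked.
    return Marshal.PtrToStringAnsi(valuePtr, (int)valueLength);
}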

A few things to note here, all from the point of view of someone who deals with very large values:

  • valuePtr is not released, even though the native buffer now belongs to us and we are responsible for freeing it.
  • We copy the value from valuePtr into a string, resulting in two copies of the data and twice the memory usage.
  • There is no way to get just partial data.
  • There is no way to get binary data (for example, encrypted data).
  • This is going to put a lot of pressure on the Large Object Heap.

But wait, it actually gets better. Let us look at the LevelDB C API method that gets called:

[Image: the leveldb_get implementation from the LevelDB C API]
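
From memory, the relevant function in LevelDB’s c.cc looks roughly like this (a sketch; check the actual source for the exact code):

char* leveldb_get(
    leveldb_t* db,
    const leveldb_readoptions_t* options,
    const char* key, size_t keylen,
    size_t* vallen,
    char** errptr) {
  char* result = NULL;
  std::string tmp;
  // db->rep->Get() copies the value out of the memtable / block into tmp...
  Status s = db->rep->Get(options->rep, Slice(key, keylen), &tmp);
  if (s.ok()) {
    *vallen = tmp.size();
    result = CopyString(tmp);  // ...and CopyString() is a malloc + memcpy into yet another buffer
  } else {
    *vallen = 0;
    if (!s.IsNotFound()) {
      SaveError(errptr, s);
    }
  }
  return result;
}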

So we are actually copying the data multiple times now. For fun, the db->rep->Get() call also copies the data. And that is pretty much where we stopped looking.

We are actually going to need to write a new C API and export it to be able to make use of LevelDB from our C# code. Fun, or not.
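
Purely as an illustration of the direction (my sketch of one possible shape, not the API that actually shipped), an iterator-based export lets the managed side see the value as a pointer + length into LevelDB’s own buffer, so it can copy exactly once into memory it controls, read only part of a large value, or deal with raw bytes:

#include <leveldb/db.h>

// Illustrative exports only; the names and shapes are hypothetical.
extern "C" leveldb::Iterator* storage_seek(leveldb::DB* db, const char* key, size_t keylen) {
  leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
  it->Seek(leveldb::Slice(key, keylen));  // positions on the first key >= the requested one;
  return it;                              // the caller still has to check it->key() for an exact match
}

extern "C" const char* storage_value(leveldb::Iterator* it, size_t* len) {
  if (!it->Valid()) {                     // nothing at or after the requested key
    *len = 0;
    return nullptr;
  }
  leveldb::Slice value = it->value();     // pointer + length into LevelDB's buffer
  *len = value.size();
  return value.data();                    // valid until the iterator moves or is deleted
}

extern "C" void storage_close(leveldb::Iterator* it) {
  delete it;
}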

More posts in "Rob’s Sprint" series:

  1. (08 Mar 2013) The cost of getting data from LevelDB
  2. (07 Mar 2013) Result Transformers
  3. (06 Mar 2013) Query optimizer jumped a grade
  4. (05 Mar 2013) Faster index creation
  5. (04 Mar 2013) Indexes and the death of temporary indexes
  6. (28 Feb 2013) Idly indexing

Comments

Rei Roldan

Reading the limitations list, do you really think it would be a good match for RavenDB?

Specifically:

Only a single process (possibly multi-threaded) can access a particular database at a time.

James Nugent

@Rei - AFAIK Esent (as currently used by RavenDB) is also accessible from one process at a time.

njy

@James: yep, it does seem so "... The database file cannot be shared between multiple processes simultaneously..."

source: http://blogs.msdn.com/b/windowssdk/archive/2008/10/23/esent-extensible-storage-engine-api-in-the-windows-sdk.aspx

Rob Ashton

LevelDB handles concurrency for most operations across threads - which is what is required.

Matt Johnson

Why write a new API? Why not contribute changes to the existing projects instead?

Rob Ashton

Matt - it's a rabbit hole - when you only need 10% of the functionality and you'd have to do 100% of the work to get that 10% into the other projects.

Especially the LevelDB C API - do you really think they're going to take pull requests for this?

JDice

Here's something from the LevelDB benchmarks:

LevelDB doesn't perform as well with large values of 100,000 bytes each. This is because LevelDB writes keys and values at least twice: first time to the transaction log, and second time (during a compaction) to a sorted file. With larger values, LevelDB's per-operation efficiency is swamped by the cost of extra copies of large values.

Ayende Rahien

JDice, Sure, but that is something that you are going to have in any ACID db with large values.

Ido Samuelson

Instead of going the platform invoke way, work with Managed C++ and IJW ("it just works" interop). It will not only be easier to integrate but also perform better.

Ayende Rahien

Ido, The problem is that managed C++ won't work with Mono.

Ido Samuelson

Gotcha.

Take a look at this solution, it looks interesting... http://tirania.org/blog/archive/2011/Dec-19.html
