Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Reviewing HyperLevelDB–Concurrent Writes

time to read 10 min | 1965 words

As mentioned earlier, HyperDex has made some changes to LevelDB to make it work faster for their scenarios. I was curious to see what changed, so I took a look at the code. In my previous post, I dealt with compaction, but now I want to deal exclusively with the changes that were made to LevelDB to make writes more concurrent.

Update: it appears that I made a mistake in reading the code (I didn't check the final version). See this post for the correction.


Another change, one that I am really not sure I am following, is the notion of a concurrent log writer. This relies on the pwrite() method, which allows you to write a buffer to a file at a specified position. I have not been able to figure out what is going on if you have concurrent writes to that file. The HyperDex modifications include synchronization on the offset where they will actually make the write, but after that, they make concurrent calls. It makes sense, I guess, but I have several problems with that. I was unable to find any details about the behavior of the system when making concurrent calls to pwrite() at the end of the file, especially since your position might be well beyond the current end of file.

I couldn’t figure out what the behavior was supposed to be under those conditions, so I fired up a Linux machine and wrote the following code:

   #include <stdio.h>
   #include <fcntl.h>
   #include <unistd.h>
   #include <sys/stat.h>

   int main() {
     char* str1 = "1234\n";
     char* str2 = "5678\n";
     int  file_descriptor;
     int  ret;
     char fn[] = "test";

     if ((file_descriptor = creat(fn, S_IRUSR | S_IWUSR)) < 0)
     {
       perror("creat() error");
       return -1;
     }
     else {
       /* write at offset 5, past the current end of file (which is 0) */
       ret = pwrite(file_descriptor, str2, 5, 5);
       printf("Wrote %d\n", ret);

       /* write at offset 10, again past the current end of file */
       ret = pwrite(file_descriptor, str1, 5, 10);
       printf("Wrote %d\n", ret);

       if (close(file_descriptor) != 0)
         perror("close() error");
     }
     return 0;
   }

I’ll be the first to admit that this is ugly code, but it gets the job done, and it told me that you can issue those sorts of writes and that they do the expected thing. I am going to assume that it would still work when used concurrently on the same file.

That said, there is still a problem. Let us assume the following sequence of events:

  • Write A
  • Write B
  • Write A is allocated range [5 – 10] in the log file
  • Write B is allocated range [10 – 15] in the log file
  • Write B is done and returns to the user
  • Write A is NOT done, and we have a crash

Basically, we have here a violation of durability: when we read from the log, we will get to the A write, see that it is corrupted, and stop processing the rest of the log. Effectively, you have just lost a committed transaction. Now, the log format actually allows that to happen, and a sophisticated reader can recover from it, but I haven’t yet seen any signs that this was implemented.

To be fair, I think that the log reader should be able to handle zero'ed data and continue forward. There is some sort of a comment about that. But from my brief glance, that doesn't appear to be supported, and more importantly, it won't help if you crashed midway through writing A, so you have corrupted (not zero'ed) data on the file. This would also cause the B transaction to be lost.

The rest appears to be just thread safety, and then allowing concurrent writes to the log, which I have issues with, as I mentioned. I am assuming that this will produce a higher rate of writes, since there is a lot less waiting. That said, I am not sure how useful that would be. There is still just one physical needle that writes to the disk. I am guessing that it really depends on whether or not you need the call to be synced. If you do, there are going to be a lot more fsync() calls than before, when the writes were merged into a single call.

