Ayende @ Rahien

Reviewing HyperLevelDB – Concurrent Writes

As mentioned earlier, HyperDex has made some changes to LevelDB to make it work faster for their scenarios. I was curious to see what changed, so I took a look at the code. In my previous post I dealt with compaction; now I want to deal exclusively with the changes that were made to LevelDB to make writes more concurrent.

Update: it appears that I made a mistake in reading the code (I didn't check the final version). See this post for the correction.

 

Another change, one that I am really not sure I follow, is the notion of a concurrent log writer. This relies on the pwrite() method, which allows you to write a buffer to a file at a specified position. I have not been able to figure out what is going on if you have concurrent writes to that file. The HyperDex modifications include synchronization on the offset where they will actually make the write, but after that, they make concurrent calls. It makes sense, I guess, but I have several problems with that. I was unable to find any details about the behavior of the system when making concurrent calls to pwrite() at the end of the file, especially since your position might be well beyond the current end of file.

I couldn’t figure out what the behavior was supposed to be under those conditions, so I fired up a Linux machine and wrote the following code:

   #include <stdio.h>
   #include <fcntl.h>
   #include <sys/stat.h>
   #include <unistd.h>
   
   int main() {
     const char* str1 = "1234\n";
     const char* str2 = "5678\n";
     int  file_descriptor;
     int  ret;
     char fn[] = "test";
   
     if ((file_descriptor = creat(fn, S_IRUSR | S_IWUSR)) < 0)
     {
       perror("creat() error");
       return -1;
     }
     else {
       /* write at offset 5, past the current end of the empty file */
       ret = pwrite(file_descriptor, str2, 5, 5);
       printf("Wrote %d\n", ret);
   
       /* and again, further out at offset 10 */
       ret = pwrite(file_descriptor, str1, 5, 10);
       printf("Wrote %d\n", ret);
   
       if (close(file_descriptor) != 0)
         perror("close() error");
     }
   
     return 0;
   }

I’ll be the first to admit that this is ugly code, but it gets the job done, and it told me that you can issue those sorts of writes and get the expected behavior. I am going to assume that it would still work when used concurrently on the same file.

That said, there is still a problem. Let us assume the following sequence of events:

  • Write A
  • Write B
  • Write A is allocated range [5 – 10] in the log file
  • Write B is allocated range [10 – 15] in the log file
  • Write B is done and returns to the user
  • Write A is NOT done, and we have a crash

Basically, we have here a violation of durability, because when we read from the log, we will get to the A write, see that it is corrupted, and stop processing the rest of the log. Effectively, you have just lost a committed transaction. Now, the log format actually allows that to happen, and a sophisticated reader can recover from it, but I haven’t yet seen any signs that this was implemented.

To be fair, I think that the log reader should be able to handle zeroed data and continue forward. There is some sort of a comment about that. But from the brief glance that I took, that doesn't appear to be supported, and more importantly, it won't help if you crashed midway through writing A, so you have corrupted (not zeroed) data on the file. This would also cause the B transaction to be lost.

The rest appears to be just thread safety, and then allowing concurrent writes to the log, which I have issues with, as I mentioned. That said, I am assuming that this will generate a higher rate of writes, since there is a lot less waiting. I am not sure how useful that would be, though. There is still just one physical needle that writes to the disk. I am guessing that it really depends on whether or not you need the call to be synced. If you do, there are going to be a lot more fsync() calls than before, when they were merged into a single call.

Posted By: Ayende Rahien
