Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 337 words

In the past two posts in the series, I talked about ways to store phone book records in a file. During the candidate review process, I noticed that many candidates failed to make their lives significantly easier by placing limits on themselves.

For example:

  • Using variable length records.
  • Using a single file.
  • Choosing a single simple algorithm to do the whole task.

If we force fixed length records, either directly or via record splitting (if each record slot is 64 bytes, a record that is bigger than that would occupy some multiple of the slot size), the task becomes much easier. I’ve mostly ignored that in my code so far because I’m using binary offsets, but it can really make the code a lot simpler.
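For illustration, here is a minimal sketch of record splitting; the 64-byte slot size and the continuation-flag layout are arbitrary choices for this sketch, not the format used in my code:

```csharp
using System;
using System.Collections.Generic;

static class RecordSplitter
{
    const int SlotSize = 64;
    const int PayloadPerSlot = SlotSize - 1;

    // Splits a record into fixed-size slots; byte 0 of each slot says whether the
    // record continues in the next slot. A real version would also need to record
    // the payload length of the last slot.
    public static List<byte[]> SplitIntoSlots(byte[] record)
    {
        var slots = new List<byte[]>();
        for (var offset = 0; offset < record.Length; offset += PayloadPerSlot)
        {
            var slot = new byte[SlotSize];
            var len = Math.Min(PayloadPerSlot, record.Length - offset);
            slot[0] = (byte)(offset + len < record.Length ? 1 : 0); // 1 = continued in next slot
            Array.Copy(record, offset, slot, 1, len);
            slots.Add(slot);
        }
        return slots;
    }
}
```

With every record occupying whole slots, positions become simple slot indexes, which is most of the simplification.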

Using a single file leads to complications, because you have to do internal space management (where do the records live? where does the metadata go?). It also makes it much harder to reclaim used space in many cases.

The last one is probably the most interesting limitation, and not something that I would expect a junior developer to figure out. Relying on a single algorithm typically limits you to whatever that particular algorithm provides, but you can extend on that significantly by combining approaches.

Let’s see another approach to building a persistent phone book. I’m going to effectively build an LSM here. You can see the code here.

I called it a pretty horrible LSM (Log Structured Merge), but all the relevant pieces are there. It is just horribly inefficient. The key problem, by the way, is the number of times it will open a file handle. That can be really slow on Windows and end up being a real issue at any significant size.

There are also probably a lot of other bugs there, but there is enough to figure out how this is actually built.
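To make the shape of the idea concrete, here is a deliberately crude sketch of the same approach. This is not the linked code; the class name, segment file naming, and memtable threshold are all assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Crude LSM-style phone book: writes go to an in-memory memtable, which is flushed
// to a new, already-sorted segment file once it grows; reads check the memtable
// first, then the segments from newest to oldest.
public class CrudeLsmPhoneBook
{
    private readonly string _dir;
    private readonly SortedDictionary<string, string> _memtable = new SortedDictionary<string, string>();
    private const int MemtableLimit = 1024; // arbitrary threshold for the sketch

    public CrudeLsmPhoneBook(string dir)
    {
        _dir = dir;
        Directory.CreateDirectory(dir);
    }

    public void Set(string name, string phone)
    {
        _memtable[name] = phone;
        if (_memtable.Count >= MemtableLimit)
            Flush();
    }

    public void Flush()
    {
        if (_memtable.Count == 0)
            return;
        // Each segment is written once and is sorted by name; ticks-based naming is
        // good enough to keep the segments in creation order for a sketch.
        var segment = Path.Combine(_dir, $"{DateTime.UtcNow.Ticks:D20}.segment");
        File.WriteAllLines(segment, _memtable.Select(kvp => $"{kvp.Key}\t{kvp.Value}"));
        _memtable.Clear();
    }

    public string TryGet(string name)
    {
        if (_memtable.TryGetValue(name, out var phone))
            return phone;
        // Newest segment wins. Opening every segment on every lookup is exactly the
        // "too many file handles" problem mentioned above.
        foreach (var segment in Directory.GetFiles(_dir, "*.segment").OrderByDescending(f => f))
        {
            foreach (var line in File.ReadLines(segment))
            {
                var parts = line.Split('\t');
                if (parts[0] == name)
                    return parts[1];
            }
        }
        return null;
    }
}
```

Ordered iteration would need to merge the memtable and all the segments, which is where most of the remaining work (and most of the file handle churn) lives.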

And with this post, I can say that I explicitly scratched this itch.

A fun task to take this further, by the way, is to try to implement a persistent trie for the phone book.

time to read 3 min | 422 words

In the previous post, I discussed how I can (fairly naively) solve the phone book problem. In this post, I want to take this a bit further. Whenever a developer hears the terms sorted or ordered, I expect them to think about trees. Indeed, trees are the very first thing that I would expect to pop to mind.

Assuming that they aren’t versed in persistent data structures, they are likely to look at in-memory trees and map the idea directly to a file. I decided to take the same approach. For algorithms, I find Python to be the best language for me to grok, mostly because it looks so much like pseudo code. Searching for AVLTree Python got me to this implementation. One issue with AVL trees is their code size; even in Python, the code is about 200 lines. Here is the structure of a node, which we’ll need to store on the disk.

[Image: the structure of an AVL tree node]
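Roughly, such a node needs to carry the key, the value, and enough information to navigate the tree on disk. A sketch of what that might look like (the field names are assumptions based on the description, not the actual code):

```csharp
// A guess at the on-disk node layout: instead of object references, the children
// are file offsets, so navigating the tree means seeking within the file.
public class Node
{
    public string Name;        // the key
    public string PhoneNumber; // the value
    public long Left = -1;     // file offset of the left child, -1 if none
    public long Right = -1;    // file offset of the right child, -1 if none
    public int Height;         // needed for AVL rebalancing
}
```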

You can see the full code here. It is about twice as long as the previous implementation (around 300 lines of code). I’m not going to cover it in depth, mostly because this is a standard AVL tree; the only funny thing here is that I’m writing the nodes to the file instead of holding them in memory.

For example, I have to have some way to find the root node. I do that by writing its position to the end of the file after each write. That means that there are some limits to what I’m doing, but nothing too bad.
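As a minimal sketch, assuming the convention that the last 8 bytes of the file hold the root’s offset, the two helpers inside the tree class might look like this:

```csharp
using System.IO;

// Sketch only: after every write, append the root node's offset, so the last
// 8 bytes of the file always tell us where to start reading.
static void AppendRootPointer(BinaryWriter writer, long rootOffset)
{
    writer.Seek(0, SeekOrigin.End);
    writer.Write(rootOffset);
}

static long ReadRootPointer(BinaryReader reader)
{
    reader.BaseStream.Seek(-sizeof(long), SeekOrigin.End);
    return reader.ReadInt64();
}
```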

I don’t have a way to recover disk space, and updates to the data will use new space rather than overwrite the old record. That is because we have to take into account that the size of the data may change.

This implementation is also going to be quite wasteful in terms of disk seeks, given that it is an AVL tree with a branching factor of 2. One of the reasons that binary search trees aren’t really used as persistent data structures is that the cost of seeking to another location in the file is enormous. A B+Tree solves the problem by having a much higher branching factor.
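To put rough numbers on that (assuming 10 million entries and a B+Tree fan-out of 256, both purely illustrative figures):

```csharp
using System;

var n = 10_000_000;
Console.WriteLine(Math.Ceiling(Math.Log2(n)));     // binary tree: ~24 levels, so ~24 seeks per lookup
Console.WriteLine(Math.Ceiling(Math.Log(n, 256))); // B+Tree with fan-out 256: 3 levels
```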

A proper B+Tree, however, is probably going to take about 1,500 lines of code to implement. So if you want to read that code, go ahead and peek into Voron.

time to read 5 min | 891 words

A couple of weeks ago I asked you to rate an interview question that we had sent to candidates. The task is to build a persistent phone book, with the understanding that we care about the amount of I/O traffic used more than anything else.

The scenario we presented to the candidates was:

The rules are that you can’t maintain any state in the class itself and that the code should be prepared to handle a lot of records. Of particular interest is the IterateOrderedByName() call, which allows you to do an ordered iteration (by name) starting from a given name. That pretty much forces us to store the data in a sorted format.
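The exact interface was given to the candidates separately; for illustration only, it probably looked something along these lines (everything except IterateOrderedByName() is an assumption):

```csharp
using System.Collections.Generic;

// Hypothetical shape of the challenge, reconstructed from the description above.
public interface IPhoneBook
{
    void InsertOrUpdate(string name, string phoneNumber);
    IEnumerable<(string Name, string PhoneNumber)> IterateOrderedByName(string fromName);
}
```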

Note that we don’t explicitly state that in the task; we expect the candidates to understand that it follows from the requirements for the operations. The most naïve option to complete this challenge is to write a CSV file, with each entry on its own line, and you are done. However, that wouldn’t allow us to meet the other requirements.

In other words, reading the whole file into memory, adding the item, sorting the whole thing and writing it back again is a no-go. As a note, this is a task that we give to junior developers. We expect zero to very little experience.

After going over dozens of such answers, I can tell you that this task does its primary job, which is to filter people out. That is an explicit goal, by the way. We have had over 200 applicants for this position, and the time it takes to go through that many CVs, interviews, etc. is horrendous. I found that this question filters out enough people to make the process far more manageable. And given the answers I got to my previous post, this is absolutely the kind of task that I would consider a junior developer suitable for. I remember solving similar problems in high school.

I like this problem because it is a good way to see if a person is able to do several things at once:

  • Can take a non-trivial problem and break it into manageable pieces.
  • Can figure out several pitfalls along the way and handle them.
  • Can recognize what the impact of the various requirements is on the solution.
  • Can apply knowledge from one field to another. In this case, in-memory data structures vs. persistent files.
  • Can understand that data and its representation are different things.

I have to admit that I was quite surprised by the results. Pretty much no one chose to use a binary format; they all went with a textual format. This makes the problem harder. Also, there were a number of bytes vs. chars errors (that isn’t an issue for a junior position, though).

My first attempt is going to be a bit brute force. Instead of trying to implement any fancy data structures, I’m going to simply write the data out to the file in an append only manner. At the end of the file, however, I’ll keep a sorted array of the positions of the items in the file. Here is what this looks like:

[Image: records written sequentially, followed by the array of record positions sorted by name]

To do a search in this format, I can use the sorted positions at the end of the file. Whenever I add a new record, I write it at the end of the existing records (overwriting the old sorted positions section) and then write the new sorted positions at the new end of the file. The overhead per write is the size of the sorted array, basically. The bigger the file, the more we’ll spend writing that array. For example, assuming each record is around 64 bytes, when we get to 10 million records, we’ll have a data size of 610MB, but the metadata will be around 38MB (and we’ll need to write all of it each time we modify a record).

The advantage, however, is that the code is simple, and there are several ways that we can optimize it without too much trouble. Here is what this looks like in code; there are some tests attached that you can run, and the entire thing comes in at roughly 150 lines of code.
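This is not the linked code, but a rough sketch of the same layout, so the shape of the insert path is easier to follow; the footer format and names are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Records are appended where the old footer used to start, and then the footer
// (the record positions sorted by name, plus a trailing pointer to where the
// footer begins) is rewritten at the new end of the file.
public class AppendOnlyPhoneBook
{
    private readonly string _path;

    public AppendOnlyPhoneBook(string path)
    {
        _path = path;
    }

    public void Insert(string name, string phoneNumber)
    {
        using var file = new FileStream(_path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
        using var reader = new BinaryReader(file);
        using var writer = new BinaryWriter(file);

        var entries = ReadFooter(file, reader, out var footerStart);

        // The new record overwrites the start of the old footer...
        file.Position = footerStart;
        entries.Add((name, footerStart));
        writer.Write(name);
        writer.Write(phoneNumber);

        // ...and the footer is rewritten, sorted by name, at the new end of file.
        entries.Sort((a, b) => string.CompareOrdinal(a.Name, b.Name));
        var newFooterStart = file.Position;
        writer.Write(entries.Count);
        foreach (var (_, position) in entries)
            writer.Write(position);
        writer.Write(newFooterStart); // the last 8 bytes always point at the footer
    }

    private static List<(string Name, long Position)> ReadFooter(
        FileStream file, BinaryReader reader, out long footerStart)
    {
        var entries = new List<(string Name, long Position)>();
        if (file.Length == 0)
        {
            footerStart = 0;
            return entries;
        }
        file.Position = file.Length - sizeof(long);
        footerStart = reader.ReadInt64();
        file.Position = footerStart;
        var count = reader.ReadInt32();
        var positions = new long[count];
        for (var i = 0; i < count; i++)
            positions[i] = reader.ReadInt64();
        foreach (var position in positions)
        {
            // Re-read each record's name so the footer can be kept sorted by name.
            file.Position = position;
            entries.Add((reader.ReadString(), position));
        }
        return entries;
    }
}
```

A lookup would read the trailing pointer, then binary search the footer by seeking to each probed position and reading the name there.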

That isn’t a great solution, I’ll admit, but it is a straightforward one. It seems like it would be the natural consequence of moving from appending each record to the file to having sorted access. There are some nice things there, nevertheless. You can "batch" several inserts together and close them by calling Flush() once, instead of on each record, for example.

Given that the biggest issue here is the size of the sorted positions, we can seek to reduce it in several ways:

  • We can compress the data on write; you can see the changes required for that here.
  • We can create multiple sorted positions and merge them occasionally.

For that matter, we can store the sorted positions in another file entirely, which will simplify things like supporting efficient in-order inserts.
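For the first option above, here is one way a compression step might look (an illustration only; the linked change may do it differently):

```csharp
using System.IO;
using System.IO.Compression;

// Run the sorted positions through GZip before writing the footer out.
public static class FooterCompression
{
    public static byte[] CompressPositions(long[] sortedPositions)
    {
        using var buffer = new MemoryStream();
        using (var gzip = new GZipStream(buffer, CompressionLevel.Fastest))
        using (var writer = new BinaryWriter(gzip))
        {
            writer.Write(sortedPositions.Length);
            foreach (var position in sortedPositions)
                writer.Write(position);
        }
        return buffer.ToArray(); // ToArray() still works after the MemoryStream is closed
    }
}
```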

Things that a junior developer might not do here? I’m using BinaryReader and BinaryWriter. That simplifies things for me, since I can ignore the cost of textual parsing. That said, I’m not using them in any interesting way, and the concepts translate 1:1 to a textual interface, with a bit more work.

Can you do better than this implementation?
