Big Data Search
I got tired of the old questions that we were asking candidates, so I decided to add a new one. This one is usually something that we’ll give the candidates to do at home, at their leisure. Let us imagine the following file:
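The sample file itself isn't reproduced here, but an illustrative stand-in would contain the email and zip-code fields the queries below rely on (the exact columns are an assumption):

```
name,email,zip
John Doe,john@example.com,12345
Jane Smith,jane@example.com,90210
```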
As you can see, this is a pretty trivial CSV file. However, let us assume that it is a small example of a CSV file that is 15 TB in size. The requirement is to be able to query that file: we need to be able to query by email, or to find all the people within a particular zip code. Because of the size, the solution can be composed of two parts, a prepare part (which can run for as long as it needs) and a query-answering part. The maximum time to answer any query must be under 30 seconds.
- You can assume that the file never changes, and that once the prepare part is done, it will never need to be run again.
- The answer to a query is the full CSV row.
- You can assume a single machine with a 100 TB disk, 16 GB RAM and 8 CPU cores.
- The solution cannot use any existing databases.
- The solution needs to include an explanation of the various options that were available and why this specific solution was chosen.
- After the prepare phase is done, the solution has to take up less than 30 TB of disk space (including the original file).
- The solution should be easy to apply to different CSV files.
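To make the shape of the problem concrete: one classic approach (a sketch of the general technique, not necessarily the solution the author built) is a one-time pass that records the byte offset of every row keyed by the queried column, sorted by key; a query then binary-searches the index and seeks directly into the original file. Here is a minimal in-memory Python sketch of that idea; for a real 15 TB file the index would have to be built with an external merge sort and kept on disk, since it cannot fit in 16 GB of RAM:

```python
import bisect

def build_offset_index(path, column):
    """One-time 'prepare' pass: scan the CSV once, recording the byte
    offset of every data row, keyed by the requested column.
    Returns a sorted list of (key, offset) pairs.
    (At 15 TB this list would be produced by an external merge sort
    and stored on disk, not held in memory.)"""
    index = []
    with open(path, "rb") as f:
        header = f.readline().decode("utf-8").rstrip("\r\n").split(",")
        col = header.index(column)
        offset = f.tell()
        for raw in iter(f.readline, b""):
            fields = raw.decode("utf-8").rstrip("\r\n").split(",")
            index.append((fields[col], offset))
            offset = f.tell()
    index.sort()
    return index

def query(path, index, key):
    """Answer a query: binary-search the sorted index, then seek into
    the original file and return every matching row in full."""
    i = bisect.bisect_left(index, (key,))
    rows = []
    with open(path, "rb") as f:
        while i < len(index) and index[i][0] == key:
            f.seek(index[i][1])
            rows.append(f.readline().decode("utf-8").rstrip("\r\n"))
            i += 1
    return rows
```

One such sorted index per queried column (email, zip code) keeps lookups to a handful of seeks, well inside the 30-second budget, and the answer is always the full original row because the index stores offsets rather than copies of the data.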
I decided that it wouldn’t be fair to ask candidates to do something like that without doing it myself. Mostly because the fact that I have a good idea about how to do something doesn’t mean that I understand the actual implementation issues that might pop up.
I actually gave myself a somewhat harder task: do the above-mentioned task, but do it without access to any library other than the BCL, and do so with a minimal amount of memory usage. The entire thing took less than a day, and it solves the problem quite a bit more efficiently than I had anticipated.
But I’ll discuss the details of this in my next post.