Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 330 words

The usual caveat applies: I am running this in process, using a small data size and a small number of documents.

These aren't real benchmarks so much as rough numbers, but they are good enough to give me an indication of where I am heading, and whether or not I am going in completely the wrong direction.

Those two should be obvious:

(two benchmark charts; images not preserved)

This one is more interesting. RDB doesn't do immediate indexing; I chose to accept higher CUD throughput and make indexing a background process. That means that the index may be inconsistent for a short while, but it greatly reduces the amount of work required to insert/update a document.

But what is that short while in which the document and the DB may be inconsistent? The average time seems to be around 25ms in my tests, with some spikes toward 100ms in some of the starting results. In general, it looks like things are improving the longer the database runs. Trying it out over a 5,000 document set gives me an average update duration of 27ms, but note that I am testing the absolute worst usage pattern: lots of small documents inserted one at a time, with index requests coming in as well.

(benchmark chart; image not preserved)

Be that as it may, having an inconsistency period measured in a few milliseconds seems acceptable to me. Especially since RDB is nice enough to actually tell me if there are any inconsistencies in the results, so I can choose whether to accept them or retry the request.
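To make that accept-or-retry choice concrete, here is a minimal client-side sketch of retrying a potentially stale query. The `QueryResult`/`IsStale` names are my own illustrative assumptions, not the actual API:

```csharp
using System;
using System.Threading;

public class QueryResult
{
    public bool IsStale { get; set; }      // set by the server when the index lags behind writes
    public string[] Results { get; set; }
}

public static class StaleQueryExample
{
    // Retry a query until the index catches up, or accept the stale result at the deadline.
    public static QueryResult QueryWithRetry(Func<QueryResult> query, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (true)
        {
            var result = query();
            if (result.IsStale == false || DateTime.UtcNow >= deadline)
                return result;
            Thread.Sleep(25); // the measured inconsistency window above is ~25ms
        }
    }
}
```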

time to read 6 min | 1053 words

I am currently doing the production-ready pass through the Rhino DivanDB code base, and I thought that this change was interesting enough to post about:

public void Execute()
{
    while(context.DoWork)
    {
        bool foundWork = false;
        transactionalStorage.Batch(actions =>
        {
            var task = actions.GetFirstTask();
            if(task == null)
            {
                actions.Commit();
                return;
            }
            foundWork = true;

            task.Execute(context);

            actions.CompleteCurrentTask();

            actions.Commit();
        });
        if(foundWork == false)
            context.WaitForWork();
    }
}

This is the “just get things working” phase. When getting a piece of code ready for production, I am looking for several things:

  • If this is running in production, and I get the log file, will I be able to understand what is going on?
  • Should this code handle any exceptions?
  • What happens if I send values from a previous version? From a future version?
  • Am I doing unbounded operations?
  • For error handling, can I avoid any memory allocations?

The result for this piece of code was:

public void Execute()
{
    while(context.DoWork)
    {
        bool foundWork = false;
        transactionalStorage.Batch(actions =>
        {
            var taskAsJson = actions.GetFirstTask();
            if (taskAsJson == null)
            {
                actions.Commit();
                return;
            }
            log.DebugFormat("Executing {0}", taskAsJson);
            foundWork = true;

            Task task;
            try
            {
                task = Task.ToTask(taskAsJson);
                try
                {
                    task.Execute(context);
                }
                catch (Exception e)
                {
                    if (log.IsWarnEnabled)
                    {
                        log.Warn(string.Format("Task {0} has failed and was deleted without completing any work", taskAsJson), e);
                    }
                }
            }
            catch (Exception e)
            {
                log.Error("Could not create instance of a task from " + taskAsJson, e);
            }

            actions.CompleteCurrentTask();
            actions.Commit();
        });
        if(foundWork == false)
            context.WaitForWork();
    }
}

The code size blows up really quickly.

time to read 5 min | 935 words

Here is a unit test testing Rhino DivanDB:

(screenshot of the unit test; image not preserved)

Here is a test that tests the same thing, using scenario based approach:

(screenshot of the scenario-based test; image not preserved)

What are those strange files? Well, let us take a peek at the first one:

0_PutDoc.request 0_PutDoc.response

PUT /docs HTTP/1.1
Content-Length: 283

{
    "_id": "ayende",
    "email": "ayende@ayende.com",
    "projects": [
        "rhino mocks",
        "nhibernate",
        "rhino service bus",
        "rhino divan db",
        "rhino persistent hash table",
        "rhino distributed hash table",
        "rhino etl",
        "rhino security",
        "rampaging rhinos"
    ]
}

HTTP/1.1 201 Created
Connection: close
Content-Length: 15
Content-Type: application/json; charset=utf-8
Date: Sat, 27 Feb 2010 08:12:08 GMT
Server: Kayak

{"id":"ayende"}

Those are just test files, corresponding to the request and the expected response.

RDB turns those into tests by issuing each request in turn and asserting on the actual output. This is slightly more complicated than it seems, because some requests contain things like dates, or generated guids. The scenario runner is aware of those and resolves them automatically. Another issue is dealing with potentially stale requests, especially because we are issuing requests on the same data immediately. Again, this is something that the scenario runner handles internally, and we don’t have to worry about it.
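The normalization step can be sketched roughly like this; the regexes and names are illustrative assumptions, not the actual scenario-runner code:

```csharp
using System;
using System.Text.RegularExpressions;

public static class ResponseNormalizer
{
    // Values that legitimately differ between runs (generated guids, dates)
    // are replaced with stable placeholders before comparing the actual
    // response against the expected .response file.
    static readonly Regex GuidPattern = new Regex(
        @"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}");
    static readonly Regex DatePattern = new Regex(
        @"\w{3}, \d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2} GMT");

    public static string Normalize(string response)
    {
        var result = GuidPattern.Replace(response, "<guid>");
        return DatePattern.Replace(result, "<date>");
    }
}
```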

There are some things here that may not be immediately apparent. We are doing pure state-based testing; in fact, this is black box testing. The scenarios define the external API of the system, which is a nice addition.

We don’t care about the actual implementation. Look at the unit test: we need to set up a db instance, start the background threads, etc. If I modify the DocumentDatabase constructor, or the initialization process, I need to touch each test that uses it. I can try to encapsulate that, but in many cases, you really can’t do that upfront. SpinBackgroundWorkers, for example, is something that is required in only some of the unit tests, and it was a late addition. So I would have to go and add it to each of the tests that require it.

Because the scenarios don’t have any intrinsic knowledge about the server, any required change is something that you would have to make in a single location, nothing more.

Users can send me failure scenarios. I am using this extensively with NH Prof (InitializeOfflineLogging), and it is amazing. When a user runs into a problem, I can tell them: please send me a Fiddler trace of the issue, and I can turn that into a repeatable test in a matter of moments.

I actually thought about using Fiddler’s saz files as the format for my scenarios, but I would have to actually understand them first. :-) It doesn’t look hard, but flat files seemed easier still.

Actually, I went ahead and made the modification, because now I have even less friction: just record a Fiddler session, drop it in a folder, and I have a test. It turned out that the Fiddler format is very easy to work with.
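A .saz file is just a zip archive, with each captured session stored under raw/ as N_c.txt (the client request) and N_s.txt (the server response). A minimal sketch of pulling the request/response pairs out (my own illustration, not the blog's actual runner code) might look like:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;

public static class SazReader
{
    // Read all (request, response) session pairs out of a Fiddler .saz archive.
    public static (string Request, string Response)[] ReadSessions(string sazPath)
    {
        using var archive = ZipFile.OpenRead(sazPath);
        return archive.Entries
            .Where(e => e.FullName.EndsWith("_c.txt", StringComparison.OrdinalIgnoreCase))
            .OrderBy(e => e.FullName, StringComparer.Ordinal)
            .Select(request =>
            {
                // The matching server response lives next to the request entry.
                var responseName = request.FullName.Replace("_c.txt", "_s.txt");
                var response = archive.GetEntry(responseName);
                return (ReadEntry(request), response == null ? "" : ReadEntry(response));
            })
            .ToArray();
    }

    static string ReadEntry(ZipArchiveEntry entry)
    {
        using var reader = new StreamReader(entry.Open());
        return reader.ReadToEnd();
    }
}
```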

time to read 1 min | 92 words

Rhino Divan DB is going to come in at least two forms, embedded, and remote. The following is a full example of starting DivanDB, defining a view, adding some documents and then querying the database.

Note that here we want to ensure that we get the most up to date result, so we refuse to accept a potentially stale query.

(screenshot of the code sample; image not preserved)

This outputs the right result, by the way :-)
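Since the code screenshot did not survive, here is a toy in-memory model of the behavior that example demonstrates (my own sketch, not DivanDB's actual API): documents are visible immediately, indexing catches up in the background, and a query can refuse a stale result by waiting for indexing to finish:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ToyDivanDb
{
    readonly Dictionary<string, string> docs = new Dictionary<string, string>();
    readonly Dictionary<string, string> index = new Dictionary<string, string>();

    // The write is durable and visible immediately.
    public void Put(string id, string json) => docs[id] = json;

    // In the real database a background thread drains a task queue; the
    // sketch exposes the indexing step explicitly so it is deterministic.
    public void RunIndexingStep()
    {
        foreach (var doc in docs)
            index[doc.Key] = doc.Value;
    }

    public (bool IsStale, string[] Results) Query(bool waitForNonStaleResults = false)
    {
        if (waitForNonStaleResults)
            while (IsStale()) RunIndexingStep();
        return (IsStale(), index.Values.ToArray());
    }

    bool IsStale() =>
        docs.Count != index.Count
        || docs.Any(d => !index.TryGetValue(d.Key, out var v) || v != d.Value);
}
```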

time to read 3 min | 474 words

One of the things that I wanted to do with RDB is to create an explicit actor model inside the codebase. I have been using a similar structure inside NH Prof, and it has been quite successful. The design goals for RDB is:

Assumptions for the database construction

Get / Put / Delete semantics for Json documents.

All those operations can work on batches of documents. Those operations fully implement ACID, which means that if you got a successful response for a document Put, you can rely on the document always being there.

Those operations should be considered cheap.

Reboot / crash resistant

The DB can crash or restart, but no loss of functionality may occur; as soon as it restarts, everything goes on as usual. There can be no in-memory data structures / work that cannot be recovered from persistent storage.

Views for searching

The DB uses views, defined using linq expressions, to support search capabilities. Those views are indexed in the background (so request processing is never held up for views). When you get a result from a query, you always know whether the result is stale or not.

Adding a view to an existing database is a cheap operation, regardless of the database size. During view construction, the view can be queried (but its results will be considered stale). Reboot during view construction will not impact the construction process.

Indexing a document twice is a safe, idempotent operation, which means that a view can always choose to re-index things if it needs to.

Overall design

(architecture diagram; image not preserved)

RDB stores two major pieces of information in transactional storage.

Documents, obviously, which are stored in a format that allows sending the document content to the user quickly; and tasks.

Tasks are how RDB maintains state over crashes / reboots, and they also form the basis of the database’s async work. Any work that is going to take some time for the database to perform is written to transactional storage as a task. Those tasks are things like: “View ‘peopleByName’ should index documents 1–42”.

There are background threads working off this task queue, performing the work and removing each task when it is completed.
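The `Task.ToTask(taskAsJson)` call in the earlier worker code hints at how this fits together: a persisted task records its type plus its parameters, and the background thread rehydrates it before executing. A minimal sketch of that round-trip (my own illustration, using a trivial colon-delimited format rather than RDB's actual json serialization):

```csharp
using System;

// Pending work is persisted to transactional storage so it survives a crash;
// a background thread deserializes it and executes it.
public abstract class Task
{
    public abstract string Persist();

    // Rebuild a task instance from its persisted form; a real implementation
    // would dispatch on a type name stored in the json.
    public static Task ToTask(string persisted)
    {
        var parts = persisted.Split(':');
        if (parts[0] == "index")
            return new IndexDocumentsTask
            {
                View = parts[1],
                From = int.Parse(parts[2]),
                To = int.Parse(parts[3])
            };
        throw new InvalidOperationException("Unknown task type: " + parts[0]);
    }
}

// e.g. "View 'peopleByName' should index documents 1-42"
public class IndexDocumentsTask : Task
{
    public string View;
    public int From, To;

    public override string Persist() => "index:" + View + ":" + From + ":" + To;
}
```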

The results of each view is written to a Lucene index (one per view).

So far I have the entire structure done. I need to do some polishing, and I have a different OSS strategy to go with, but things are looking good.

time to read 1 min | 194 words

No, this isn’t a post about how to do UI delegation in software. This is me, looking at HTML code and realizing that I have a deep and troubling disadvantage: while I know how to build UI, I am just not very good at it.

For a certain value of done, Rhino Divan DB is done. Architecture is stable, infrastructure is in place, everything seems to be humming along nicely. There is still some work to be done (error handling, production worthiness, etc) but those are relatively minor things.

The most major item on the table, however, is providing a basic UI on top of the DB. I already have a working prototype, but I came to realize that I am tackling something that I am just not good at.

This is where you, hopefully, come into play. I am interested in someone with a good grasp of HTML/JS to build a very simple CRUD interface on top of a JSON API. That is simple enough, but the small caveat is that my deadlines are short; I would like to see something working tomorrow.

Anyone is up for the challenge?

time to read 1 min | 119 words

It is interesting, I have been thinking about Divan DB for a long time now, and tonight I decided to give it some love and finally try to see exactly how hard it is going to be to write it.

It turns out that it isn’t really hard at all. You can see my progress here (http://github.com/ayende/rhino-divandb/tree/master). It now supports adding and retrieving documents, defining views, view caching, and (very simple) view application.

So you can now push a bunch of views and documents to the database and get the result back. There is a lot more to do (for example, handling edits with the views), but it seems to be fairly straight forward so far.
