Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 1 min | 96 words

Originally posted at 11/27/2010

Put simply, it is not this:

The main idea is that RavenDB is a NoSQL database that unabashedly aims to be something you can just pick up and use. The database takes care of those details for you, because for the most part you can trust the defaults.

That ranges from having a good story for the Client API and a Linq provider, to choosing sensible defaults, to working very hard on making sure that everything fits (things like ad hoc queries, for example).

time to read 2 min | 312 words

Originally posted at 11/27/2010

One of the features that people have asked for in RavenDB is the notion of multi tenancy. The idea is that I can (easily) create a new database for each tenant (the RavenDB server contains databases, which in turn contain indexes and documents). As the feature name implies, while this is actually an implementation of several databases on the same server, the usage goal is to host different tenants, and the RavenDB implementation is aimed at that exact scenario.

As such, it is expected that some users will have hundreds / thousands of databases on a single instance (think shared hosting or hosted raven). In order to support this, we need to have a good handle on our resources.

RavenDB handles this scenario by loading databases on demand, and unloading them again when they aren’t used for a long enough period. Using this approach, the only databases that consume resources are the ones actually being used.
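The load-on-demand idea can be sketched roughly like this (an illustrative sketch of idle-based eviction, not RavenDB's actual implementation; all names here are made up):

```csharp
using System;
using System.Collections.Generic;

// Tracks which tenant databases are "loaded" and when each was last used,
// so that databases idle past a threshold can be unloaded.
public class TenantCache
{
    private readonly Dictionary<string, DateTime> lastAccess = new Dictionary<string, DateTime>();
    private readonly TimeSpan idleTimeout;

    public TenantCache(TimeSpan idleTimeout)
    {
        this.idleTimeout = idleTimeout;
    }

    // Loads the tenant database on first use and stamps its last access time.
    public void Touch(string tenant, DateTime now)
    {
        lastAccess[tenant] = now;
    }

    // Unloads every database that has been idle longer than the timeout;
    // returns how many were evicted.
    public int EvictIdle(DateTime now)
    {
        var idle = new List<string>();
        foreach (var kvp in lastAccess)
            if (now - kvp.Value > idleTimeout)
                idle.Add(kvp.Key);
        foreach (var tenant in idle)
            lastAccess.Remove(tenant);
        return idle.Count;
    }

    public int LoadedCount { get { return lastAccess.Count; } }
}
```

With a sweep like `EvictIdle` running periodically, only the tenants that were recently touched keep consuming memory.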

In order to use the multi tenancy features, all you need to do is call:


var northwindSession = documentStore.OpenSession("Northwind");

You can now work in a separate database from all other databases, with no potential for data to leak from one tenant to another. Creating a database on the fly is a very cheap operation, as well, so you can create as many of them as you want.

time to read 1 min | 100 words

Originally posted at 11/27/2010

A while ago I had to make a decision regarding how to approach building the multi tenancy feature for RavenDB. Leaving aside the actual multi tenancy approach, we had to decide how to allow access to it.

We have the following options for accessing the northwind database:

  • /northwind/docs - breaking change, obvious, less work
  • /docs?database=northwind - non breaking change, not so obvious, more work
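To make the trade-off concrete, here is a rough sketch of the parsing each option implies (illustrative names and code, not RavenDB's actual routing):

```csharp
using System;

public static class TenantRouting
{
    // Option 1: /northwind/docs - the tenant is the first path segment.
    // Clients that call plain /docs are broken unless they are updated.
    public static string FromPath(string path)
    {
        var parts = path.Trim('/').Split('/');
        return parts.Length > 1 ? parts[0] : null; // "/docs" alone -> default database
    }

    // Option 2: /docs?database=northwind - the tenant rides in the query
    // string, so existing /docs clients keep working unchanged.
    public static string FromQuery(string query)
    {
        foreach (var pair in query.TrimStart('?').Split('&'))
        {
            var kv = pair.Split('=');
            if (kv.Length == 2 && kv[0] == "database")
                return kv[1];
        }
        return null; // no database parameter -> default database
    }
}
```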

What choice would you take?

If you already know the choice I made for RavenDB, please don’t answer this post.

time to read 2 min | 224 words

Originally posted at 11/25/2010

In a recent post, I discussed the notion of competitive advantage and how you should play around them. In this post, I am going to focus on Uber Prof. Just to clarify, when I am talking about Uber Prof, I am talking about NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, Hibernate Profiler and LLBLGen Profiler. Uber Prof is just a handle for me to use to talk about each of those.

So, what is the major competitive advantage that I see in the Uber Prof line of products?

Put very simply, they focus very heavily on the developer’s point of view.

Other profilers will give you the SQL that is being executed, but Uber Prof will show you the SQL and:

  • Format that SQL in a way that makes it easy to read.
  • Group the SQL statements into sessions, which lets the developer look at what is going on within its natural boundary.
  • Associate each query with the exact line of code that executed it.
  • Provide the developer with guidance about improving their code.

There is other stuff, of course, but those are the core features that make Uber Prof into what it is.

time to read 2 min | 237 words

Originally posted at 11/22/2010

In a previous post, I talked about how I found the following (really nasty) bug in RavenDB’s managed storage (which is still considered unstable, btw):

When deleting documents in a database that contains more than 2 documents, and the document(s) deleted are deleted in a certain order, RavenDB would go into 100% CPU usage. The server would still function, but it would always think that it had work to do, even if it didn’t have any.

Now, I want to talk about the actual bug.


What I did wrong here was to reuse the removed and value out parameters in the second call to TryRemove. That call is internal, and is only needed to properly balance the tree, but what it ended up doing was always returning the removed/value from the right side of the tree.
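In isolation, the mistake looks like this; a minimal, hypothetical reproduction (illustrative only, not RavenDB's actual code) of reusing the same out parameters in a second call:

```csharp
using System;

public static class OutParamReuse
{
    // Pretend lookup: succeeds only for key 1.
    public static bool TryGet(int key, out bool found, out string value)
    {
        found = key == 1;
        value = found ? "one" : null;
        return found;
    }

    public static void Main()
    {
        bool removed;
        string value;
        TryGet(1, out removed, out value); // removed == true, value == "one"
        // A second call made only for internal bookkeeping (like rebalancing
        // a tree) silently clobbers both out variables:
        TryGet(2, out removed, out value);
        Console.WriteLine(removed); // prints False, even though key 1 was found
    }
}
```

The fix direction is straightforward: give the internal call its own local variables and discard them, so the caller's out parameters keep the first result.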

Compounding the problem is that I only actually used the TryRemove value in a single location, and even then, it was a mistake. Take a look:


That meant that I actually looked for the problem in the secondary indexes for a while, before realizing that the actual problem was elsewhere.

time to read 3 min | 573 words

Originally posted at 11/22/2010

This method has a bug, a very subtle one. Can you figure it out?

public IBinarySearchTree<TKey, TValue> TryRemove(TKey key, out bool removed, out TValue value)
{
    IBinarySearchTree<TKey, TValue> result;
    int compare = comparer.Compare(key, theKey);
    if (compare == 0)
    {
        removed = true;
        value = theValue;
        // We have a match. If this is a leaf, just remove it
        // by returning Empty.  If we have only one child,
        // replace the node with the child.
        if (Right.IsEmpty && Left.IsEmpty)
            result = new EmptyAVLTree<TKey, TValue>(comparer, deepCopyKey, deepCopyValue);
        else if (Right.IsEmpty && !Left.IsEmpty)
            result = Left;
        else if (!Right.IsEmpty && Left.IsEmpty)
            result = Right;
        else
        {
            // We have two children. Remove the next-highest node and replace
            // this node with it.
            IBinarySearchTree<TKey, TValue> successor = Right;
            while (!successor.Left.IsEmpty)
                successor = successor.Left;
            result = new AVLTree<TKey, TValue>(comparer, deepCopyKey, deepCopyValue, successor.Key,
                successor.Value, Left, Right.TryRemove(successor.Key, out removed, out value));
        }
    }
    else if (compare < 0)
    {
        result = new AVLTree<TKey, TValue>(comparer, deepCopyKey, deepCopyValue,
            theKey, theValue, Left.TryRemove(key, out removed, out value), Right);
    }
    else
    {
        result = new AVLTree<TKey, TValue>(comparer, deepCopyKey, deepCopyValue,
            theKey, theValue, Left, Right.TryRemove(key, out removed, out value));
    }
    return MakeBalanced(result);
}
time to read 1 min | 99 words

I am going to be at Dallas Days of .NET in March next year. You can use the following link to get a discount if you order now: http://jointechies.eventbrite.com/?discount=OrenEini

This is going to be an interesting event, because there is one track in which I am going to be doing every other talk for 2 days. This is going to give me a wide enough scope to cover just about every topic that I am interested in, including some time to go in depth into several topics that I usually only have the chance to skim.

time to read 3 min | 463 words

Yesterday I had an interesting talk with a friend about being a Micro ISV. I am not sure how good a source of advice I am in the matter, but I did have some, including one piece that I think is good enough to talk about here.

Currently my company has two products:

Both of them came into a market that already had strong competitors.

In the case of Uber Prof, I am competing with SQL Profiler, which is “free” (you get that with SQL Server) and the Huagati suite of profilers which are significantly cheaper than Uber Prof. In the case of RavenDB, MongoDB and CouchDB already had the mindshare, and they are both free as in beer and as in speech.

One of the decisions that you have to be aware of when creating your product is what are the products that people are going to compare you to. It doesn’t really matter whether that is an accurate comparison or whether they are comparing apples to camels, but you will be compared to them.

And early on, you have to decide what your answer is going to be like when someone will ask you “why should I use your stuff instead of XYZ?”.

Here is a general rule of thumb: you never want to answer “because it is cheaper than XYZ”. Pricing has a lot of implications, some of which directly affect the perceived quality of your product. It is perfectly fine to point out that it has a much cheaper TCO, though, because then you are increasing the value of your product, not reducing it.

But that is general advice that you can get anywhere. My point here is somewhat different. Once you decide what you are doing with your product that gives you a good answer to that question, you have defined your competitive advantage: that thing that will make people choose your stuff over everyone else’s.

Remember, competing on pricing is a losing proposition – and the pun is fully intended here!

But once you have the notion of what your competitive advantage is going to be, you have to design your product around that. In essence, that competitive advantage is going to be the thing that you are going to work on. Every decision that you have is going to have to be judged in light of the goal of increasing your competitive advantage.

Can you try to guess what I define as the competitive advantages for Uber Prof and RavenDB?

time to read 1 min | 123 words

Here is the story: the only reason I am using iTunes is that I want to sync books that I buy from audible.com to my iPhone.

I am still fighting this problem. I have installed / uninstalled, danced the mambo and even tried some chicken sacrifice on the last full moon. Nothing helps. Oh, it works once, immediately after I install it, but on the next reboot it shows the same error.

Right now I have uninstalled iTunes from my system, and I am currently building a VM specifically so I will be able to sync new audiobooks to my iPhone. I think that this is insane.

Anyone got a better option than that?

time to read 11 min | 2014 words

We had the following (really nasty) bug in RavenDB’s managed storage (which is still considered unstable, btw):

When deleting documents in a database that contains more than 2 documents, and the document(s) deleted are deleted in a certain order, RavenDB would go into 100% CPU usage. The server would still function, but it would always think that it had work to do, even if it didn’t have any.

To call this annoying is an understatement. To understand the bug, I have to explain a bit about how RavenDB uses Munin, the managed storage engine. Munin gives you the notion of a primary key (which can be any JSON tuple) and secondary indexes. As expected, the PK is unique, but secondary indexes can contain duplicate values.

The problem that we had was that for some reason, removing values from the table wouldn’t remove them from the secondary indexes. That drove me crazy. At first, I tried to debug the problem by running the following unit test:

public class CanHandleDocumentRemoval : LocalClientTest
{
    public void CanHandleDocumentDeletion()
    {
        using (var store = NewDocumentStore())
        {
            using (var session = store.OpenSession())
            {
                for (int i = 0; i < 3; i++)
                    session.Store(new User { Name = "ayende" });
                session.SaveChanges();
            }
            using (var session = store.OpenSession())
            {
                var users = session.Query<User>("Raven/DocumentsByEntityName")
                    .Customize(x => x.WaitForNonStaleResults()).ToArray();
                foreach (var user in users)
                    session.Delete(user);
                session.SaveChanges();
            }
            using (var session = store.OpenSession())
            {
                var users = session.Query<User>("Raven/DocumentsByEntityName")
                    .Customize(x => x.WaitForNonStaleResults(TimeSpan.FromSeconds(5))).ToArray();
                Assert.Empty(users);
            }
        }
    }
}

But while this reproduced the problem, it was very hard to debug properly, mostly because it executes the entire RavenDB server, which means that I had to deal with such things as concurrency, multiple operations, etc.

After a while, it became clear that I wouldn’t be able to understand the root cause of the problem from that test, so I decided to take a different route. I started to add logging in the places where I thought the problem was, and then I turned that log into a test all of its own.

public void CanProperlyHandleDeletingThreeItemsBothFromPK_And_SecondaryIndexes()
{
    var cmds = new[]
    {
        @"""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""NiAAMOT72EC/We7rnZS/Fw==""}",
        @"{""Cmd"":""Put"",""Key"":{""index"":""Raven/DocumentsByEntityName"",""id"":""AAAAAAAAAAEAAAAAAAAABg=="",""time"":""\/Date(1290420997509)\/"",""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""NiAAMOT72EC/We7rnZS/Fw==""}",
        @"{""Cmd"":""Put"",""Key"":{""index"":""Raven/DocumentsByEntityName"",""id"":""AAAAAAAAAAEAAAAAAAAABw=="",""time"":""\/Date(1290420997509)\/"",""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""NiAAMOT72EC/We7rnZS/Fw==""}",
        @"{""Cmd"":""Commit"",""TableId"":9,""TxId"":""NiAAMOT72EC/We7rnZS/Fw==""}",
        @"{""Cmd"":""Del"",""Key"":{""index"":""Raven/DocumentsByEntityName"",""id"":""AAAAAAAAAAEAAAAAAAAABg=="",""time"":""\/Date(1290420997509)\/"",""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""wM3q3VA0XkWecl5WBr9Cfw==""}",
        @"{""Cmd"":""Del"",""Key"":{""index"":""Raven/DocumentsByEntityName"",""id"":""AAAAAAAAAAEAAAAAAAAABw=="",""time"":""\/Date(1290420997509)\/"",""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""wM3q3VA0XkWecl5WBr9Cfw==""}",
        @"{""Cmd"":""Del"",""Key"":{""index"":""Raven/DocumentsByEntityName"",""id"":""AAAAAAAAAAEAAAAAAAAABQ=="",""time"":""\/Date(1290420997504)\/"",""type"":""Raven.Database.Tasks.RemoveFromIndexTask"",""mergable"":true},""TableId"":9,""TxId"":""wM3q3VA0XkWecl5WBr9Cfw==""}",
        @"{""Cmd"":""Commit"",""TableId"":9,""TxId"":""wM3q3VA0XkWecl5WBr9Cfw==""}",
    };
    var tableStorage = new TableStorage(new MemoryPersistentSource());
    foreach (var cmdText in cmds)
    {
        var command = JObject.Parse(cmdText);
        var tblId = command.Value<int>("TableId");
        var table = tableStorage.Tables[tblId];
        var txId = new Guid(Convert.FromBase64String(command.Value<string>("TxId")));
        var key = command["Key"] as JObject;
        if (key != null)
        {
            foreach (var property in key.Properties())
            {
                if (property.Value.Type != JTokenType.String)
                    continue;
                var value = property.Value.Value<string>();
                if (value.EndsWith("==") == false)
                    continue;
                key[property.Name] = Convert.FromBase64String(value);
            }
        }
        switch (command.Value<string>("Cmd"))
        {
            case "Put":
                table.Put(command["Key"], new byte[] { 1, 2, 3 }, txId);
                break;
            case "Del":
                table.Remove(command["Key"], txId);
                break;
            case "Commit":
                table.CompleteCommit(txId);
                break;
        }
    }
    Assert.Empty(tableStorage.Tasks);
    Assert.Null(tableStorage.Tasks["ByIndexAndTime"].LastOrDefault());
}

The cmds variable that you see here was generated from the logs. What I did was generate the whole log, verify that it reproduced the bug, and then start trimming the commands until I had the minimal set that reproduced it.

Using this approach, I was able to narrow the actual issue down to a small set of APIs, which I was then able to go through in detail, and finally figure out what the bug was.

This post isn’t about the bug (I’ll cover that in the next post), but about the idea of going from “there is a bug and I don’t know how to reproduce it in a small enough setting to understand it” to “here are the exact things that fail”. A more sophisticated approach would be to do a dump of stack traces and parameters and execute that, but for my scenario, it was easy to just construct things from the log.
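The trimming step itself is mechanical enough to automate. Here is a sketch of a greedy, one-command-at-a-time reduction (a simplified cousin of delta debugging; the names are illustrative, and the predicate stands in for "replay these commands and check whether the bug still reproduces"):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LogMinimizer
{
    // Repeatedly tries to drop one command; keeps any smaller list that
    // still reproduces the failure, until no single command can be removed.
    public static List<string> Minimize(List<string> cmds, Func<List<string>, bool> stillFails)
    {
        var result = new List<string>(cmds);
        bool shrunk = true;
        while (shrunk)
        {
            shrunk = false;
            for (int i = 0; i < result.Count; i++)
            {
                var candidate = result.Where((_, idx) => idx != i).ToList();
                if (stillFails(candidate))
                {
                    result = candidate; // the command at position i wasn't needed
                    shrunk = true;
                    break;
                }
            }
        }
        return result;
    }
}
```

The result is a locally minimal command list: removing any single remaining command makes the bug stop reproducing, which is exactly the property that made the trimmed cmds array above so useful for debugging.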

