Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:


+972 52-548-6969


Posts: 6,317 | Comments: 46,916


Setting up a Rhino Service Bus Application: Part I

time to read 5 min | 826 words

Part of what we are currently doing at Hibernating Rhinos is work on some internal applications. In particular, we slowly update our existing applications to do two things:

  • Make sure that we are running a maintainable software package.
  • Dog food our own stuff in real world scenarios.

In this case, I want to talk about Rhino Service Bus. Corey has been maintaining the project, and I haven’t really followed what was going on very closely. Today we started doing major work on our order processing backend system, so I decided that we might as well ditch the 2.5+ year old version of Rhino Service Bus that we were running and get working with the latest.

I was so lazy about it that I didn’t even bother to get the sources; I just got the NuGet packages:

Install-Package Rhino.ServiceBus.Castle

Install-Package Rhino.ServiceBus.Host

That got me halfway there. No need to worry about adding references, etc. What an awesome idea.

Next, I had to deal with the configuration. Here is how you configure a Rhino Service Bus application (app.config):

<configuration>
  <configSections>
    <section name="rhino.esb"
             type="Rhino.ServiceBus.Config.BusConfigurationSection, Rhino.ServiceBus"/>
  </configSections>
  <!-- the endpoint values below are illustrative -->
  <rhino.esb>
    <bus threadCount="1"
         numberOfRetries="5"
         endpoint="msmq://localhost/HibernatingRhinos.Orders.Backend"/>
    <messages>
      <add name="HibernatingRhinos.Orders.Backend.Messages"
           endpoint="msmq://localhost/HibernatingRhinos.Orders.Backend"/>
    </messages>
  </rhino.esb>
</configuration>

As you can see, it is fairly simple. Specify the endpoint and the ownership of the messages, and you are pretty much done.

The next step is to set up the actual application. In order to do that, we need to define a boot strapper. Think about it like a constructor, but for the entire application. The minimal boot strapper needed is:

public class BootStrapper : CastleBootStrapper
{
}

Again, it doesn’t really get any simpler.
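If you need to register your own components, the boot strapper is the natural place for it. A sketch, assuming the Castle boot strapper exposes a virtual ConfigureContainer hook and a container field (check the version you have; the member names may differ, and the component below is purely illustrative):

```csharp
public class BootStrapper : CastleBootStrapper
{
    protected override void ConfigureContainer()
    {
        base.ConfigureContainer();

        // Register application specific services here. IOrderProcessor /
        // OrderProcessor are made-up names, not part of Rhino Service Bus.
        container.Register(
            Component.For<IOrderProcessor>()
                     .ImplementedBy<OrderProcessor>());
    }
}
```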

The next step is somewhat of a pain. A Rhino Service Bus project is usually a class library (dll), so if we want to run it, we need to host it somewhere. RSB comes with Rhino.ServiceBus.Host, but the command line interface sucks. You need to do this to get it running:

Rhino.ServiceBus.Host.exe /Action:Debug /Name:HibernatingRhinos.Orders.Backend /Assembly:.\HibernatingRhinos.Orders.Backend.dll

And honestly, whoever thought about making it so complicated? Give me something to make this simpler, for crying out loud. What moron developer had no consideration for actual system usability?

Oh, yeah, that would be me. Maybe I ought to fix that.

Anyway, that is pretty much everything you need to do to get RSB up & running. Here is a sample test consumer:

public class TestConsumer : ConsumerOf<TestMsg>
{
    public void Consume(TestMsg message)
    {
        // handle the message here
    }
}

And that is enough for now. See you in the next post…

A new daily builds site

time to read 2 min | 245 words

It is absolutely shocking how quickly small applications add up. One of the applications that has been an absolute core part of the architecture in Hibernating Rhinos is the daily builds server. It is a core part of the system because it is the part that is responsible for… well, managing the downloads of our products, but it is also the endpoint that the profilers use to check whether they need to be updated, and how to get the new binaries.

Unfortunately, that piece was written in about two hours sometime in 2008 (I only have history for the project up to mid 2009). It was a MonoRail project (ASP.NET MVC wasn’t released yet, and I didn’t really like it at the time), and it hasn’t been updated since.

I believe I have mentioned that there should be a very small transition between “you are hired” and “your stuff is in production”. We got a new developer last week, and we now have a new version of the daily builds site out: http://builds.hibernatingrhinos.com/

The new site is built on top of RavenDB, and has a much better UI. It also gives us, for the very first time, a real feel for the number of downloads that we actually have over time.

In particular, we now have two download graphs, and both are things that I am very happy to see.

NHibernate vs. Entity Framework: Usage

time to read 1 min | 189 words

This is in response to a question on twitter:


In general, for applications, I would always use NHibernate. Mostly because I am able to do so much more with it.

For one-off utilities and such, where I just need to get the data and I don’t really care how, I would generally just use the EF wizard to generate a model and do something with it. Mostly, it is so I can get the code gen stuff ASAP and start working with the data.

For example, the routine to import the data from the Subtext database to the RaccoonBlog database is using Entity Framework. Mostly because it is easier, and it is a one time thing.

I guess that if I were running one of the tools that generate an NHibernate model from the database, I would be using that, but honestly, it just doesn’t really matter at that point. I just don’t care for those sorts of tools.

RavenDB: Replication & Master <-> Master support

time to read 4 min | 621 words

RavenDB supports master/master replication, with the caveat that such scenarios may result in conflicts. I recently had a long discussion with a customer about deploying a clustered RavenDB server.

Setting it up is as simple as having two RavenDB servers and telling each one to replicate to the other:
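The screenshot is gone, but in code the wiring boils down to putting a replication destinations document on each server, pointing at the other one. A sketch, assuming the client API of that era (the URL is a placeholder; swap it on the second server):

```csharp
using (var session = store.OpenSession())
{
    session.Store(new ReplicationDocument
    {
        Destinations =
        {
            // point this server at the other one
            new ReplicationDestination { Url = "http://second-server:8080" }
        }
    }, "Raven/Replication/Destinations");
    session.SaveChanges();
}
```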


One nice aspect of this is that with the way RavenDB replication works, we also get failover for free. With the setup that we have here, we need to configure the client to failover for both reads and writes (by default, we only failover reads):

store.Conventions.FailoverBehavior = FailoverBehavior.AllowReadsFromSecondariesAndWritesToSecondaries;

And now we are all set. During normal operations, all writes to the primary are replicated to the secondary automatically. If the primary goes down for any reason, we will failover to the secondary transparently. When the primary comes back up, we will switch back to it, and the secondary will replicate all of the missing writes to the primary server.

So far, so good. There is only one thing that we have to worry about: conflicts.

What happens if, during the failure period, we had two writes to the same document: one at the primary and one at the secondary? There is a very slim chance of this happening, but it is something that we have to deal with. From RavenDB’s perspective, that is considered to be a conflict. We have two documents with the same key that have different ancestry chains. At that point, RavenDB will save all the conflicting versions and then create a new conflict document with the document key. Access to the document would result in an error, and give you access to all of the conflicting versions. You now have to resolve the conflict by saving a new version of the document.

So far, easy enough, but the problem is that until the conflict is resolved, that document is not accessible. There are alternatives, though. RavenDB can’t decide which version of the document is accurate, or how to merge the two versions, but you might have enough information to do so, and we provide an extension point for you:

public abstract class AbstractDocumentReplicationConflictResolver
{
    public abstract bool TryResolve(string id, RavenJObject metadata, RavenJObject document, JsonDocument existingDoc);
}

In fact, something that I suggested, and that might very well be the conflict resolution strategy, is to select one of the documents (arbitrarily) and use that as the “merged” document, but record elsewhere that a conflict has occurred. The reasoning behind that is that we select one of the document versions, and then we have a human who gets to decide if there is anything that needs to be merged back into the live document.

This is useful because the chance of this actually happening is fairly small, and we ensure that there is no data loss even if it does happen. More importantly, we don’t waste developer cycles on something that we currently don’t know how to handle. If/when we have a real failover, and if/when that results in a conflict, that is when we can start planning to have the next conflict resolved automatically. And the best part? We don’t even need to solve all of the potential conflicts. We can solve the ones that we know about, and rest assured that anything we aren’t familiar with will go through the same process of manual resolution that we already set up.
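A sketch of that suggested strategy as a resolver. The exact TryResolve semantics and the metadata key are my assumptions for illustration, not a shipped implementation:

```csharp
public class TakeLocalAndFlagResolver : AbstractDocumentReplicationConflictResolver
{
    public override bool TryResolve(string id, RavenJObject metadata,
        RavenJObject document, JsonDocument existingDoc)
    {
        // Arbitrarily prefer the version that is already on this server,
        // and flag the document so a human can review the merge later.
        // The metadata key name here is made up.
        metadata["Pending-Manual-Conflict-Review"] = new RavenJValue(true);

        // Returning true tells RavenDB that the conflict has been resolved.
        return true;
    }
}
```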

All in all, it is a fairly nice system, even if I do say so myself.

When should you NOT use RavenDB?

time to read 2 min | 208 words

That is a much more interesting question, especially when it is me that gets asked. Again, I might be biased, but I really think that RavenDB is suitable for a whole range of problems.

One case that we don’t recommend using RavenDB for is reporting, though. In many cases, reporting databases need to answer queries that would be awkward or slow on a RavenDB system. For reporting, we recommend just throwing the data into a reporting database (either a star schema or a cube) and doing the reporting directly from there.

Most reporting tools will give you great support when you do that, and that is one sour spot (the reverse of a sweet spot) for RavenDB. We can do reporting, but you have to know what you want to report on beforehand. In most reporting systems, you give the users a great deal of freedom with regard to how they look at, aggregate, and process the data, and that isn’t something that you can easily do with RavenDB.

So far, that is the one place where we have said, “nope, that isn’t us, go look at that other thing that does this feature really well”. RavenDB is focused on being a good database for OLTP, not for reporting.

When should you use RavenDB?

time to read 2 min | 317 words

This is a question a lot of people are asking me, and I keep giving them the same answer. “I am the one building RavenDB, your grandmother and all her cats deserve to have three licenses, each (maybe with 3 way replication as well).”

Kidding aside, it is a serious question, which I am going to try answering in this post. I just wanted to make sure that it is clear that there might be some bias from my perspective.

Having built NHibernate based applications for years, and having built RavenDB based applications from almost the point where it could start up on its own, I can tell you that there is quite a bit of difference between the two. Mostly because I intentionally designed RavenDB and the RavenDB client API to deal with problems and frustrations in the NHibernate / RDBMS model.

What we discovered is that the lack of mapping and the ability to store complex object graphs in a single document make for speedy development and high performance applications. We usually see RavenDB used in OLTP applications, or as a persistent view model store (usually for the performance critical pages).

Typically, it isn’t the only database in the project. In brown field projects, we usually see RavenDB brought in to serve as a persistent view model store for the critical pages; data is replicated to RavenDB from the main database and then read directly from RavenDB in order to serve the perf critical pages. In green field projects, we usually see RavenDB used as the primary application database, with most or all of the data residing inside RavenDB. In some cases, there is an additional reporting database as well.

So the quick answer, and the one we are following, is that RavenDB is eminently suitable for OLTP applications, and can be used with great success as a persistent view model cache.

When should you use NHibernate?

time to read 1 min | 78 words

If you are using a relational database, and you are going to do writes, you want to use NHibernate.

If all you are doing is reads, you don’t need NHibernate; you can use something like Massive instead, and it would probably be easier. But when you have reads & writes on a relational database, NHibernate is going to be a better choice.

That said, look for the next post about when you should be using a relational database.

Make a distinction: Errors vs. Alerts

time to read 3 min | 474 words

At several customer visits recently, I encountered a common problem: their production error handling strategy. One customer captured all of the production errors, pushed them to System Center Operations Manager (SCOM), and then sent those errors to the development team over email. Another customer didn’t really have a production error handling strategy at all.

The interesting thing is that in both cases, production errors were handled in exactly the same way: they were ignored until a user called and complained.


What do you mean, ignored? The first client obviously did the right thing; they captured and monitored the production errors, notifying the development team about each and every one of them.

Well, it is actually very simple. At the first client, I asked everyone to raise their hands if they receive the production error emails. About half the room raised their hands. Then I asked how many of them had set up a rule to move those emails out of their inbox.

Every single one of them had done that.

The major problem is that errors happen all the time. In the vast majority of cases, you don’t really care, and it will fix itself automatically. For example, an error such as a Transaction Deadlock Exception might happen a few times a day. There really isn’t much you can do about those errors (well, re-architecting the app might help, but that is out of scope for this post). Another might be a call to an external service that occasionally fails and already has a retry strategy in place.

Do you get the problem?

Getting notified about every single production error has immunized the team against them. Now going over the production errors is just a chore, and a fairly unpleasant one.

That is the major difference between errors and alerts. An error is just an exception or a problem that happened in production. Usually, those aren’t really important; they will sort themselves out. An ETL process that runs once an hour can fail a few times, and as long as it completes within a reasonable time frame, you don’t care.

Did you notice how often that statement is repeated? You don’t care.

When do you care?

  • When the ETL process has failed to complete three consecutive times.
  • When the external service that you call has been unresponsive for over 8 hours.
  • When a specific error is happening over 50 times an hour.
  • When an unknown error showed up in the logs more than twice in the last hour.

Each of those cases is one where human intervention is needed. And in most cases, those are going to be rare.

Errors are commonplace; they happen all the time, and no one really cares. Alerts are what you wake up at 2 AM for.
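The thresholds above are exactly the kind of thing that is easy to put in code. A sketch (the class, event, and threshold names are mine, not from SCOM or any other product) of turning a stream of errors into an alert only when one error type crosses 50 occurrences in an hour:

```csharp
using System;
using System.Collections.Generic;

public class AlertingErrorSink
{
    private readonly Dictionary<string, Queue<DateTime>> occurrences =
        new Dictionary<string, Queue<DateTime>>();

    // Raised only when an error type crosses the threshold; this is the
    // thing that should page someone at 2 AM.
    public event Action<string> Alert = delegate { };

    public void Record(string errorType)
    {
        Queue<DateTime> times;
        if (occurrences.TryGetValue(errorType, out times) == false)
            occurrences[errorType] = times = new Queue<DateTime>();

        var now = DateTime.UtcNow;
        times.Enqueue(now);

        // Forget occurrences older than an hour; plain errors just age out.
        while (times.Count > 0 && now - times.Peek() > TimeSpan.FromHours(1))
            times.Dequeue();

        if (times.Count > 50)
            Alert(errorType);
    }
}
```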

The required infrastructure frees you from infrastructure decisions

time to read 5 min | 871 words

One of the things that I try to do whenever I design a system is to have just enough infrastructure in place to make sure that I don’t have to make any infrastructure decisions.

What does this mean? It means that the code that actually does the interesting things, the code that actually provides business value, isn’t really aware of the context in which it runs. It can make assumptions about the state of the world in which it runs.

For example, let us take this controller. It inherits from RavenController, and it assumes that when it runs, it will have a documentSession instance primed & ready for it:

public ActionResult Validate(Guid key)
{
    var license = documentSession.Query<License>()
        .Where(x => x.Key == key)
        .FirstOrDefault();

    if (license == null)
        return HttpNotFound();

    var orderMessage = documentSession.Load<OrderMessage>(license.OrderId);

    switch (orderMessage.OrderType)
    {
        case "Perpetual":
            return Json(new { License = "Valid" });
    }

    return HttpNotFound();
}

What is important about that is that it isn’t actively doing something to get that session; it just assumes that it is there.

Why is that important? It is important because instead of going ahead and creating the session, we assume it is there; we are unaware of how it got there, who did it, etc. If we wanted to execute this code in a unit test, all we would need to do is plug a session in, and then execute the code.

But that is just part of this. Let us look at a more complex scenario, the processing of an order:

public class ProcessOrderMessageTask : BackgroundTask
{
    public string OrderId { get; set; }

    public override void Execute()
    {
        var orderMessage = documentSession.Load<OrderMessage>(OrderId);

        var license = new License
        {
            Key = Guid.NewGuid(),
            OrderId = orderMessage.Id
        };
        documentSession.Store(license); // persist the newly created license

        TaskExecuter.ExecuteLater(new SendEmailTask
        {
            From = "orders@tekpub.com",
            To = orderMessage.Email,
            Subject = "Congrats on your new tekpub thingie",
            Template = "NewOrder",
            ViewContext = new
            {
                LicenseKey = license.Key
            }
        });

        orderMessage.Handled = true;
    }

    public override string ToString()
    {
        return string.Format("OrderId: {0}", OrderId);
    }
}

There are several things that are happening here:

  • Again, we are assuming that when we run, we are actually going to run with the documentSession set to a valid value.
  • Note that we have the notion of TaskExecuter.ExecuteLater.

That is actually quite an important aspect of infrastructure ignorance.

ExecuteLater is just that: a promise to execute something later. It doesn’t specify how or when. In fact, it takes this a bit farther than that: the task will run only if the current operation completes successfully. So if we fail to commit the transaction, the SendEmailTask won’t get executed. Those sorts of details can only happen when we let go of trying to control everything and let the infrastructure support us.
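A minimal sketch of how such a promise can work. This is my illustration of the idea, not the actual Rhino Service Bus implementation: tasks accumulate during the unit of work and are executed only after the commit succeeds.

```csharp
using System;
using System.Collections.Generic;

public static class TaskExecuter
{
    [ThreadStatic]
    private static List<BackgroundTask> pending;

    // Just records the promise; nothing runs yet.
    public static void ExecuteLater(BackgroundTask task)
    {
        if (pending == null)
            pending = new List<BackgroundTask>();
        pending.Add(task);
    }

    // Called by the infrastructure only after the transaction commits.
    public static void StartExecuting()
    {
        var tasks = pending;
        pending = null;
        if (tasks == null)
            return;
        foreach (var task in tasks)
            task.Execute();
    }

    // Called on failure; the queued tasks are simply dropped.
    public static void Discard()
    {
        pending = null;
    }
}
```

Because the execution happens behind this facade, the infrastructure is free to run the tasks on a background thread, or ship them to another process entirely.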

During the Full Throttle episode in which this code was written, we moved the task execution from a background thread to a different process, and nothing in the code had to change, because the only thing we had to do was set up the environment that the code expected.

I really like this sort of behavior, because it frees the code from all the nasty infrastructure concerns, and at the same time it gives you an incredibly powerful platform to work on.

And just for fun, this results in a system that you can play with with great ease. You don’t have to worry about your choices; you can modify them at any time, because you don’t really care how those things are done.

