Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Raven Situational Awareness

time to read 4 min | 684 words

There is a whole set of features that require collaboration from a set of servers. For example, when talking about auto scale scenarios, you really want the servers to figure things out on their own, without needing administrators to hold their hands and murmur sweet nothings at 3 AM.

We needed this feature in Raven DB, Raven MQ and probably in Raven FS, so I sat down and thought about what is actually needed and whether I could package that in a re-usable form. I have been on a roll for the last few days, and something that I estimated would take a week or two took me about six hours, all told.

At any rate, I realized that the important parts of this feature set are the ability to detect siblings on the same network, the ability to detect failure of those siblings, and the ability to dynamically select the master node. The code is available here: https://github.com/hibernating-rhinos/Raven.SituationalAwareness under the AGPL license. If you want to use this code commercially, please contact me for commercial licensing arrangements.

Let us see what is actually involved here:

var presence = new Presence("clusters/commerce", new Dictionary<string, string>
{
    { "RavenDB-Endpoint", new UriBuilder("http", Environment.MachineName, 8080).Uri.ToString() }
}, TimeSpan.FromSeconds(3));
presence.TopologyChanged += (sender, nodeMetadata) =>
{
    switch (nodeMetadata.ChangeType)
    {
        case TopologyChangeType.MasterSelected:
            Console.WriteLine("Master selected {0}", nodeMetadata.Uri);
            break;
        case TopologyChangeType.Discovered:
            Console.WriteLine("Found {0}", nodeMetadata.Uri);
            break;
        case TopologyChangeType.Gone:
            Console.WriteLine("Oh no, {0} is gone!", nodeMetadata.Uri);
            break;
        default:
            throw new ArgumentOutOfRangeException();
    }
};

As you can see, we are talking about a single class that is exposed to your code. You need to provide the cluster name, which allows us to run multiple clusters on the same network without conflicts. (For example, in the code above we have a set of servers for the commerce service, another for the billing service, etc.) Each node also exposes metadata to the entire cluster; in the code above, we share the URL of our RavenDB endpoint. The TimeSpan parameter determines the heartbeat frequency for the cluster (how often it will check for failing nodes).
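
To make the cluster-name isolation concrete, here is a minimal sketch of what a second service on the same network might look like. It simply mirrors the constructor call from the example above with a different cluster name; the "clusters/billing" name and the port are purely illustrative.

// A hypothetical second cluster on the same network. The different cluster
// name ("clusters/billing") keeps its discovery traffic and master election
// separate from the commerce cluster shown above.
var billingPresence = new Presence("clusters/billing", new Dictionary<string, string>
{
    { "RavenDB-Endpoint", new UriBuilder("http", Environment.MachineName, 8081).Uri.ToString() }
}, TimeSpan.FromSeconds(3));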

We have a single event that we can subscribe to, which lets us know about changes in the system topology. Discovered and Gone are pretty self-explanatory, I think, but MasterSelected is more interesting.

After automatically discovering all the siblings on the network, Raven Situational Awareness will use the Paxos algorithm to decide who should be the master. The MasterSelected event is raised when a quorum of the nodes selects a master. You can then proceed with your own logic based on that. If the master fails, the nodes will convene again and the quorum will select a new master.
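
For example, a node might use the MasterSelected event to decide whether it should take on master-only work. The following is only a sketch on top of the API shown above; the selfUri value and the isMaster flag are assumptions of mine, not part of the library.

// Sketch: react to master election. selfUri is the URI this node advertised
// in its metadata; isMaster is a flag owned by the hosting application.
var selfUri = new UriBuilder("http", Environment.MachineName, 8080).Uri;
var isMaster = false;

presence.TopologyChanged += (sender, nodeMetadata) =>
{
    if (nodeMetadata.ChangeType != TopologyChangeType.MasterSelected)
        return;

    // A quorum of the nodes agreed on this master.
    isMaster = nodeMetadata.Uri.ToString() == selfUri.ToString();
    if (isMaster)
        Console.WriteLine("This node was selected as master");
    else
        Console.WriteLine("Master is {0}", nodeMetadata.Uri);
};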

With the network topology detection and the master selection out of the way (and all of that with due consideration for failure conditions), the task of actually implementing a distributed server system just became significantly easier.


Roja Buck

How is the split-brain ( http://en.wikipedia.org/wiki/Split-brain_(Computing) ) problem dealt with by Raven.SituationalAwareness?

Is the implementation of Paxos protocol internal to the framework or is it based on an external dependency?

Ayende Rahien


Nodes will be notified about the split and about the joining, masters will be assigned for each split, and when the split is merged, the masters will be merged as well.

What they do about it is outside the scope of RSA and in the realm of the actual server running it

Paxos is implemented internally, solely for the purpose of master detection
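
In practice, that means a consumer simply tracks the most recently announced master and expects it to change when a partition forms or heals. A minimal sketch along those lines, assuming nodeMetadata.Uri is a System.Uri as in the example in the post (the currentMaster field is mine, not part of the library):

// Sketch: tolerate master changes across network splits and merges.
// During a split each partition elects its own master; once the partitions
// rejoin, a new MasterSelected event announces the agreed-upon master.
Uri currentMaster = null;

presence.TopologyChanged += (sender, nodeMetadata) =>
{
    switch (nodeMetadata.ChangeType)
    {
        case TopologyChangeType.MasterSelected:
            currentMaster = nodeMetadata.Uri;  // may change again after a merge
            break;
        case TopologyChangeType.Gone:
            if (Equals(nodeMetadata.Uri, currentMaster))
                currentMaster = null;          // wait for the next election
            break;
    }
};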


There is a threading bug here: the timer can fire on multiple threads at the same time (for example, if your callback takes more than 3 seconds to run). Your callback must be thread-safe, probably by just ignoring concurrent timer events. Also, after disposing the timer, callbacks can still be delivered because they may already have been queued up on the thread pool. I ran into this once and solved it with the following timer wrapper class, which I license to you without restrictions:

public class StoppableSequentialTimer : IDisposable
{
    readonly Action callback;
    readonly System.Threading.Timer timer;
    bool disposed;
    bool running;

    public StoppableSequentialTimer(Action callback, TimeSpan dueTime, TimeSpan interval)
    {
        if (callback == null) throw new ArgumentNullException("callback");
        this.callback = callback;
        timer = new System.Threading.Timer(_ => TimerCallback(), null, dueTime, interval);
    }

    void TimerCallback()
    {
        lock (timer)
        {
            // Skip this tick if we were disposed or the previous tick is still running.
            if (disposed || running) return;
            running = true;
        }
        try
        {
            callback();
        }
        finally
        {
            lock (timer)
            {
                running = false;
            }
        }
    }

    public void Dispose()
    {
        lock (timer)
        {
            disposed = true;
        }
        // Release the underlying timer; any callbacks already queued on the
        // thread pool will see the disposed flag and return immediately.
        timer.Dispose();
    }
}

The callback executes non-concurrently and stops firing once the Timer is disposed. After disposal, at most a single superfluous callback run can happen.
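
For completeness, here is one possible way to use the wrapper for a heartbeat like the one described in the post; SendHeartbeat is just a placeholder for whatever work the callback performs:

// Sketch: a heartbeat that never overlaps itself and stops cleanly.
var heartbeat = new StoppableSequentialTimer(
    () => SendHeartbeat(),          // placeholder for the real heartbeat work
    TimeSpan.Zero,                  // start immediately
    TimeSpan.FromSeconds(3));       // then fire every 3 seconds

// ... later, during shutdown:
heartbeat.Dispose();                // at most one more callback run can occur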

Ayende Rahien


Where is the bug? Are you talking about the usage of the Timer in there?

There is no bug here; we might be running concurrently, but the code is safe for multi-threaded access.


Is topologyState accessed thread-safely? It is being accessed and modified in the callback concurrently. I must say that I did not completely digest the code, but this "smelled" of a threading bug.

Ayende Rahien

topologyState is a ConcurrentDictionary.
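
For readers following along: a ConcurrentDictionary makes each individual read and write atomic, so two overlapping heartbeat callbacks cannot corrupt the map. A toy illustration, not the actual topologyState definition (the key and value types here are made up):

// Toy illustration: concurrent mutation of a ConcurrentDictionary is safe
// even if two timer callbacks run at the same time.
var topology = new System.Collections.Concurrent.ConcurrentDictionary<Uri, DateTime>();
var discoveredUri = new Uri("http://node-1:8080");
var deadUri = new Uri("http://node-2:8080");

// One callback records a newly discovered (or refreshed) node...
topology.AddOrUpdate(discoveredUri, DateTime.UtcNow, (uri, lastSeen) => DateTime.UtcNow);

// ...while another concurrently removes a node that stopped responding.
DateTime removed;
topology.TryRemove(deadUri, out removed);

Individual operations like these are atomic; only compound check-then-act sequences would need additional coordination.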


I cannot find anything concrete that is wrong with the code so it was probably a false alarm.


I know it's a breaking change, but you have a typo in the namespace and the project files:

Raven.SituationaAwareness is missing an "l".

Alex Simkin


This is a standard trick. You cannot copyright common words but you can copyright misspelled ones :)

Jon Wingfield

Google uses Paxos in its BigTable implementation. I came across this not via Wikipedia, but by reading the BigTable paper. Pretty interesting.


Hi Ayende, could you share with us how you tested it? I find it very interesting.

Daniel Marbach

Hi Ayende,

Why are you using UDP with WCF? Isn't that a typical scenario for Rhino ESB? Discovery, heartbeat, etc. This all screams Rhino ESB ;)


Ayende Rahien


Discovery & heartbeat have nothing to do with Rhino ESB

Daniel Marbach

I thought this scenario would be ideal to integrate with Rhino ESB and messaging...
