Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Message passing, performance–take 2

time to read 2 min | 249 words

In my previous post, I did some rough “benchmarks” to see how message passing options behave. I got some great comments, and I thought I’d expand on that.

The baseline for this was a blocking queue, and using that we managed to get:

145,271,000 msgs in 00:00:10.4597977 for 13,888,510 ops/sec
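The original benchmark was C#; since the Disruptor (and much of this comparison) originated on the JVM, here is a minimal Java sketch of the same single-producer/single-consumer blocking-queue measurement. The class and method names, queue capacity, and the shortened run time are my own assumptions, not the post's actual code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class BlockingQueueBench {
    // Run one producer and one consumer against a bounded blocking queue
    // for the given duration; return the number of messages consumed.
    static long run(long durationMs) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(durationMs);
        AtomicLong consumed = new AtomicLong();

        Thread producer = new Thread(() -> {
            try {
                while (System.nanoTime() < deadline) {
                    queue.offer("msg", 1, TimeUnit.MILLISECONDS);
                }
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                while (System.nanoTime() < deadline) {
                    if (queue.poll(1, TimeUnit.MILLISECONDS) != null) {
                        consumed.incrementAndGet();
                    }
                }
            } catch (InterruptedException ignored) { }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return consumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000) + " msgs in ~1 second");
    }
}
```

The bounded queue means the producer blocks when the consumer falls behind, which is exactly the back-pressure behavior being measured.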

Using the async BufferBlock, we got:

43,268,149 msgs in 00:00:10 for 4,326,815 ops/sec.

Using LMAX Disruptor we got a disappointing:

29,791,996 msgs in 00:00:10.0003334 for 2,979,100 ops/sec

However, it was pointed out that I can significantly improve this if I changed the code to be:

var disruptor = new Disruptor.Dsl.Disruptor<Holder>(() => new Holder(), new SingleThreadedClaimStrategy(256), new YieldingWaitStrategy(), TaskScheduler.Default);

After which we get a very nice:

141,501,999 msgs in 00:00:10.0000051 for 14,150,193 ops/sec
Another request I got was to test this with a concurrent queue, which is actually what it is meant for. The code is the same as for the blocking queue; we just changed Bus<string> to ConcurrentQueue<string>.

Using that, we got:
170,726,000 msgs in 00:00:10.0000042 for 17,072,593 ops/sec
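A Java sketch of that same swap, replacing the blocking queue with a lock-free one (java.util.concurrent's ConcurrentLinkedQueue standing in for .NET's ConcurrentQueue<string>; names and timing are assumptions, not the post's code). The consumer now spins on poll() instead of blocking:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ConcurrentQueueBench {
    // Same shape as the blocking-queue benchmark, but with an unbounded
    // lock-free queue: no blocking, no back-pressure, just spinning.
    static long run(long durationMs) throws InterruptedException {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(durationMs);
        AtomicLong consumed = new AtomicLong();

        Thread producer = new Thread(() -> {
            while (System.nanoTime() < deadline) {
                queue.offer("msg");
            }
        });
        Thread consumer = new Thread(() -> {
            while (System.nanoTime() < deadline) {
                if (queue.poll() != null) {
                    consumed.incrementAndGet();
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return consumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000) + " msgs in ~1 second");
    }
}
```

Note the trade-off: the unbounded queue avoids lock contention, but with a fast producer and a slow consumer it can grow without limit, which the blocking queue's bounded capacity prevents.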
And yes, this is pretty much just because I could. All of those methods deliver throughput significantly higher than anything I actually need.



And this was quite Disruptive, not only to the Disruptor


Yesterday, I was curious about the Disruptor comment as well. But from a quick peek through the code, I could not see how one could get thread safety using the SingleThreadedClaimStrategy and YieldingWaitStrategy. Digging through the code made it look to me as if these were doing nothing to allow multiple threads to post/enqueue safely. Am I wrong about this? And if so, can someone please explain?

Jordan Terrell

I'm pretty sure the area where you would see the greatest benefit from Disruptor was when you had multiple consumers with a dependency network between them [1][2]. Typically you would use multiple queues in the network, whereas you would only use one Disruptor ring-buffer. I'd like to see performance tests that test that scenario, which comes up more often in message passing scenarios.

1: http://mechanitis.blogspot.com/2011/07/dissecting-disruptor-wiring-up.html 2: http://martinfowler.com/articles/lmax.html
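The multi-queue wiring Jordan contrasts with a single ring buffer can be sketched with plain blocking queues: one queue per edge of the dependency graph, a producer fanning out to two parallel stages, and a join stage fed by both. This is a hypothetical illustration (all names are mine), not code from the post or the linked articles:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ForkJoinQueues {
    static final String POISON = "__done__"; // sentinel to stop each stage

    // Fan-out/fan-in with one queue per edge of the dependency graph:
    // producer -> [stageA, stageB] -> join. A Disruptor would instead use
    // a single ring buffer wired as handleEventsWith(a, b).then(join).
    static long run(int messages) throws InterruptedException {
        BlockingQueue<String> toA = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> toB = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> toJoin = new ArrayBlockingQueue<>(2048);
        long[] joined = new long[1];

        Thread stageA = worker(toA, toJoin);
        Thread stageB = worker(toB, toJoin);
        Thread join = new Thread(() -> {
            try {
                int poisons = 0;
                while (poisons < 2) { // wait for both upstream stages to finish
                    String msg = toJoin.take();
                    if (msg.equals(POISON)) poisons++;
                    else joined[0]++;
                }
            } catch (InterruptedException ignored) { }
        });

        stageA.start(); stageB.start(); join.start();
        for (int i = 0; i < messages; i++) { // producer fans out to both stages
            toA.put("msg-" + i);
            toB.put("msg-" + i);
        }
        toA.put(POISON);
        toB.put(POISON);
        stageA.join(); stageB.join(); join.join();
        return joined[0]; // each message is seen once per stage: 2 * messages
    }

    static Thread worker(BlockingQueue<String> in, BlockingQueue<String> out) {
        return new Thread(() -> {
            try {
                while (true) {
                    String msg = in.take();
                    out.put(msg); // pass through (real stages would do work here)
                    if (msg.equals(POISON)) return;
                }
            } catch (InterruptedException ignored) { }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10_000)); // prints 20000
    }
}
```

Every hop here is a separate queue handoff with its own synchronization, which is exactly the overhead the single shared ring buffer is designed to avoid.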

Romain Verdier

@tyler: Yes, as the name suggests, SingleThreadedClaimStrategy is usable when there is a single thread acquiring entries from the ring buffer, which was the benchmarked scenario. Other claim strategies must be used when you acquire entries from multiple threads. Note: according to the Disruptor terminology, "acquiring entries" means acquiring entries to publish new values to the ring buffer.

Anyway, this kind of benchmark (single publisher/single consumer, monitor-based synchronization with no real contention, no latency measurement, etc.) is only meaningful if you just want to get a vague order of magnitude of what you can achieve in terms of throughput between two threads, relative to your use case. I think that's the point of these posts.
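The distinction Romain draws can be illustrated without the Disruptor library at all: a single-threaded claim strategy can hand out sequence numbers with a plain, unsynchronized counter, while a multi-producer strategy needs an atomic fetch-and-add. This is a simplified sketch of the idea only; the real claim strategies also coordinate with the ring buffer's cursor and consumer sequences:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ClaimStrategies {
    // Safe only when a single thread publishes: a plain increment with
    // no synchronization at all. Two publishers could claim the same slot.
    static class SingleThreadedClaim {
        private long next = -1;
        long claim() { return ++next; }
    }

    // Safe for multiple publishers: incrementAndGet is an atomic
    // fetch-and-add, so every thread gets a distinct sequence number.
    static class MultiThreadedClaim {
        private final AtomicLong next = new AtomicLong(-1);
        long claim() { return next.incrementAndGet(); }
    }

    public static void main(String[] args) throws InterruptedException {
        MultiThreadedClaim claim = new MultiThreadedClaim();
        Runnable publisher = () -> {
            for (int i = 0; i < 100_000; i++) claim.claim();
        };
        Thread t1 = new Thread(publisher);
        Thread t2 = new Thread(publisher);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // 200,000 claims from two threads -> highest sequence so far is 199,999,
        // so the next claim returns 200,000.
        System.out.println(claim.claim()); // prints 200000
    }
}
```

Run the same two-publisher loop against SingleThreadedClaim and the lost updates from the unsynchronized `++next` are exactly the unsafety Tyler spotted in the benchmark configuration.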


Looks like you are implementing RAFT in .NET :)


@Romain, thanks for the clarification. I misread Ayende's original code and thought, incorrectly, that he had two threads publishing. Somehow I missed the fact that one was posting and one was receiving. I must learn to read more carefully next time.

Jordan Terrell

Set up a test harness to try out different scenarios. More scenarios to come...



@Jordan, not only dependent consumers but also concurrent ones where both of them are read-only. @Romain, currently I'm not aware of a public Disruptor port for .NET which includes the newer MultiProducerSequencer introduced in Java Disruptor 3.0. It is hugely beneficial, and speeds up the multi-producer scenario a lot.

Jordan Terrell

@Scooletz Agreed, and yes, the Disruptor port for .NET is in bad shape.

Jordan Terrell

@ayende I'm pretty sure the NuGet package is out of date and is a debug build. I'm using the 2.10 release build of the Disruptor port and it is beating even the concurrent queue [1]. Again, it's rare that anyone needs to process millions of messages a second, but since we are comparing raw performance, I think the Disruptor comes out on top.

1: https://github.com/iSynaptic/MessagingShootout

Jordan Terrell

As expected, when you implement a fork-join pattern with the Disruptor, it clearly takes the least time (by a factor of 2, from what I'm seeing).



Comments have been closed on this topic.

