You might have noticed that there were a few errors on the blog recently.
That is related to a testing strategy that we employ for RavenDB.
In particular, part of our test scenario is to shove a RavenDB build into our own production system, to see how it works under real workloads.
Our production systems run entirely on RavenDB, and we have been playing with all sorts of configuration and deployment options recently. The general idea is to give us a good indication of RavenDB's performance and make sure that we don't have any performance regressions.
Here is an example taken from our tracking system, for this blog:
You can see that we had a few periods with longer than usual response times. The actual reason was that we had some code throwing a tremendous amount of work at RavenDB, which exhibited the noisy neighbor syndrome (that is, the blog itself is behaving fine, but the machine it is on is very busy). That gave us an indication of a possible optimization and some better internal metrics.
At any rate, the downside of living on the bleeding edge in production is that we sometimes get a lemon build.
That is the cost of dogfooding; sometimes you need to clean up the results.