Ayende @ Rahien


Lambda methods and implicit context

The C# compiler is lazy, which is usually a very good thing, but that can also give you some issues. We recently tracked down a memory usage issue to code that looked roughly like this.

var stats = new PerformatStats
{
    Size = largeData.Length
};
stats.OnCompletion += () => this.RecordCompletion(stats);

Write(stats, o =>
{
    var sp = new Stopwatch();
    foreach (var item in largeData)
    {
        sp.Restart();
        // do something with this
        stats.RecordOperationDuration(sp);
    }
});

On the surface, this looks good. We are only using largeData for a short while, right?

But behind the scenes, something evil lurks. Here is what this is actually translated to by the compiler:

__DisplayClass3 cDisplayClass3_1 = new __DisplayClass3
{
    __this = this,
    largeData = largeData
};
cDisplayClass3_1.stats = new PerformatStats { Size = cDisplayClass3_1.largeData.Length };

cDisplayClass3_1.stats.OnCompletion += new Action(cDisplayClass3_1, __LambdaMethod1__);

Write(cDisplayClass3_1.stats, new Action(cDisplayClass3_1, __LambdaMethod2__));

You need to pay special attention to what is going on here. We need to maintain the local state of the variables, so the compiler lifts the local variables into an object (called __DisplayClass3).

Creating spurious objects is something that we want to avoid, so the C# compiler says: “Oh, I’ve got two lambdas in this call that need access to the local variables. Instead of creating two objects, I can create just a single one and share it among both calls, thereby saving some space”.

Unfortunately for us, there is a slight issue here. The lifetime of the stats object is pretty long (we use it to report stats). But we also hold a reference to the completion delegate (we use that to report on stuff later on). Because the completion delegate holds the same lifted parameters object, and that object holds the large data, we ended up holding a lot of stuff in memory far beyond the time it was useful.

The annoying thing is that it was pretty hard to figure out, because we were looking at the source code, and the lambda that we know is likely to be long running doesn’t look like it is going to hold a reference to the largeData object.

Ouch.
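One way to avoid this is to make sure that the long-lived lambda is created in a scope that never sees largeData, for example by moving its registration into a separate method. Here is a sketch of the idea, reusing the hypothetical types from the example above:

private void ProcessLargeData(byte[] largeData)
{
    var stats = new PerformatStats
    {
        Size = largeData.Length
    };
    RegisterCompletion(stats); // the completion delegate is created where largeData is not in scope

    Write(stats, o =>
    {
        var sp = new Stopwatch();
        foreach (var item in largeData)
        {
            sp.Restart();
            // do something with this
            stats.RecordOperationDuration(sp);
        }
    });
}

private void RegisterCompletion(PerformatStats stats)
{
    // This lambda gets its own display class, capturing only 'this' and 'stats',
    // so the long-lived completion delegate no longer keeps largeData alive.
    stats.OnCompletion += () => this.RecordCompletion(stats);
}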

Linux, Debts and Out Of Memory Killer

Imagine that you go to the bank and ask for a $100,000 mortgage. The nice guy in the bank agrees to lend you the money, and since you need to pay that in 5 installments, you take $15,000 to the contractor and leave the rest in the bank until it is needed. The bank is doing brisk business, and promises a lot of customers that they can get their mortgage there. Since most of the mortgages are also taken in installments, the bank never actually has enough money to hand over to all the borrowers. But it makes do.

Until one dark day when you come to the bank and ask for the rest of the money, because it is time to install the kitchen cabinets, and you need to pay for that. The nice man in the bank tells you to wait a bit, and goes to see if they have any money. At this point, it would be embarrassing to tell you that they don’t have any money to give you, because they overcommitted themselves. So the nice man from the bank murders you and buries your body in the desert, to avoid you complaining that you didn’t get the money that you were promised. Actually, the nice man might go ahead and kill someone else (robbing them in the process), and then give you their money. You go home happy to your blood-stained kitchen cabinets.

That is how memory management works in Linux.

After this dramatic opening, let us get down to what is really going on. Linux has a major problem. Its process model means that it is stuck up a tree and the only way down is via free fall. Whenever a process wants to create another process, the standard method in Linux is to call fork() and then call execv() to execute the new binary. The problem here is what fork() does. It needs to copy the entire process state to the new process. That includes all memory, handles, registers, etc.

Let us assume that we have a process that allocated 1GB of memory for reading and writing, and then called fork(). The way things are set up, it is pretty cheap to create the new process; all we need to do is duplicate the kernel data structures and we are done. However, what happens to the memory that the process allocated? The fork() call requires that both processes will have access to that memory, and also that both of them may modify it. That means that we have a copy-on-write situation. Whenever one of the processes modifies the memory, it forces the OS to copy that piece of memory to another physical memory location and remap the virtual addresses.

This allows the developer to do some really cool stuff. Redis implemented its backup strategy via the fork() call. By forking and then dumping the in-memory process state to disk, it can get a consistent snapshot of the system with almost no code. It is the OS that is responsible for maintaining that invariant.

Speaking of invariants, it also means that there is absolutely no way that Linux can manage memory properly. If we have 2 GB of RAM on the machine, and we have a 1GB process that fork()-ed, what is going to happen? Well, it was promised 1 GB of RAM, and it got that. And it was also promised by fork() that both processes will be able to modify the full 1GB of RAM. If we also have some other processes taking memory (and assuming no swap for the moment), that pretty much means that someone is going to end up holding the dirty end of the stick.

Now, Linux has a configuration option that would prevent this (vm.overcommit_memory = 2, along with the overcommit ratio, but that isn’t really important; I’m including this here for the nitpickers, and yes, I’m aware that you can set oom_adj = –17 to protect yourself from this issue, not the point). This tells Linux that it shouldn’t overcommit. In such cases, it would mean that the fork() call would fail, and you’ll be left with an effectively crippled system. So, we have the potential for a broken invariant. What is going to happen now?

Well, Linux promised you memory, and after exhausting all of the physical memory, it will start paging to the swap file. But that can be exhausted too. That is when the Out Of Memory Killer gets to play, and it takes an axe and starts choosing a likely candidate to be mercilessly murdered. The “nice” thing about this is that there is no control over that, and you might be a perfectly well-behaved process that the OOM killer just doesn’t like this Monday, so buh-bye!

Looking around, it seems that we aren’t the only ones that have run head first into this issue. The Oracle recommendation is to set things up to panic and reboot the entire machine when this happens, and that seems… unproductive.

The problem is that as a database, we aren’t really in control of how much we allocate, and we rely on the system to tell us when we do too much. Linux has no facility to do things like warn applications that memory is low, or even let us know by refusing to allocate more memory. Both are things that we already support, and they would be very helpful.

That is quite annoying.


Picture of the day, Rhino & Raven

We have an amateur photographer in the office, who likes to arrange the rhinos in the office (now exceeding 100, I think) in various ways.

I really like this picture.


Buffer Managers, production code and alternative implementations

We are porting RavenDB to Linux, and as such, we run into a lot of… interesting issues. Today we ran into a really annoying one.

We make use of the BufferManager class inside RavenDB to reduce memory allocations. On the .Net side of things, everything works just fine, and we never really had any issues with it.

On the Mono side of things, we started getting all sorts of weird errors. From ArgumentOutOfRangeException to NullReferenceException to just plain weird stuff. That was the time to dig in and look into what is going on.

On the .NET side of things, the BufferManager implementation is based on a selection criterion between large (more than 85KB) and small buffers. For large buffers, there is a single large pool that is shared among all the users of the pool. For small buffers, the BufferManager uses a pool per active thread as well as a global pool, etc. In fact, looking at the code, we see that it is really nice, and a lot of effort has been made to harden it and make it work nicely for many scenarios.

The Mono implementation, on the other hand, decides to blithely discard the API contract by ignoring the maximum buffer pool size, apparently because “no user code is designed to cope with this”. Considering the fact that RavenDB is certainly dealing with that, I’m somewhat insulted, but it seems par for the course for Linux, where “memory is infinite until we kill you”* is the way to go.

But what is far worse is that this class is absolutely not thread-safe. That was a lot of fun to discover. Considering that this piece of code is pretty central to the entire WCF stack, I’m not really sure how that worked. We ended up writing our own BufferManager implementation for Mono, to avoid those issues.
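To give an idea of what such a replacement involves, here is a minimal sketch of a thread-safe buffer pool (illustrative only; the class and method names are made up, and this is not the actual RavenDB implementation):

using System.Collections.Concurrent;

public class SimpleBufferManager
{
    // one stack of reusable buffers per (power of two) size bucket
    private readonly ConcurrentDictionary<int, ConcurrentStack<byte[]>> _pools =
        new ConcurrentDictionary<int, ConcurrentStack<byte[]>>();
    private readonly int _maxBufferSize;

    public SimpleBufferManager(int maxBufferSize)
    {
        _maxBufferSize = maxBufferSize;
    }

    public byte[] TakeBuffer(int size)
    {
        if (size > _maxBufferSize)
            return new byte[size]; // too large to pool, let the GC deal with it

        int bucket = RoundUpToPowerOfTwo(size);
        var pool = _pools.GetOrAdd(bucket, _ => new ConcurrentStack<byte[]>());
        byte[] buffer;
        return pool.TryPop(out buffer) ? buffer : new byte[bucket];
    }

    public void ReturnBuffer(byte[] buffer)
    {
        if (buffer == null || buffer.Length > _maxBufferSize)
            return; // buffers larger than the configured maximum are never pooled

        var pool = _pools.GetOrAdd(buffer.Length, _ => new ConcurrentStack<byte[]>());
        pool.Push(buffer);
    }

    private static int RoundUpToPowerOfTwo(int size)
    {
        int result = 16;
        while (result < size)
            result <<= 1;
        return result;
    }
}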

* Yes, somewhat bitter here, I’ll admit. The next post will discuss this in detail.

Long running async and memory fragmentation

We have been working on performance a lot lately, but performance isn’t just an issue of how fast you can do something, it is also an issue of how many resources we use while doing it. One of the things we noticed was that we were using more memory than we would like, even after we had freed the memory we were using. Digging into the memory usage, we found that the problem was that we were suffering from fragmentation inside the managed heap.

More to the point, this isn’t large object heap fragmentation, but actually fragmentation in the standard heap. The underlying reason is that we are issuing a lot of outstanding async I/O requests, especially to serve things like the Changes() API, waiting for incoming HTTP requests, etc.

Here is what this looks like inside dotProfiler.

image

As you can see, we are actually using almost no memory, but heap fragmentation is killing us in terms of memory usage.

Looking deeper, we see:

image

We suspect that the issue is that we have pinned instances that we sent to async I/O, and that matches what we have found elsewhere about this issue, but we aren’t really sure how to deal with it.

Ideas are more than welcome.
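That said, one mitigation that keeps coming up when we research this is to hand the buffers used for async I/O out of a single large slab, so the pinned objects are co-located instead of being scattered all over the heap. A rough sketch of that idea (not something we have settled on, and the names are made up):

using System;
using System.Collections.Concurrent;

public class PinnedBufferPool
{
    private readonly ConcurrentStack<ArraySegment<byte>> _segments =
        new ConcurrentStack<ArraySegment<byte>>();

    public PinnedBufferPool(int segmentSize, int segmentCount)
    {
        // One big allocation; if it is over the 85,000 byte threshold it lives on the
        // Large Object Heap, so pinning segments of it does not punch holes in the
        // generational heaps that the async I/O buffers would otherwise fragment.
        var slab = new byte[segmentSize * segmentCount];
        for (int i = 0; i < segmentCount; i++)
            _segments.Push(new ArraySegment<byte>(slab, i * segmentSize, segmentSize));
    }

    public ArraySegment<byte> Rent()
    {
        ArraySegment<byte> segment;
        if (_segments.TryPop(out segment))
            return segment;
        throw new InvalidOperationException("The pinned buffer pool is exhausted");
    }

    public void Return(ArraySegment<byte> segment)
    {
        _segments.Push(segment);
    }
}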

.NET Packaging mess

In the past few years, we had:

  • .NET Full
  • .NET Micro
  • .NET Client Profile
  • .NET Silverlight
  • .NET Portable Class Library
  • .NET WinRT
  • Core CLR
  • Core CLR (Cloud Optimized)*
  • MessingWithYa CLR

* Can’t care enough to figure out if this is the same as the previous one or not.

In each of those cases, they offered similar, but not identical, APIs and options. That is completely ignoring the versioning side of things, where we have .NET 2.0 (1.0 finally died a while ago), .NET 3.5, .NET 4.0 and .NET 4.5. I don’t think that something can be done about versioning, but the packaging issue is painful.

Here is a small example why:

image

In each case, we need to subtly tweak the system to accommodate the new packaging option. This is pure additional cost to the system, with zero net benefit. Each time that we have to do that, we add a whole new dimension to the testing and support matrix, leaving aside the fact that the complexity of the solution is increasing.

I wouldn’t mind it so much, if it weren’t for the fact that a lot of those feel like effectively drive-bys. Silverlight took a lot of effort, and it is dead. WinRT took a lot of effort, and it is effectively dead.

This adds a real cost in time and effort, and it is hurting the platform as a whole.

Now users are running into issues with the Core CLR not supporting stuff that we use. So we need to rip out MEF from some of our code, and implement it ourselves just to get things in the same place as before.

Excerpts from the RavenDB Performance team report: Optimizing Compare – The circle of life (a post-mortem)

Note, this post was written by Federico.

I want first to give special thanks to Thomas, Tobi, Alex and all the others that have made so many interesting comments and coded alternative approaches. It is a great thing to have so many people looking over your work and pointing out interesting facts that in the heat of the battle with the code we could easily overlook.

It has been just three weeks since I wrote this series of posts, and suddenly three weeks seems like a lot of time. There have been very interesting discussions regarding SSE, optimization, different approaches to optimizing specific issues, and lowering measurement errors, which I was intending to write down in a follow-up post. However, there was an earth-shattering event this week that just made the post I wanted to write a moot exercise.

Let’s first recap how we ended up optimizing compares.

At first we were using memcmp via P/Invoke. Everything was great for a while, until we eventually hit the wall and needed all the extra juice we could get. Back in mid-2014 we performed the first optimization rounds, where for the first time we introduced a fully managed, pointer-based solution. That solution served us well, until we started to optimize other paths and really stress RavenDB 3.0.

In the second round, which eventually spawned this series, we introduced branch tables, bandwidth optimizations, and we had to resort to every trick in the book to convince the JIT to write pretty specific assembler code.

And here we are. Three days ago, on the 3rd of February, Microsoft released CoreCLR for everyone to dig into. For everyone interested in performance it is a one-in-a-million opportunity. Being able to see how the JIT works, and how high-performance, battle-tested code is written (without looking at it in assembler), is quite an experience.

In there, specifically in the guts of the mscorlib source, we noticed a couple of interesting comments, in particular one that said:

// The attributes on this method are chosen for best JIT performance.

// Please do not edit unless intentional.

Needless to say, that caught our attention. So we did what we usually do: just go and try it.

The initial results were “interesting” to say the least. Instantly we could achieve faster speed-ups for our copy routines (more on that in a future series; now I feel lucky I didn’t write more than the introduction). In fact, the results were “so good” that we decided to give it a try for compare; if we could at least use the SSE optimized native routine for big memory compares, we would be in a great spot.

The result of our new tests? Well, we were in for a big surprise.

We had the most optimized routine all along. The code in question:

[DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[SuppressUnmanagedCodeSecurity]
[SecurityCritical]
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
public static extern int memcmp(byte* b1, byte* b2, int count);

We just needed to be a bit more permissive, aka disable P/Invoke security checks.

From the MSDN documentation for SuppressUnmanagedCodeSecurity:

This attribute can be applied to methods that want to call into native code without incurring the performance loss of a run-time security check when doing so. The stack walk performed when calling unmanaged code is omitted at run time, resulting in substantial performance savings.

Well, I must say substantial is an understatement. We are already running in full trust, and memcmp is a safe function by default, so no harm is done. The same cannot be said of memcpy, where we have to be extra careful. But that is another story.
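For context, the shape of the micro-benchmark behind numbers like the ones below is roughly this (a simplified sketch; it assumes the memcmp declaration above lives in the StdLib class that the Compare code further down uses, and the real harness is much more careful about warm-up and measurement error):

using System.Diagnostics;

public static unsafe class CompareBench
{
    public static long Measure(byte* b1, byte* b2, int size, int iterations)
    {
        StdLib.memcmp(b1, b2, size); // warm up the JIT and the caches before measuring

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            StdLib.memcmp(b1, b2, size);
        }
        sw.Stop();
        return sw.ElapsedTicks;
    }
}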

  • Size: 2 Native (with checks): 353 Managed: 207 Native (no checks): 217 - Gain: -4,608297%
  • Size: 4 Native (with checks): 364 Managed: 201 Native (no checks): 225 - Gain: -10,66667%
  • Size: 8 Native (with checks): 354 Managed: 251 Native (no checks): 234 - Gain: 7,26496%
  • Size: 16 Native (with checks): 368 Managed: 275 Native (no checks): 240 - Gain: 14,58334%
  • Size: 32 Native (with checks): 426 Managed: 366 Native (no checks): 276 - Gain: 32,6087%
  • Size: 64 Native (with checks): 569 Managed: 447 Native (no checks): 384 - Gain: 16,40625%
  • Size: 128 Native (with checks): 748 Managed: 681 Native (no checks): 554 - Gain: 22,92418%
  • Size: 256 Native (with checks): 1331 Managed: 1232 Native (no checks): 1013 - Gain: 21,61895%
  • Size: 512 Native (with checks): 2430 Managed: 2552 Native (no checks): 1956 - Gain: 30,47035%
  • Size: 1024 Native (with checks): 4813 Managed: 4407 Native (no checks): 4196 - Gain: 5,028594%
  • Size: 2048 Native (with checks): 8202 Managed: 7088 Native (no checks): 6910 - Gain: 2,575982%
  • Size: 8192 Native (with checks): 30238 Managed: 23645 Native (no checks): 23490 - Gain: 0,6598592%
  • Size: 16384 Native (with checks): 59099 Managed: 44292 Native (no checks): 44041 - Gain: 0,5699277%

The gains are undeniable, especially where it matters the most for us (16 – 64 bytes). As you can see, the managed optimizations are really good; at the 2048 level and up we are able to compete with the native version (accounting for the P/Invoke cost, that is).

The .Compare() code ended up looking like this:

[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int Compare(byte* bpx, byte* bpy, int n)
{
    switch (n)
    {
        case 0: return 0;
        case 1: return *bpx - *bpy;
        case 2:
            {
                int v = *bpx - *bpy;
                if (v != 0)
                    return v;
 
                bpx++;
                bpy++;
 
                return *bpx - *bpy;
            }
             
        default: return StdLib.memcmp(bpx, bpy, n);
    }
}
 

With these results:

  • Size: 2 - Managed: 184 Native (no checks): 238
  • Size: 4 - Managed: 284 Native (no checks): 280
  • Size: 8 - Managed: 264 Native (no checks): 257
  • Size: 16 - Managed: 302 Native (no checks): 295
  • Size: 32 - Managed: 324 Native (no checks): 313

The bottom line here is: until we are allowed to generate SSE2/AVX instructions in .NET, I doubt we will be able to do a better job, but we are eagerly looking forward to the opportunity to try it out when RyuJIT is released. *

* We could try to write our own optimized routine in C++/Assembler (if that is even possible), but that is out of scope at the moment (we have bigger fish to fry performance-wise).

Cool index batch stats

I talked about this in the past, but since we just used it to diagnose a real live problem, I can’t help but do it again. Take a look:

image

I especially like how it shows the actual costs, including using parallel computing.


Crunching the numbers

I ran the numbers, and I didn’t really believe them. So I put them in a chart, and I’m still not sure that I trust them.

Here you can see our income per month for the past year. Yes, July was pretty bad, with everyone on vacation and no one buying.

March 2014 was pretty good, with people buying RavenConf tickets and a lot of excitement around that. September and October were great. In fact, October was our best month ever.

And then we get to February 2015. In other words, this month. In other words, this month where we still have 10 more days to make sales.

image

I’m pretty happy :). But I wish that I had hard numbers on why.

Excerpts from the RavenDB Performance team report: JSON & Structs in Voron

What seems to be a pretty set-in-stone rule of thumb is that when building a foolproof system, you will also find a better fool. In this case, the fool was us.

In particular, during profiling of our systems, we realized that we did something heinous, something so bad that it deserves a true face palm:

In essence, we spent all this time to create a really fast key/value store with Voron. We spent man-years like crazy to get everything working just so… and then came the time to actually move RavenDB to use Voron. That was… a big task, easily as big as writing Voron in the first place. The requirements RavenDB makes of its storage engine are anything but trivial. But we got it working. And it worked well. Initial performance testing showed that we were quite close to the performance of Esent, within 10% – 15%, which was good enough for us to go ahead with.

Get things working, get them working right and only then get them working fast.

So we spent a lot of time making sure that things worked, and worked right.

And then the time came to figure out if we can make things faster. Remember, Voron is heavily based around the idea of memory mapping and directly accessing values at very little cost. This is important, because we want to use Voron for a whole host of stuff. That said, in RavenDB we use it to store JSON documents, so we obviously store the JSON in it. There is quite a lot of bookkeeping information that we keep track of.

In particular, let us focus on the indexing stats. We need to keep track of the last indexed etag per index, when it was changed, number of successful indexing attempts, number of failed indexing attempts, etc.

Here is how we updated those stats:

image

As you can imagine, a lot of the time was actually spent just serializing and deserializing the data back & forth from JSON. In fact, a lot of time was spent doing just that.

Now, the indexing status needs to be queried on every single query, so that was a high priority for us to fix. We did this by not storing the data as JSON, but storing the data in its raw form directly in Voron.

Voron gives us a piece of memory of size X; we put all the relevant data into that memory, and pull the data directly from the raw memory. No need for a serialization/deserialization step.
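As a sketch of what that looks like for the fixed size case (the field names here are illustrative, not the actual RavenDB layout):

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct IndexingStatsEntry
{
    public long LastIndexedEtag;           // illustrative only; not the real etag representation
    public long LastIndexedTimestampTicks;
    public int SuccessfulIndexingAttempts;
    public int FailedIndexingAttempts;
}

public static unsafe class IndexingStatsStorage
{
    public static void Write(byte* destination, ref IndexingStatsEntry stats)
    {
        // write the struct straight into the memory Voron handed us
        *(IndexingStatsEntry*)destination = stats;
    }

    public static IndexingStatsEntry Read(byte* source)
    {
        // read it straight back, no deserialization step
        return *(IndexingStatsEntry*)source;
    }
}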

There is a minor issue about the use of strings. While using a fixed size structure (like the indexing stats) is really easy (we just read/write to that memory), when dealing with variable length data (such as strings) you have a harder time of it. But we dealt with that as well, by defining a proper way to encode string lengths and read them back.
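The string handling amounts to a simple length prefix encoding, roughly like this (again, a sketch rather than the actual implementation):

using System.Text;

public static unsafe class RawStringEncoding
{
    // Writes a 4 byte length prefix followed by the UTF-8 bytes,
    // and returns the position right after the value.
    public static byte* WriteString(byte* destination, string value)
    {
        fixed (char* chars = value)
        {
            int byteCount = Encoding.UTF8.GetByteCount(chars, value.Length);
            *(int*)destination = byteCount;
            Encoding.UTF8.GetBytes(chars, value.Length, destination + sizeof(int), byteCount);
            return destination + sizeof(int) + byteCount;
        }
    }

    // Reads the value back and tells the caller where the next value starts.
    public static string ReadString(byte* source, out byte* next)
    {
        int byteCount = *(int*)source;
        next = source + sizeof(int) + byteCount;
        return new string((sbyte*)(source + sizeof(int)), 0, byteCount, Encoding.UTF8);
    }
}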

The results, by the way:

image

That means that we got close to 4 times faster! That is certainly going to help us down the road.

Improving the RavenDB Experience: Help is right where you need it

One of the things that we work very hard on is the whole experience of the product. Not just creating a good database, or writing great code, but also having good documentation, to explain exactly what is going on here.

We have now taken it a bit further, and we incorporate help documents in pretty much every section of the studio, so you can get relevant docs about what you are looking at by just clicking the big help button.

image

 

It ain’t Clippy, so it won’t try to read your mind, but we do hope that it will end up being useful.


RavenDB on Linux, first steps…

Early days, but we are now at the point where we are capable of running RavenDB itself on Linux.

image

That was the easy part; now we have to figure out all of our Windows dependencies and replace them with a Linux equivalent, find out where we are relying on things like case insensitive file names, etc, etc, etc.

The current work is going to be pretty much a lot of “let us get all the tests running on Linux”, which is expected to last quite a while. After that, we’ll be able to make sure that we run in a manner consistent with a Linux admin’s expectations, and then… world domination*!

* Insert maniacal sound track here.


The RavenDB Comics

The new flyers just arrived, and we also have a new comic to go with them. We are going to give them out at QCon London on March 4 – 5. I have a really good feeling about this. And I really like the comic.

Nitpicker: Yes, I know it isn’t readable at this resolution. We’ll post this online a few weeks after the QCon conference, come see us there!


Excerpts from the RavenDB Performance team report: Facets of information, Part II

In my previous post, I outlined the performance problem with facets. In this one, I want to discuss how we solved it.

For the purpose of discussion, we’ll deal with the following data model:

{ "Id": "phones/1", "Unlocked": false, "Brand": "Apple", ... }
{ "Id": "phones/2", "Unlocked": true,  "Brand": "LG", ... }
{ "Id": "phones/3", "Unlocked": false, "Brand": "Samsung", ... }
{ "Id": "phones/4", "Unlocked": true,  "Brand": "Apple", ... }
{ "Id": "phones/5", "Unlocked": false, "Brand": "Samsung", ... }

In order to handle that, we have to understand how Lucene actually works behind the scenes. Everything with Lucene is based on the notion of a document id. The result of a query is a set of document ids, and the internal structures refer to document ids on a regular basis.

So, if the result of a query is actually a set of document ids (Int32), we can represent them as a simple HashSet<int>. So far, so good. Now, we want to do a facet by the brand. How is Lucene actually storing that information?

There are two data structures of interest. The TermsEnum, which looks like this:

  • Brand:Apple – 2
  • Brand:LG – 1
  • Brand:Samsung – 2

This shows the number of documents per term. This is almost exactly what we would have wanted, except that this is global information, relevant for all the data. If our query is for unlocked phones only, we cannot use this data to answer the query.

For that, we need to use the TermsDocs, which looks like this:

  • Brand:Apple – [1,4]
  • Brand:LG – [2]
  • Brand:Samsung – [3,5]

So we have the term, and the document ids that contained that term.

Note that this is a very high level view of how this actually works, and it ignores a lot of stuff that isn’t actually relevant for what we need to understand here.

The important bit is that we have a good way to get all the document matches for a particular term. Because we can do that, it gives us an interesting option. Instead of trying to calculate things on a case-by-case basis, we changed things around. We used the information inside the Lucene index to create the following structure:

{
	"Brand": {
		"Apple": [1,4],
		"LG": [2],
		"Samsung": [3,5]
	}
}

There is caching and other obvious stuff also involved, naturally, but those aren’t important for the purpose of this talk.

Now, let us get back to our query and the required facets. We want:

Give me all the unlocked phones, and show me the facet for Brand.

A query in Lucene is going to return the following results:

[1,3,5]

Looking at the data like that, I think you can figure out where we are going with this. Here is the general algorithm that we have for answering facets.

var docIds = ExecuteQuery();
var results = {};
foreach(var facet in facets) {
	var termsForFacet = terms[facet];
	foreach(var term in termsForFacet){
		results[facet][term] = CountIntersection(docIds, term.DocumentsForTerm);
	}
}

The actual cost for calculating the facet is down to a mere set intersection, so the end result is that the performance of facets in the scenarios we tested has improved by more than an order of magnitude.
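For reference, the CountIntersection call in the sketch above can be as simple as a merge walk over two lists of document ids (illustrative only; it assumes both inputs are sorted in ascending order):

private static int CountIntersection(List<int> docIds, List<int> documentsForTerm)
{
    int count = 0, i = 0, j = 0;
    while (i < docIds.Count && j < documentsForTerm.Count)
    {
        if (docIds[i] < documentsForTerm[j])
            i++;       // only in the query results, skip it
        else if (docIds[i] > documentsForTerm[j])
            j++;       // only in the term, skip it
        else
        {
            count++;   // in both, so it contributes to the facet count
            i++;
            j++;
        }
    }
    return count;
}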

I’m skipping over a lot of stuff, obviously. With facets we deal with distinct queries, ranges, dynamic aggregation, etc. But we were able to shove all of them into this style of work, and the results certainly justify the work.

Oh, and for the normal case that we had already worked so hard to optimize, we are seeing a 50% improvement there as well, so I’m pretty happy.

Excerpts from the RavenDB Performance team report: Facets of information, Part I

Facets are annoying. Mostly because they tend to be used relatively rarely, but when they are used, they are used a lot, and in many interesting ways. Because they are fairly rarely used, let me explain what they are first.

We want to show the users all the recent phones. We can do this using the following query:

from phone in session.Query<Phone>() 
orderby phone.ReleaseDate descending
select phone;

So far, so good. But we want to do better than just show a potentially big list to the user. So we start faceting the results. We pull some interesting pieces of data from the results, and allow the user to quickly narrow down the details. Typical facets for a phone query would be:

  • Brand
  • Screen Size
  • Camera
  • Cost

Now, we can certainly give the option to search for them individually, but then you end up with something like this:

That is neither friendly nor very useful. So UX designers came up with this idea instead:

image

This gives the user a much easier way to look at the large data sets, and quickly narrow down the search to just the things they want. It also gives the user some indication about the actual data. I can see how changing the facet selection would likely change the number of results I have. This is a great way to offer data exploration capabilities. And in any kind of data repository, this is crucial. Shopping sites are a natural choice, but content/document management systems, job sites, information centers are all commonly making use of this feature.

I’m sorry for the long intro, but I feel that this feature needs to be placed in context, to explain its history and the various iterations it has gone through.

We first got this feature in Sep 2011. It was a pull request from Matt Warren, apparently based on some code that I spiked. I intentionally go back there, because that is the first time we see this code, and it was implemented in the simplest possible manner that could possibly work, to showcase this feature and to make sure that it works end to end. The underlying implementation looked something like this:

private void HandleTermsFacet(string index, Facet facet, IndexQuery indexQuery, IndexSearcher currentIndexSearcher, 
	Dictionary<string, IEnumerable<FacetValue>> results)
{
	var terms = database.ExecuteGetTermsQuery(index,
		facet.Name,null,
		database.Configuration.MaxPageSize);
	var termResults = new List<FacetValue>();
	foreach (var term in terms)
	{
		var baseQuery = database.IndexStorage.GetLuceneQuery(index, indexQuery);
		var termQuery = new TermQuery(new Term(facet.Name, term));

		var joinedQuery = new BooleanQuery();
		joinedQuery.Add(baseQuery, BooleanClause.Occur.MUST);
		joinedQuery.Add(termQuery, BooleanClause.Occur.MUST);

		var topDocs = currentIndexSearcher.Search(joinedQuery, 1);

		if(topDocs.totalHits > 0)
			termResults.Add(new FacetValue
			{
				Count = topDocs.totalHits,
				Range = term
			});
	}

	results[facet.Name] = termResults;
}

In other words, the overall logic was something like this:

  • Get a faceted query, with a list of facets.
  • For each facet:
    • Get all the terms for the field (if this is Operating System, then that would be: Android, iOS, BlackBerry, Windows Phone)
    • For each of those terms, execute the original query and add a clause that limits the results to the current term and value.

As you can imagine, this has some… performance issues. The more facets we had, and the more terms per facet, the costlier this got. Surprisingly enough, this code lasted almost unmodified for about 10 months, and the next major change was simply to process all of this in parallel.

Shortly thereafter, we had more changes, moving from this trivial model to a much more efficient one, where we execute the query once per facet (PR from Michael Weber, by the way), instead of once per facet/term. Dynamic aggregation in RavenDB is also done through facets, and that added a much more common code path, so more changes started happening to the facets code. In particular, we started handling caching, and we ended up reducing the run time by a lot by restructuring how we were handling things, so we would only execute a single query, then use low level index access to get all the other information we needed in an optimized fashion.

We got more customer usage samples, from the guys who had 70,000 facet queries to the people who wanted facets over multiple millions of results. All of that meant that we put a lot of thought into how we could reduce the cost of faceted queries, and over time things became more and more complex, but much faster. In particular, we ordered the sequence of operations for computing the query, added prefetching support and in general greatly reduced the amount of work we needed to do to just get the query working.

The end result was that the facet code looked something like this:

IndexedTerms.ReadEntriesForFields(currentState, fieldsToRead, allCollector.Documents,
    (term, docId) =>
    {
        List<Facet> facets;
        if (!sortedFacets.TryGetValue(term.Field, out facets))
            return;
        
        foreach (var facet in facets)
        {
            switch (facet.Mode)
            {
                case FacetMode.Default:
                    var facetValues = facetsByName.GetOrAdd(facet.DisplayName);
                    FacetValue existing;
                    if (facetValues.TryGetValue(term.Text, out existing) == false)
                    {
                        existing = new FacetValue
                        {
                            Range = GetRangeName(term)
                        };
                        facetValues[term.Text] = existing;
                    }
                    ApplyFacetValueHit(existing, facet, docId, null, currentState, fieldsCrc);
                    break;
                case FacetMode.Ranges:
                    // removed for brevity
                    break;
                default:
                    throw new ArgumentOutOfRangeException();
            }
        }

    });

There is a lot of stuff going on around this code, but this is where we actually computed the facets. Note that this basically calls the lambda once per matching term for the specified fields, for every document in the query.

And the performance was pretty good for our test cases. We ran this through several customers who were heavy facet users, and they saw a marked improvement in the performance of queries. But we recently got a customer that mentioned in passing that facet queries were expensive. As in, in the 5 – 10 seconds range per query. That raised some red flags; we consider requests that take more than 100ms to be suspicious, so we asked for more information, and we realized that they were using facets slightly differently from most of our customers.

Usually, facets are used to… well, facet the data on a search. Typical performance issues with facets in the past were with a large number of facets on a search result (I wasn’t kidding with the 70,000 facets). Because most of the customer feedback we got was around that, we focused on this issue primarily, to the point where even stuff like that was well behaved. This customer, however, was using facets not for searching, but as a way to manage the data itself. That meant that they were using a lot of facets on the entire data set.

So instead of having many facets on a medium-sized number of documents, they were using a small number of facets, but on the entire data set. Profiling this resulted in a few calls being highlighted:

  • GetOrAdd
  • TryGetValue

They are very cheap calls, on their own, but this piece of code was called literally tens of millions of times for this particular scenario. We spent some time with a profiler and started playing with the code, trying to see if we could remove the costly calls and optimize this code path.

The answer was… no. In fact, everything we did was in the range of a 5% – 7% change (in both directions, mind).

This sucked, because going back to the user and saying “that is the way it is” didn’t sit well with us. We got together, closed the shutters, turned off the lights, and solely by the light of the monitors, did a major hacking session. We completely changed the way we were handling facets, not just to avoid the problem with the calls to TryGetValue or GetOrAdd, but to change direction to a much more efficient mode.

This post is getting pretty long, so I’ll discuss this in my next post. For now, can you suggest options for improvements?

Figure this code?

I have just finished writing this code, and it occurred to me that on its own, it makes very little sense.

private static int RedactedFunctionName(List<int> a, List<int> b)
{
    List<int> n,m;
    if (a.Count > b.Count)
    {
        n = a;
        m = b;
    }
    else
    {
        n = b;
        m = a;
    }

    int nSize = n.Count;
    int mSize = m.Count;

    double o1 = nSize + mSize;
    double o2 = nSize * Math.Log(mSize, 2);

    int result = 0;
    if (o1 < o2)
    {
        int mi = 0, ni = 0;
        while (mi < mSize && ni < nSize)
        {
            if (n[ni] > m[mi])
            {
                mi++;
            }
            else if (n[ni] < m[mi])
            {
                ni++;
            }
            else
            {
                ni++;
                mi++;
                result++;
            }
        }
    }
    else
    {
        for (int i = 0; i < mSize; i++)
        {
            if (n.BinarySearch(m[i]) >= 0)
                result++;
        }
    }
    return result;
}

Can you figure out what this function does, why it behaves in this manner, and what preconditions it expects?


New RavenDB 3.0 Stable Release

We have been hard at work for the past three months, and according to the statistics, we got quite a few things done: 1,144 commits, a host of new features and serious performance improvements are out.

I’ll do another post shortly on our future work, but for now, here is what is new in this release:

We also made a lot of performance improvements, most of which were discussed on this blog, but I think that you’ll like the new numbers.

There are also things that we did mostly as preparation for future work, such as:

You’ll be hearing a lot more on those when we are ready for the next round of feature releases.

In the meantime, you can head here for the new build, hot off the press.


Excerpts from the RavenDB Performance team report: Do you copy that?

Note, this post was written by Federico. This series relies heavily on the optimization work we did on memory comparisons; if you haven’t read it, I suggest you start there first.

If memory compare is the bread, memory copy is the butter of a storage engine. Sooner or later we will need to send the data to disk; in the case of Voron we use memory mapped files, so we are doing the copy ourselves. Voron also uses a copy-on-write methodology to deal with tree changes, so there are plenty of copy commands around.

Unlike the memory compare case, where an optimized version already existed, in the case of memory copy we relied on a P/Invoke call to memcpy, because we usually move lots of memory around and it is hard to compete in the general case with an assembler-coded version. Scrap that, it is not hard, it is extremely hard!!! Don’t underestimate the impact that SSE extensions and access to the prefetching operation can have on memcpy. [1]

However, not all memory copies are created equal, and there is plenty of opportunity to do some smart copying; our code wasn’t exactly an exception to the rule. The first work involved isolating the places where the actual “big copies” happen, especially where the cost of actually doing a P/Invoke call gets diluted by the sheer amount of data copied [2] in the statistically representative case. You guessed right, for that we used our FreeDB example, and the results were very encouraging: there were a couple of instances of “big copies”. In those cases using the P/Invoke memcpy was not an issue, but for the rest we had plenty of different alternatives to try.

The usual suspects to take over from our P/Invoke implementation for small copies would be Buffer.BlockCopy, the MSIL cpblk operation, and Buffer.Memcpy, which is internal, but who cares, we can still clone it.
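To make that list concrete, this is roughly what the candidates look like for a small copy (a sketch: Buffer.Memcpy is internal, so in practice we would clone its source, and cpblk has to be emitted directly as IL, so neither is shown here):

using System;
using System.Runtime.InteropServices;

public static unsafe class SmallCopy
{
    // The P/Invoke baseline, declared the same way as the memcmp call from the compare series.
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
    public static extern IntPtr memcpy(byte* dest, byte* src, UIntPtr count);

    // The naive strategy that is usually followed for very small copies.
    public static void ForLoopCopy(byte* dest, byte* src, int count)
    {
        for (int i = 0; i < count; i++)
            dest[i] = src[i];
    }

    // Buffer.BlockCopy works on managed arrays rather than raw pointers.
    public static void BlockCopy(byte[] dest, byte[] src, int count)
    {
        Buffer.BlockCopy(src, 0, dest, 0, count);
    }
}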

The general performance landscape for all our alternatives is:

image

What we can get from this is: Buffer.Memcpy should be the base for any optimization effort until we hit 2048 bytes, where all of them behave more or less in the same way. If we have to choose between Buffer.BlockCopy and memcpy though, we will select the latter, because when running in 32 bits the former is pretty bad. [3]

Having said that, the real eye opener here is the for-loop, which is always a bad bet against Buffer.Memcpy. Especially because that’s the usual strategy followed when copying less than 32 bytes.

There is also another interesting tidbit here: Buffer.Memcpy has some pretty nasty discontinuities around 16 bytes and 48 bytes.

Size: 14 Memcpy: 128 Stdlib: 362 BlockCopy: 223 ForLoop: 213
Size: 16 Memcpy: 126 Stdlib: 336 BlockCopy: 220 ForLoop: 235
Size: 18 Memcpy: 170 Stdlib: 369 BlockCopy: 262 ForLoop: 303
Size: 20 Memcpy: 160 Stdlib: 368 BlockCopy: 247 ForLoop: 304
Size: 22 Memcpy: 165 Stdlib: 399 BlockCopy: 245 ForLoop: 312
 
Size: 44 Memcpy: 183 Stdlib: 499 BlockCopy: 257 ForLoop: 626
Size: 46 Memcpy: 181 Stdlib: 563 BlockCopy: 264 ForLoop: 565
Size: 48 Memcpy: 272 Stdlib: 391 BlockCopy: 257 ForLoop: 587
Size: 50 Memcpy: 342 Stdlib: 447 BlockCopy: 290 ForLoop: 674
Size: 52 Memcpy: 294 Stdlib: 561 BlockCopy: 269 ForLoop: 619

What would you do here?

[1] With RyuJIT there are new vectorised operations (SIMD). We will certainly look for opportunities to implement a fully managed  version of memcpy if possible.

[2] For a detailed analysis of the cost of P/Invoke and the Managed C++/CLI Interface you can go to this article: http://www.codeproject.com/Articles/253444/PInvoke-Performance

[3] For detailed metrics check: http://code4k.blogspot.com.ar/2010/10/high-performance-memcpy-gotchas-in-c.html

This exception never happened

I’m not sure what is going on, but it is strange.

image

This seems to be related to having multiple catch statements with the same exception name, but I can’t reproduce this in isolation.

And it seems that other people have run into it as well: http://stackoverflow.com/questions/5650868/net-exception-caught-is-unexpectedly-null

Worked around it by changing the parameter name… but still, really strange.


Excerpts from the RavenDB Performance team report: Optimizing Compare – Conclusions

Note, this post was written by Federico. We are done with the optimizations, but the question remains. Was all this trouble worthwhile?

If you remember from the first post in this series, based on our bulk insert scenario I said that each comparison on average takes us 97.5 nanoseconds. Now let’s first define what that means.

The most important thing is to understand what the typical characteristics of this scenario are.

  • All the inserts are in the same collection.
  • Inserts are in batches and with sequential ids.
  • Document keys share the first 10 bytes. (which is very bad for us but also explains why we have a cluster around 13 bytes).
  • No map-reduce keys (which are scattered all over the place)
  • No indexing happens.

In short, we gave ourselves a pretty bad deal.

We are dealing with a live database here; we can control the scenario, but it is insanely difficult to control the inner tasks as we do with micro-benchmarks. Therefore all percentage gains use call normalization, which allows us to make an educated guess of what the gain would have been under the same scenario. The smaller the difference in calls, the less noise or jitter in the measurement.

So without further ado this is the comparison before-after with call normalization.

image

With our nightmare scenario we get a 10% improvement. We went from 97.5 nanoseconds per compare to 81.5 nanoseconds. Which is AWESOME if we realize we were optimizing an already very tight method.

What does this mean in context? Let’s take a look at the top user on this workload, searching a page.

Before:

image

After:

image

It seems that we got better, but there is also something different. Why are those gets missing? In the first batch of optimizations we asked the compiler to aggressively inline those properties; now those properties are part of the call site with optimized code. A bit off-topic, but when optimizing, first look for these before going into an optimization frenzy.

This is the search method after optimization.

image

Our 10% improvement on the memory compare gave us almost an extra 4% in the search when bulk inserting. Which, for a method that is called billions of times on real life workloads, is pretty damn good.

And this concludes the series about memory compare, hope you have enjoyed it.

PS: With all the information provided, where do you guess I will look afterwards for extra gains?


Excerpts from the RavenDB Performance team report: Comparing Branch Tables

Note, this post was written by Federico. In the previous post, after inspecting the decompiled source using ILSpy, we were able to uncover potential things we could do.

In the last post we found out that the housekeeping of our last loop is non-negligible, but we also know its execution is bounded by n. From our static analysis we know that n cannot be bigger than a word, or 8 bytes.

To build an efficient loop, what we can do is exploit our knowledge of how to “hint” the compiler into building a branch table, and build customized code for each one of the cases.

image
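A minimal sketch of what such a switch based compare looks like (illustrative only, not the actual RavenDB code; CompareByWords stands in for the word-sized loop discussed earlier, and the JIT turns a dense switch like this into a jump table):

public static unsafe int CompareSmall(byte* lhs, byte* rhs, int n)
{
    switch (n)
    {
        case 0:
            return 0;
        case 1:
            return lhs[0] - rhs[0];
        case 2:
        {
            int r = lhs[0] - rhs[0];
            if (r != 0) return r;
            return lhs[1] - rhs[1];
        }
        case 3:
        {
            int r = lhs[0] - rhs[0];
            if (r != 0) return r;
            r = lhs[1] - rhs[1];
            if (r != 0) return r;
            return lhs[2] - rhs[2];
        }
        default:
            // cases 4 through 7 follow the same pattern; longer blocks
            // fall back to the word sized comparison
            return CompareByWords(lhs, rhs, n);
    }
}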

We ended up paying some housekeeping because the code gets unwieldy pretty fast. We know we can still go that road if we need to.

The performance of this loop trades comparisons for branch tables and gets rid of the comparison of n (notice that we have an endless loop here). But to squeeze out the last few nanoseconds there is a very important tidbit that could easily go unnoticed.

Mileage will vary in real workloads, but let’s make a couple of assumptions just for the purpose of illustrating the optimization.

Let’s say that in half of the requests the memory blocks are equal, and in the other half they are not. That means that half of our requests will be our worst case scenario; the good news is that we have optimized the hell out of it. But what about the other half? For simplicity we will assume that if 2 memory blocks are different, there is a uniform probability of them being different at position n.

That means:

  • If we are comparing 3-byte memory blocks, there is a chance of 1/3 that byte-1 is different and 2/3 that it’s not.
  • If byte-1 of the memory blocks is equal, then there is a 1/3 chance that byte-2 will be different.
  • And so on.

For the mathematically inclined, this looks like a geometric distribution (http://en.wikipedia.org/wiki/Geometric_distribution).

image

Or the probability distribution of the number X of Bernoulli trials needed to get a desired outcome, supported on the set { 1, 2, 3, ...}

So let’s say that we have X compares to do in order to rule out equality of the memory blocks (our worst case scenario).

From Wikipedia:

image

In our example, the probability p = 0.66. Why? Because that is the probability of “successes” until we “fail”… You will notice that success and fail are inverted in our case, because what matters to us is the failure. But mathematically speaking, using the other probability would be wrong; it is just a matter of interpretation.

image

Now the important tidbit here is: no matter how long the memory block is (or how low the p-value), the cumulative distribution function will always grow pretty fast. To us, that means that the chances of finding a difference in the first X bytes, if the memory blocks are different, are pretty high. In the context of Voron, let’s say we are looking for a particular key out there in a tree. If the keys are spread apart, we are going to be able to rule them out pretty fast until we are in the neighborhood of the key, where we will need to check more bytes to know for sure.

The code to achieve this optimization is quite simple; we already used it to optimize the last loop.

So we just copy the switch block into the entrance of the method. If the size of the memory block is 0 to 3 bytes, we will have a fast return. In the case of bigger memory blocks, we will consume the first 4 bytes (32 bits) before starting to test with words.

If I were to modify the original implementation and just add this optimization, what would the speed-up be under this theoretical workload? (I will publish that in the comments section in a couple of days, so make your educated guess.)

Ayende’s note: That said, there is actually another thing to consider here. It is very common for us to have keys with shared prefixes. For example, “users/1”, “users/2”, etc. In fact, we are trying hard to optimize for that route because that gives us a better structure for the internal B+Tree that we use in Voron.

Excerpts from the RavenDB Performance team report: Optimizers, Assemble!

Note, this post was written by Federico. In the previous post on the topic, after inspecting the decompiled source using ILSpy, we were able to uncover potential things we could do.

Do you remember the loop from a couple of posts back?

image

This loop was responsible for checking the last bytes, either because the word compare found a difference or because we are at the end of the memory block and the last block is not 8 bytes wide. You may remember it more easily because of the awful code it generated.
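In source form, the loop in question is roughly the following (a sketch reconstructed from the description, not the actual RavenDB code):

private static unsafe int CompareLastBytes(byte* lhs, byte* rhs, int n)
{
    // compare the remaining bytes one at a time until we find a difference or run out
    while (n > 0)
    {
        int r = *lhs - *rhs;
        if (r != 0)
            return r;

        lhs++;
        rhs++;
        n--;
    }
    return 0;
}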

image

After optimization work that loop looked sensibly different:

image

And we can double-check that the optimization did create a packed MSIL.

image

Now let’s look at the different operations involved. In red, blue and orange we have unavoidable instructions, as we need to do the comparison to be able to return the result.

13 instructions to do the work and 14 for the rest. Half of the operations are housekeeping in order to prepare for the next iteration.

The experienced developer will notice that if we had done this on the JIT output, each one of the increments and decrements could be implemented with a single assembler operation using a specific addressing mode. However, we shouldn’t underestimate the impact of memory loads and stores.

How would the following loop look in pseudo-assembler?

:START
Load address of lhs into register
Load address of R into raddr-inregister
Move value of lhs into R
Load byte into a 32 bits register (lhs-inregister)
Subtract rhs-inregister, [raddr-inregister]
Load int32 from R into r-inregister
Jump :WEAREDONE if non zero r-inregister
Load address of lhs into lhsaddr-register
Add 4 into [lhsaddr-register] (immediate-mode)
Load address of rhs into rhsaddr-register
Add 4 into [rhsaddr-register]   (immediate-mode)
Load address of n into naddr-register
Increment [naddr-register]
 
Load content of n into n-register
Jump :START if bigger than zero n-register
Push 0
Return
 
:WEAREDONE
Push r-inregister
Return

As you can see, the housekeeping is still a lot of operations.

Armed with this knowledge, how would you optimize this loop?

RavenDB in QCon London

We’ll be at QCon London on March 4-6 this year, talking about RavenDB and giving away some cool swag. I urge you to come and join us.

And if you can’t make it to QCon, I’ll be giving a free talk at Skills Matter on March 5. In this talk, I’ll discuss the kind of performance optimization work that went into RavenDB in the past few months, and what kind of things we had to do to get even more speed out of the system.
