Excerpts from the RavenDB Performance team report: Facets of information, Part II


In my previous post, I outlined the performance problem with facets, in this one, I want to discuss how we solved it.

For the purpose of discussion, we’ll deal with the following data model:

{ "Id": "phones/1", "Unlocked": false, "Brand": "Apple", ... }
{ "Id": "phones/2", "Unlocked": true,  "Brand": "LG", ... }
{ "Id": "phones/3", "Unlocked": false, "Brand": "Samsung", ... }
{ "Id": "phones/4", "Unlocked": true,  "Brand": "Apple", ... }
{ "Id": "phones/5", "Unlocked": false, "Brand": "Samsung", ... }

In order to handle that, we have to understand how Lucene actually works behind the scenes. Everything in Lucene is based on the notion of a document id. The result of a query is a set of document ids, and the internal structures refer to document ids on a regular basis.

So, if the result of a query is actually a set of document ids (Int32), we can represent them as a simple HashSet<int>. So far, so good. Now, we want to facet by brand. How is Lucene actually storing that information?

There are two data structures of interest. The TermsEnum, which looks like this:

  • Brand:Apple – 2
  • Brand:LG – 1
  • Brand:Samsung – 2

This shows the number of documents per term. This is almost exactly what we would have wanted, except that this is global information, relevant for all the data. If our query is for unlocked phones only, we cannot use this data to answer the query.

For that, we need to use the TermsDocs, which looks like this:

  • Brand:Apple – [1,4]
  • Brand:LG – [2]
  • Brand:Samsung – [3,5]

So we have the term, and the document ids that contained that term.

Note that this is a very high level view of how this actually works, and it ignores a lot of stuff that isn’t actually relevant for what we need to understand here.

The important bit is that we have a good way to get all the document matches for a particular term. Because we can do that, it gives us an interesting option. Instead of trying to calculate things on a case by case basis, we changed things around. We used the information inside the Lucene index to create the following structure:

{
	"Brand": {
		"Apple": [1,4],
		"LG": [2],
		"Samsung": [3,5]
	}
}
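As a rough sketch of how such a structure can be built (the names here are my own illustration; the real implementation reads the terms straight out of the Lucene index and adds caching), consider:

```python
# Build a facet-field -> term -> set-of-doc-ids map from the sample data
# above. The real code enumerates terms from the Lucene index rather
# than walking raw documents; this only mirrors the resulting shape.
docs = {
    1: {"Unlocked": False, "Brand": "Apple"},
    2: {"Unlocked": True,  "Brand": "LG"},
    3: {"Unlocked": False, "Brand": "Samsung"},
    4: {"Unlocked": True,  "Brand": "Apple"},
    5: {"Unlocked": False, "Brand": "Samsung"},
}

def build_facet_map(docs, fields):
    facet_map = {field: {} for field in fields}
    for doc_id, doc in docs.items():
        for field in fields:
            # Group document ids under the term they contain.
            facet_map[field].setdefault(doc[field], set()).add(doc_id)
    return facet_map

facet_map = build_facet_map(docs, ["Brand"])
# facet_map["Brand"] -> {"Apple": {1, 4}, "LG": {2}, "Samsung": {3, 5}}
```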

There is caching and other obvious stuff involved as well, naturally, but those aren’t important for the purpose of this post.

Now, let us get back to our query and the required facets. We want:

Give me all the unlocked phones, and show me the facet for Brand.

A query in Lucene is going to return the following results:

[1,3,5]

Looking at the data like that, I think you can figure out where we are going with this. Here is the general algorithm that we have for answering facets.

var docIds = ExecuteQuery();
var results = {};
foreach(var facet in facets) {
	results[facet] = {};
	foreach(var term in terms[facet]) {
		results[facet][term] = CountIntersection(docIds, term.DocumentsForTerm);
	}
}
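Putting the pieces together, a runnable sketch of that algorithm (names and data are illustrative, not RavenDB’s actual code) looks like this:

```python
# Facet counting as set intersections, following the pseudocode above.
facet_map = {
    "Brand": {
        "Apple": {1, 4},
        "LG": {2},
        "Samsung": {3, 5},
    }
}

def count_facets(query_doc_ids, facet_map, facets):
    results = {}
    for facet in facets:
        results[facet] = {}
        for term, docs_for_term in facet_map[facet].items():
            # The per-term cost is a single set intersection.
            results[facet][term] = len(query_doc_ids & docs_for_term)
    return results

query_doc_ids = {1, 3, 5}  # the result of the query, as in the post
print(count_facets(query_doc_ids, facet_map, ["Brand"]))
# {'Brand': {'Apple': 1, 'LG': 0, 'Samsung': 2}}
```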

The actual cost of calculating the facet comes down to a mere set intersection, so the end result is that the performance of facets in the scenarios we tested improved by more than an order of magnitude.
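That intersection itself can be made cheap. Since posting lists come back in document id order, one way to implement the counting (my own sketch, not necessarily what RavenDB does) is a single linear merge over two sorted lists, with no hash set needed:

```python
def count_intersection(a, b):
    """Count the common elements of two sorted lists of doc ids
    using a linear merge: advance whichever side is behind."""
    i = j = count = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            count += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return count

print(count_intersection([1, 3, 5], [3, 5]))  # 2
```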

I’m skipping over a lot of stuff, obviously. With facets we deal with distinct queries, ranges, dynamic aggregation, etc. But we were able to shove all of them into this style of work, and the results certainly justify the work.

Oh, and the normal, non-faceted case that we had already worked so hard to optimize? We are seeing a 50% improvement there as well, so I’m pretty happy.

More posts in "Excerpts from the RavenDB Performance team report" series:

  1. (20 Feb 2015) Optimizing Compare – The circle of life (a post-mortem)
  2. (18 Feb 2015) JSON & Structs in Voron
  3. (13 Feb 2015) Facets of information, Part II
  4. (12 Feb 2015) Facets of information, Part I
  5. (06 Feb 2015) Do you copy that?
  6. (05 Feb 2015) Optimizing Compare – Conclusions
  7. (04 Feb 2015) Comparing Branch Tables
  8. (03 Feb 2015) Optimizers, Assemble!
  9. (30 Jan 2015) Optimizing Compare, Don’t you shake that branch at me!
  10. (29 Jan 2015) Optimizing Memory Comparisons, size does matter
  11. (28 Jan 2015) Optimizing Memory Comparisons, Digging into the IL
  12. (27 Jan 2015) Optimizing Memory Comparisons
  13. (26 Jan 2015) Optimizing Memory Compare/Copy Costs
  14. (23 Jan 2015) Expensive headers, and cache effects
  15. (22 Jan 2015) The long tale of a lambda
  16. (21 Jan 2015) Dates take a lot of time
  17. (20 Jan 2015) Etags and evil code, part II
  18. (19 Jan 2015) Etags and evil code, Part I
  19. (16 Jan 2015) Voron vs. Esent
  20. (15 Jan 2015) Routing