Ayende @ Rahien

Refunds available at head office

When select IS broken (or just slow)

Usually, "select" isn't broken is a good motto to follow. Occasionally, there are cases where this is case. In particular, it may not be that it is broken, it may very well be that the way it works doesn't match the things that we need it to do.

I spoke about an optimization story that happened recently, in which we managed to reduce the average time from 5 - 10 seconds to 5 - 15 milliseconds.

What we needed was to walk a tree structure, which was stored in a database, and do various interesting tree based operations on it. The most natural way of working with trees is with recursion, and SQL is just not the right way of dealing with it.

Deciding to load the entire table into memory, build a real tree structure, and perform all the operations on that structure paid off tremendously. What is important to remember is that we didn't have to do anything radical to the data model or the way the application worked. We only had to modify the implementation of the component that exposed that tree to the application.
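The post doesn't show the actual component, but the idea of loading the rows once and rebuilding the parent/child links in memory can be sketched roughly like this (the schema of `(id, parent_id, name)` and all names here are my own assumptions, not the real data model):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    name: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

def build_tree(rows):
    """Build an in-memory tree from (id, parent_id, name) rows.

    Returns the list of root nodes (rows with no parent)."""
    nodes = {id_: Node(id_, name) for id_, _, name in rows}
    roots = []
    for id_, parent_id, _ in rows:
        node = nodes[id_]
        if parent_id is None:
            roots.append(node)
        else:
            node.parent = nodes[parent_id]
            nodes[parent_id].children.append(node)
    return roots

rows = [(1, None, "root"), (2, 1, "a"), (3, 1, "b"), (4, 2, "c")]
roots = build_tree(rows)
```

From here, every tree operation is a plain in-memory traversal; the database is only touched once, at load time.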

One of the things we had to deal with was the case where the amount of data would exceed available memory. At least, we thought we had to deal with it.

But our tree was very simple; it consisted of a few properties, and that is it. Let us do the math on this, shall we?

  • Name - 50 chars, unicode - 100 bytes
  • 4 decimal fields - 16 bytes each = 64 bytes
  • 3 boolean fields - 3 bytes
  • Parent pointer - 4 bytes
  • Set of children - average of 10 per node - 40 bytes + ~50 bytes bookkeeping

This is very rough, of course, but it will do. It puts the memory cost of a node at roughly 256 bytes (261, to be exact). We will use 256, because it is easier to work with.

Now, with 256 bytes per node, how many can we reasonably use?

Well, 100 MB will hold 409,600 nodes or so. Which is a pretty good number, I say. A table of that size is considered big by most people. A GB of memory will give us 4,194,304 items in the tree, and keep the traversal speed near instantaneous. At that point, I would start thinking about the size of the node, because 256 bytes is a big size. A more realistic size would be 64 bytes or so (drop the name, pack the decimals, use a linked list for children), which would give me 16,777,216 nodes for the same memory requirement.
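The numbers above are straight division, and are easy to check (using binary megabytes and gigabytes, which is what the figures in the text correspond to):

```python
MB = 1024 * 1024
GB = 1024 * MB

# 256 bytes per node, as estimated above
print((100 * MB) // 256)  # nodes that fit in 100 MB -> 409600
print(GB // 256)          # nodes that fit in 1 GB   -> 4194304

# a packed 64-byte node for the same gigabyte
print(GB // 64)           # -> 16777216
```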

All of those numbers are greater than the current and expected size of the data set, so there isn't a reason to care much beyond that.

The important thing here is to understand that the usual truth about "let the tool do the optimization" doesn't really hold true when you have specific scenarios. For solving very specific, very narrow circumstances, you can generally come up with a much better approach than the generic one.

Of course, this approach does not allow any generalization, and it doesn't have the other benefits that using the common platform might have offered (we need to handle our own transactions, for example).

Keep that in mind.


10/16/2008 03:52 PM by

hm... perhaps i am missing something

this tree would be a read-only tree, correct?

wouldn't traversing this tree take a really (really) long time?

10/16/2008 04:37 PM by
Anonymous fool

This is great if you can keep your entire tree in memory.

There are other approaches that will work with nearly equal performance as keeping the tree in memory but scale much better.

One now-aging approach is the one Joe Celko called nested sets. We used this at my last company. It has been documented in a number of places, one of which is this CodeProject article:


Another approach (of my own design) is keeping another table that I call a descendancy list. The cool thing about this approach is that the table is very narrow, and maintaining the left-right values that slow down inserts and moves in the nested-set approach isn't an issue. The admittedly bad thing is the need for a second table to hold the hierarchy relationships for efficient queries.

Another approach is the recursive query feature that was built into SQL Server 2005, which has performance on par with the structured approaches above. This is cool because the natural parent-child relationships remain in the intuitive just-the-parent-id-in-the-child-record form. The feature is called a CTE (Common Table Expression).
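The recursive-CTE idea the comment describes can be demonstrated without SQL Server at all; SQLite has the same `WITH RECURSIVE` feature, so here is a minimal, self-contained sketch (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO node VALUES (?, ?, ?)",
                 [(1, None, "root"), (2, 1, "a"), (3, 1, "b"), (4, 2, "c")])

# Walk the whole subtree rooted at id = 1 in a single query
rows = conn.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM node WHERE id = 1
        UNION ALL
        SELECT n.id, n.name, s.depth + 1
        FROM node n JOIN subtree s ON n.parent_id = s.id
    )
    SELECT name, depth FROM subtree ORDER BY depth, id
""").fetchall()
print(rows)  # [('root', 0), ('a', 1), ('b', 1), ('c', 2)]
```

This gets you tree *retrieval* in one round trip, which is exactly the scope Ayende pushes back on below: it fetches hierarchical data, but arbitrary tree transformations still happen in application code.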


Oracle has something similar with their "connect by" clause...


Not using the above? I guess you're on your own. Do some research and find something that works for your situation.

...or load the tree in memory at the start of your app. Your call. :)

10/17/2008 06:38 AM by
Andrey Shchekin

"The most natural way of working with trees is with recursion, and SQL is just not the right way of dealing with it."

CTEs in SQL2005/DB2, hierarchyid in SQL2008, Connect By in Oracle.

I do agree with your main point, however.

10/17/2008 08:43 AM by
Ayende Rahien


Those are just ways to get data out of a hierarchical structure stored in rows.

As a simple example, say you want to filter the tree, and promote the children of filtered-out nodes to the next level up.

You can't really do this with SQL.
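In memory, though, that filter-and-promote operation is a short recursive function. A minimal sketch (the `Node` class and the promotion rule are my own reading of the example, not code from the post):

```python
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def filter_promote(children, keep):
    """Return a new child list where nodes failing `keep` are removed
    and their (already filtered) children take their place."""
    result = []
    for node in children:
        node_children = filter_promote(node.children, keep)
        if keep(node):
            node.children = node_children
            result.append(node)
        else:
            result.extend(node_children)  # promote grandchildren one level up
    return result

root = Node("root", [Node("a", [Node("c")]), Node("b")])
root.children = filter_promote(root.children, lambda n: n.name != "a")
print([n.name for n in root.children])  # ['c', 'b'] -- 'c' was promoted
```

Expressing that "children replace their removed parent, at the parent's position" rule in a set-based query is the part that SQL handles poorly.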

10/17/2008 06:13 PM by

You forget the cases where select is the right tool for the job but simply won't work as expected because the database engine is choosing the wrong execution plan. I had a case like that with MS SQL 2000, and the developers preferred to ignore it because the "engine knows more" and working at that level is not "a good practice" (apparently getting broken batch reports because SQL Server was choking on the data IS good practice).

10/17/2008 09:56 PM by
Ayende Rahien


Traversing in-memory trees is fast.

10/17/2008 09:58 PM by
Ayende Rahien


There is one problem with all of your suggestions: they assume that the hierarchy is fixed and unaffected by business logic, which is not the case in my scenario.

10/18/2008 06:52 AM by

I am missing a point... who is "Fool" here?

Comments have been closed on this topic.