An optimization story
I left work today very happy. There was a piece in the UI that was taking too long when run with a real-world data set. How slow? Let us call it 40 seconds to start with. This is a pretty common operation in the UI, so it was a good place to optimize.
I wasn't there for that part, but optimizing the algorithms used reduced the time from 40 seconds to 5 - 10 seconds, an impressive improvement by all accounts, but still long enough that users had to wait an appreciable amount of time for a common UI operation. Today we decided to tackle this issue and see if we could optimize it further.
The root action is loading some data and executing a bit of business logic on top of it. I checked the queries being generated, and while they weren't ideal, they weren't really bad (just not the way I would do things). At that point, we decided to isolate the issue in a test page, which would allow us to test just this function in isolation. Then we implemented it from scratch, as a plain data loading process.
The performance of that was simply amazing: 150 - 300 ms per operation, vs. 5 - 10 seconds in the optimized scenario. Obviously, however, we were comparing apples to oranges here. The real process also did a fair amount of business logic (and related data loading), which was the reason it was slow. I looked at the requirement again, then at the queries, and despaired.
I hoped that I would be able to use a clever indexing scheme and get the 1000% perf benefit using some form of SQL. But the requirement simply could not be expressed in SQL. And trying to duplicate the existing logic would only put us in the same position as before.
What to do... what to do...
The solution was quite simple: take the database out of the loop. For a performance-critical piece of the application, we really can't afford to rely on an external service (and the DB counts as one in this scenario). I spent some time loading the data at application startup, as well as doing up-front work on the data set to make it easier to work with.
This turned the operation into an O(1) one, where the constant cost is a small set of in-memory hash table lookups. And the performance? The performance story goes like this:
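The approach can be sketched roughly like this: do the expensive load once at startup, precompute hash tables keyed by whatever the business logic needs, and answer every request from memory. All the names here (`CustomerIndex`, `load_all_rows`, the tuple shape) are hypothetical stand-ins, since the post does not show the real data model:

```python
# A minimal sketch, assuming the data can be loaded once and indexed up front.
# Every name below is illustrative, not the actual application's API.

class CustomerIndex:
    def __init__(self, rows):
        # rows: iterable of (customer_id, region, record) tuples,
        # as they might come back from the one-time database load.
        self.by_id = {}
        self.by_region = {}
        for customer_id, region, record in rows:
            self.by_id[customer_id] = record
            self.by_region.setdefault(region, []).append(record)

    def lookup(self, customer_id):
        # O(1): a single hash table lookup, no database round-trip.
        return self.by_id.get(customer_id)

    def in_region(self, region):
        # Also O(1): the per-region list was precomputed at startup.
        return self.by_region.get(region, [])


# At application startup (load_all_rows() stands in for the real query):
#     index = CustomerIndex(load_all_rows())
# Afterwards, the UI operation is served entirely from memory:
#     record = index.lookup(some_customer_id)
```

The up-front indexing is what moves the cost from per-request to per-startup; the request path itself touches no external service at all.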
I went into the manager's office and asked him how fast he wanted this piece of functionality to run. He hesitated for a moment and then said: "A second?"
I shake my head, "I can't do that, can you try again?"
"Two seconds?" he asked.
"I am sorry", I replied, "I can do five."
Then I left the office and threw over my shoulder, "oh, but it is in milliseconds".
Sometimes I have a rotten sense of humor, but the stunned silence that followed that declaration was very pleasing.
I am lucky in that the data set is small enough to fit in memory. But I am not going to rely on that; we need to implement soft paging of the data anyway (to make the application startup time acceptable), so it will be able to handle the load easily enough even when the data set grows beyond the limits of memory (which I don't expect to happen in the next couple of years).
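The soft-paging idea could look something like the sketch below: instead of loading everything eagerly, fetch fixed-size pages of records on first touch and keep them cached. `load_page` and `PAGE_SIZE` are assumptions of mine, since the post only names the technique:

```python
# A hedged sketch of soft paging: pages are loaded on demand and then
# kept in memory. load_page is a hypothetical stand-in for the real
# database query that fetches one page's worth of records.

PAGE_SIZE = 1000

class SoftPagedStore:
    def __init__(self, load_page):
        self._load_page = load_page  # page_number -> {record_id: record}
        self._pages = {}             # cache of already-loaded pages

    def get(self, record_id):
        page_number = record_id // PAGE_SIZE
        if page_number not in self._pages:
            # First touch of this page: one external load, then cached,
            # so startup cost is spread across early requests.
            self._pages[page_number] = self._load_page(page_number)
        return self._pages[page_number].get(record_id)
```

Only the pages actually used ever hit the database, which keeps startup fast and lets the working set stay smaller than the full data set.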
Overall, it was a very impressive optimization, even if I say so myself.