Writing production systems the wrong way, on purpose
It should come as no surprise that our entire internal infrastructure is running on RavenDB. I wholly believe in the concept of dogfooding, and it has served us very well over the years.
I was speaking to a colleague just now, and it occurred to me that it may be surprising that we do certain things wrong, intentionally. It is fair to say that we know what the best practices for using RavenDB are, the things you can do to get the most out of it.
In some of our internal systems, we are doing things in exactly the wrong way. We do things that are inefficient in RavenDB, and we take the expedient route to implement them. A good example is a set of documents that can grow to multiple MB in size. They are also some of the most commonly changed documents in the system. Proper design would call for breaking them apart to make things easier for RavenDB.
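For illustration, here is roughly the shape of that trade-off. This is a made-up model sketched in Python, not our actual documents or the RavenDB client API; it only contrasts one large, frequently rewritten document with the same data broken into small documents.

# Hypothetical illustration, not our actual model or the RavenDB client API.
# The "wrong" shape: one big document that is rewritten on every small change.
big_doc = {
    "Id": "projects/1",
    "Name": "Big Project",
    # Thousands of entries, so the document grows to multiple MB.
    "Tasks": [{"Title": f"task-{i}", "Done": False} for i in range(10_000)],
}
# Updating a single task means storing the whole multi-MB document again.

# The "proper" shape: break the frequently changing pieces into their own documents.
project_doc = {"Id": "projects/1", "Name": "Big Project"}
task_docs = [
    {"Id": f"tasks/{i}", "Project": "projects/1", "Title": f"task-{i}", "Done": False}
    for i in range(10_000)
]
# A change now touches one small document instead of rewriting the big one.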
We intentionally modeled things this way. Well, I gave the modeling task to an intern with no knowledge of RavenDB, and then I made things worse for RavenDB in a few cases where he didn't get it far enough out of shape for my needs.
Huh?! I can hear you thinking. Why on earth would we do something like that?
We do this because it serves as an excellent proving ground for misuse of RavenDB. It shows us how the system behaves under non-ideal conditions. Not just when the user is able to match everything to the way RavenDB would like things to be, but how they are likely to build their system, unaware of what is going on behind the scenes and what the optimal solution would be. We want RavenDB to be able to handle that scenario well.
An example that pops to mind was having all the uploads in the system be attachments on a single document. That surfaced an O(N²) algorithm, very deep in the bowels of RavenDB, for placing a new attachment. It was completely invisible in the normal case, because it was fast enough under any normal or abnormal situation we could think of. But when we started getting high latency from uploads, we realized that adding the 100,002nd attachment to a document required us to scan through the whole list… it was obvious that we needed a fix. (And please, don't put hundreds of thousands of attachments on a document. It will work, and it is fast now, but it isn't nice.)
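To make the complexity concrete, here is a toy sketch of that failure shape in Python. It is not RavenDB's code, and the duplicate-name check is only an assumed reason for the scan; the point is that a linear pass per insert turns N inserts into O(N²) total work.

# Hypothetical sketch of the failure shape, not RavenDB's actual code.
def add_attachment_naive(attachments, name):
    # Placing a new attachment walks the entire existing list first
    # (here, to reject a duplicate name), so the k-th insert costs O(k)
    # and N inserts cost O(N^2) in total.
    for existing in attachments:
        if existing == name:
            raise ValueError(f"attachment {name!r} already exists")
    attachments.append(name)

names = []
for i in range(10_000):  # push this toward 100_000 to really feel the curve
    add_attachment_naive(names, f"upload-{i}")

Swapping the scan for a hash-based lookup flattens the curve; whatever the actual fix inside RavenDB looks like, the symptom we saw was that same quadratic growth.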
Doing the wrong thing on purpose means that we can be sure that when users do the wrong thing accidentally, they get good behavior.
Comments
If the wrong thing results in good behaviour... is it even the wrong thing any more?
(Some might say document DBs are the wrong thing and some sort of structured model amenable to querying should be used. Once upon a time that was true, due to limitations of computing hardware.)