re: Kiip’s MongoDB experience
We were asked several times to respond to this post, about the reasons Kiip moved away from MongoDB:
On the surface, RavenDB and MongoDB are really similar. Looking at the Good parts of the Kiip post, both offer schemalessness, easy replication, a rich query language, and access from multiple languages.
But under the hood, RavenDB operates in a completely different way than MongoDB does. The vast majority of the issues that Kiip ran into are actually low level (really low level, in some cases) issues that shouldn’t be visible to the user at all.
Non-counting B-Trees
The fact that MongoDB uses non-counting B-Trees? The only reason a user would care about that is that it impacts performance, but the Kiip post mentions a bunch of other issues that stem from it.
In RavenDB, we use Lucene as the indexing engine, and we really don’t care about the actual on-disk format of the indexes. We natively support Count() and limit / skip, because we feel that those are core parts of what most users need. In fact, our API gives you the total count of results for a paged query as a by-product of actually making the query. There isn’t any additional cost for doing this.
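To make that concrete, here is a minimal sketch using the RavenDB .NET client. The server URL and the Product class are made up for illustration, and the exact namespaces and statistics type name differ between client versions, so treat this as an outline rather than copy-paste code:

```csharp
using System;
using System.Linq;
using Raven.Client;
using Raven.Client.Document;
using Raven.Client.Linq;

// Hypothetical document class for illustration only.
public class Product
{
    public string Name { get; set; }
}

class PagingExample
{
    static void Main()
    {
        using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            RavenQueryStatistics stats;

            // One paged query; the total number of matches comes back as part
            // of the query statistics, with no separate Count() round trip.
            var page = session.Query<Product>()
                .Statistics(out stats)
                .Skip(2 * 25)   // page 3, 25 items per page
                .Take(25)
                .ToList();

            Console.WriteLine("Showing {0} of {1} results", page.Count, stats.TotalResults);
        }
    }
}
```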
Poor Memory Management
MongoDB relies on the OS to do the memory management, letting the OS memory manager do its work. That is actually quite a smart decision, because I can guarantee that more work has gone into optimizing the OS memory manager than could ever have been invested by the MongoDB project. But that is just part of the work.
In RavenDB, we are a managed application, so we don’t have direct control over memory. That doesn’t mean that we don’t manage it. We have several layers of caching in place, exactly because we know more than the OS about our own usage scenarios. In many cases, even if you are making a totally new request, it will never hit the disk, because we keep track of hot data and make sure that it resides in memory. This applies to both indexes and documents, mind. And during the indexing process we are very careful about memory management.
Sure, the OS memory manager is more optimized, but the database knows what is going on and can predict its own usage patterns. That is how RavenDB does a lot of the magic relating to auto-configuration.
Uncompressed field names
In MongoDB, it is considered good practice to shorten field names for space optimization. But MongoDB doesn’t do it for you automatically.
RavenDB doesn’t compress field names either, but unlike with MongoDB, shortening them yourself isn’t considered good practice. In fact, I think that it is a horrible little mess. There are a lot of arguments against compressing field names, not the least of which is that it makes it pretty hard to figure out what it is that you are actually looking at. Looking at the raw data, something that is done fairly frequently when debugging and troubleshooting, becomes much harder to work with and manage:
{ "a2": "nathan ", "d3": "", "a2": "2012-05-17T00:00:00.0000000", "h3": "2012-04-15T00:00:00.0000000", "r2": "archanid@sample.com", "o2": "8169cd4a-babf-4015-a3c7-4d503642e021", "o1": "products/NHProf" }
Anyone want to figure out what this document is about? And at least in this one, the data itself tells you a lot about the actual content.
There are far better alternatives in place. In RavenDB, we do full response / request compression, and we allow document compression on disk as well. If we were ever to get to the point where this would be a serious problem (and so far, it isn’t, even on large data sets), it would be less than a week of work to implement string interning inside RavenDB, so we would use the same string references for field values.
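For contrast, here is what the natural approach looks like. The class below is purely a guess at what the obfuscated document above might represent; the point is that RavenDB persists the full property names of your classes, so the raw JSON stays self-describing when you look at it during debugging:

```csharp
using System;

// Hypothetical class, guessing at what the obfuscated document might mean;
// RavenDB stores the property names as-is, so the document on disk reads
// the way the class reads.
public class License
{
    public string Name { get; set; }
    public DateTime PurchasedAt { get; set; }
    public DateTime ExpiresAt { get; set; }
    public string Email { get; set; }
    public Guid LicenseKey { get; set; }
    public string Product { get; set; }   // e.g. "products/NHProf"
}

// The stored document then looks roughly like this:
// {
//   "Name": "nathan",
//   "PurchasedAt": "2012-05-17T00:00:00.0000000",
//   "ExpiresAt": "2012-04-15T00:00:00.0000000",
//   "Email": "archanid@sample.com",
//   "LicenseKey": "8169cd4a-babf-4015-a3c7-4d503642e021",
//   "Product": "products/NHProf"
// }
```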
Global write lock
MongoDB (as of the current version at the time of writing: 2.0), has a process-wide write lock. … At this point, all other operations including reads are blocked because of the write lock.
Now, to be fair, RavenDB also has a write lock, but it isn’t nearly as bad as it is in MongoDB. RavenDB’s write lock is actually for… writes, and it doesn’t interfere with either reads or indexing. It is on the list of things to remove, but here is the crazy part: so far, and we have really demanding users, no one cares. The reason no one cares is that this is a really small lock, and it only affects writes; it is not a Stop the World type of thing.
Safe off by default
I am just going to let Kiip’s words stand for themselves (emphasis mine):
This is a crazy default, although useful for benchmarks. As a general analogy: it’s like a car manufacturer shipping a car with air bags off, then shrugging and saying “you could’ve turned it on” when something goes wrong.
RavenDB’s entire philosophy is built around Safe by Default. That is the only thing that really makes sense, because otherwise… well… here is what happened at Kiip:
We lost a sizable amount of data at Kiip for some time before realizing what was happening and using safe saves where they made sense (user accounts, billing, etc.).
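For contrast, here is a minimal sketch of the equivalent write with the RavenDB .NET client (the UserAccount class and the server URL are made up for illustration): the write is acknowledged by the server inside SaveChanges, and a failure surfaces as an exception rather than being silently dropped.

```csharp
using Raven.Client.Document;

// Hypothetical document class for illustration only.
public class UserAccount
{
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

class SafeByDefaultExample
{
    static void Main()
    {
        using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            session.Store(new UserAccount { Name = "nathan", Balance = 10m });

            // SaveChanges sends the pending writes to the server in a single
            // batch and waits for the acknowledgment; if the write fails, this
            // throws instead of silently losing data. There is no "unsafe"
            // default that you have to remember to turn off.
            session.SaveChanges();
        }
    }
}
```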
Offline table compaction
Every now and then, you need to take down MongoDB and let it compact its on disk data. This is another Stop the World operation, and the only way to keep up when you do so is to have a hot standby ready.
RavenDB does all maintenance tasks while the server is up and serving requests. You don’t need any downtime just because RavenDB needs to rearrange some data on disk; we take care of that live, with no interruption in service.
Secondaries do not keep hot data in RAM
As Kiip explains it:
The primary doesn’t relay queries to secondary servers, preventing secondaries from maintaining hot data in memory. This severely hinders the “hot-standby” feature of replica sets, since the moment the primary fails and switches to a secondary, all the hot data must be once again faulted into memory.
RavenDB doesn’t do so either, but for a drastically different reason. As I mentioned earlier, the way RavenDB works is quite different. When you are running a hot standby node, it will get the new data from the server and index it. We keep the indexes open, so a lot of the data is already going to be in memory. For the rest, as I mentioned, we have several layers of caches that help prevent needing to page gigabytes of data into memory.
Conclusion
As an utterly unbiased observer, I can say that RavenDB rocks.
What we are actually seeing here is that RavenDB puts different emphasis on different things. I really care about making the common application-level scenarios easy and nice to work with. And I have spent enough time supporting production-level apps that I tried very hard to make sure RavenDB can take care of itself in most scenarios without any hand holding.
Comments
Makes me appreciate the relatively little pain I've had with RavenDB running a live production website for the past two months.
The only issue I've had production-wise, which pales in comparison to Kiip LOSING data, was a VPS restart that messed up the indexes (always stale and missing results) and they had to be manually reset. We lost no data and the website stayed online while we reset the indexes manually (this issue is now fixed as well).
I appreciate choosing RavenDB even more after reading Kiip's blog post. Yikes!
Safe by Default, Transactions, and lack of a global write lock are why I went with RavenDB instead of MongoDB. Moreso Transactions than anything else. While Mongo has atomic document editing, there are many times I need to edit 2 documents at the same time (think 2 characters in a game trading items).
Offline table compaction and a global write lock are just things that make me wonder what they were thinking. SQL Server would be unsellable with such limitations.
For the same reason SQL Server has rock-solid online backup and online index operations.
You can't just tell a billion dollar corporation that its sales website will be down for maintenance regularly. It needs to be up always. That just doesn't fly.
When will ravenhq go live? Atm they are asking me to use Appharbor.
I don't think you can tell that RavenDB would be a better choice for Kiip - there's too little information. Every database is a good choice until you reach the limits of performance or functionality, and then strange things can happen. But the funniest part for me is that they migrated from MongoDB to PostgreSQL... Probably they'd have been better off using Postgres from the beginning.
... and please take into consideration that kiip's MongoDB was doing about 2000 updates per second together with an unknown number of concurrent reads (http://blog.engineering.kiip.me/post/14317098482/ec2-to-vpc-executing-a-zero-downtime-migration) so the global lock is there probably because without it the database would be just too fast