Production postmortem: Your math is wrong, recursion doesn’t work this way
We got a call from a customer, a pretty serious one. RavenDB is used to compute billing charges for customers. The problem was that in one of their instances, the value for a particular customer was wrong. Worse, it was wrong on just one instance of the cluster, so the customer would see different values in different locations. We take such things very seriously, so we started an investigation.
Let me walk you through reproducing this issue. We have three collections (Users, Credits, and Charges):
The user performs actions in the system, which issue charges. These are balanced by the Credits in the system for the user (payments they made). There is usually no 1:1 mapping between charges and credits.
Here is an example of the data:
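The data in the original post is shown as a screenshot; the documents are shaped roughly like this (a sketch, with illustrative field names and amounts):

```
// Users collection (document id: users/ayende)
{ "Name": "ayende" }

// Credits collection: payments the user made
{ "UserId": "users/ayende", "Amount": 500 }

// Charges collection: costs the user incurred
{ "UserId": "users/ayende", "Amount": 502 }
```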
And now, let’s look at the index in question:
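The index definition is also a screenshot in the original; a minimal sketch of a multi-map-reduce index of this shape (class and property names here are assumptions) looks like this:

```csharp
using System.Linq;
using Raven.Client.Documents.Indexes;

public class User   { public string Id;     public string Name;    }
public class Credit { public string UserId; public decimal Amount; }
public class Charge { public string UserId; public decimal Amount; }

public class Users_Balance : AbstractMultiMapIndexCreationTask<Users_Balance.Result>
{
    public class Result
    {
        public string Id { get; set; }      // the user document id
        public string Name { get; set; }    // only the Users map fills this in
        public decimal Amount { get; set; } // credits add, charges subtract
    }

    public Users_Balance()
    {
        AddMap<User>(users =>
            from u in users
            select new Result { Id = u.Id, Name = u.Name, Amount = 0 });

        AddMap<Credit>(credits =>
            from c in credits
            select new Result { Id = c.UserId, Name = null, Amount = c.Amount });

        AddMap<Charge>(charges =>
            from c in charges
            select new Result { Id = c.UserId, Name = null, Amount = -c.Amount });

        // The reduce function is examined (and fixed) further below.
        Reduce = results =>
            from result in results
            group result by result.Id into g
            let user = g.FirstOrDefault(x => x.Name != null)
            select new Result
            {
                Id = user.Id,
                Name = user.Name,
                Amount = g.Sum(x => x.Amount)
            };
    }
}
```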
This is a multi map-reduce index that aggregates data from all three collections. Now, let’s run a query:
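The query itself is another screenshot; through the C# client (using the sketch above, and assuming a document store in `store`) it would be along these lines:

```csharp
using var session = store.OpenSession();

var rows = session.Query<Users_Balance.Result, Users_Balance>()
    .Where(x => x.Id == "users/ayende")
    .ToList();

// Expected: a single row where charges and credits roughly cancel out.
// Actual, on the misbehaving node: a badly skewed amount, because the
// index produced two reduce results and only one of them carries the Id.
```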
This is… wrong. The charges & credits should be more or less aligned. What is going on?
RavenDB has a feature called the Map-Reduce Visualizer that helps with exactly such scenarios. Let’s see what it tells us, shall we?
What do we see in this image?
You can see that we have two results for the index. Look at Page #854 (at the top): we have one result with –67,343 and another with +67,329. The second result also has no Id property and no Name property.
What is going on?
It is important to understand that the image we have here represents the physical layout of the data on disk. We run the maps over the documents, then run the reduce on each page individually, and then reduce the page outputs together again. This approach allows us to handle even vast amounts of data with ease.
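In other words, the reduce function may be fed its own output. A correct reduce has to satisfy a simple contract, sketched below (with `Reduce` as a hypothetical helper over the map output):

```csharp
// Reducing page by page and then re-reducing the partial results must
// produce exactly the same answer as reducing everything in one pass.
var singlePass = Reduce(mappedEntries);

var rereduced = Reduce(
    mappedEntries.Chunk(1024)                  // the "pages" of map output
                 .Select(page => Reduce(page)) // reduce each page on its own
                 .SelectMany(r => r));         // gather the partial results

// For a correct index, singlePass and rereduced are identical.
```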
Look at what we have in Page #540. We have two types of documents there: the users/ayende document and the charges documents. Indeed, at the top of Page #540 we can see the result of reducing all the results in the page. The data looks correct.
However…
Look at Page #865: what is going on there? It looks like we have most of the credits there. Most importantly, we don’t have the users/ayende document there. Let’s take a look at the reduce definition we have:
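Reconstructed from the sketch above (the original is a screenshot), the reduce was shaped like this:

```csharp
Reduce = results =>
    from result in results
    group result by result.Id into g
    // BUG: assumes the entry produced by the Users map (the one carrying
    // the Name) shows up in every invocation of the reduce.
    let user = g.FirstOrDefault(x => x.Name != null)
    select new Result
    {
        Id = user.Id,     // null when the page holds only credits/charges
        Name = user.Name, // ditto (inside an index, RavenDB null-propagates
                          // instead of throwing)
        Amount = g.Sum(x => x.Amount)
    };
```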
What would happen when we execute it on the results in Page #865? Well, there is no entry with the Name property there, so the reduce produces no Name, but also no Id. And we project this result out to the next stage anyway.
When we reduce the data again across all the entries in Page #854 (the root one), we group by the Id property, but the Id computed on the different pages is different. So we get two separate results here.
The issue is that the reduce function isn’t safe to run recursively: it assumes that every invocation will see an entry with the Name property. That isn’t valid, since RavenDB is free to shuffle the deck during the reduce process. The index should be robust to reducing the data multiple times.
Indeed, that is why we had different outputs on different nodes: we don’t guarantee that nodes will process results in the same order, only that the output will be identical, provided the reduce function is correct. Here is the fixed version:
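In the same sketched shape, the key change is taking the Id from the group key, which survives every reduce invocation:

```csharp
Reduce = results =>
    from result in results
    group result by result.Id into g
    // Name may be missing in an intermediate step; that is fine, because
    // the re-reduce that does see the Users entry will fill it in.
    let user = g.FirstOrDefault(x => x.Name != null)
    select new Result
    {
        Id = g.Key, // always present, no matter how the data was paged
        Name = user.Name,
        Amount = g.Sum(x => x.Amount)
    };
```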
And the query is now showing the correct results: a single row for users/ayende, whose amount is the sum of the two partial results we saw earlier (–67,343 + 67,329 = –14).
That is much better.
More posts in "Production postmortem" series:
- (12 Dec 2023) The Spawn of Denial of Service
- (24 Jul 2023) The dog ate my request
- (03 Jul 2023) ENOMEM when trying to free memory
- (27 Jan 2023) The server ate all my memory
- (23 Jan 2023) The big server that couldn’t handle the load
- (16 Jan 2023) The heisenbug server
- (03 Oct 2022) Do you trust this server?
- (15 Sep 2022) The missed indexing reference
- (05 Aug 2022) The allocating query
- (22 Jul 2022) Efficiency all the way to Out of Memory error
- (18 Jul 2022) Broken networks and compressed streams
- (13 Jul 2022) Your math is wrong, recursion doesn’t work this way
- (12 Jul 2022) The data corruption in the node.js stack
- (11 Jul 2022) Out of memory on a clear sky
- (29 Apr 2022) Deduplicating replication speed
- (25 Apr 2022) The network latency and the I/O spikes
- (22 Apr 2022) The encrypted database that was too big to replicate
- (20 Apr 2022) Misleading security and other production snafus
- (03 Jan 2022) An error on the first act will lead to data corruption on the second act…
- (13 Dec 2021) The memory leak that only happened on Linux
- (17 Sep 2021) The Guinness record for page faults & high CPU
- (07 Jan 2021) The file system limitation
- (23 Mar 2020) high CPU when there is little work to be done
- (21 Feb 2020) The self signed certificate that couldn’t
- (31 Jan 2020) The slow slowdown of large systems
- (07 Jun 2019) Printer out of paper and the RavenDB hang
- (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
- (25 Dec 2018) Handled errors and the curse of recursive error handling
- (23 Nov 2018) The ARM is killing me
- (22 Feb 2018) The unavailable Linux server
- (06 Dec 2017) data corruption, a view from INSIDE the sausage
- (01 Dec 2017) The random high CPU
- (07 Aug 2017) 30% boost with a single line change
- (04 Aug 2017) The case of 99.99% percentile
- (02 Aug 2017) The lightly loaded trashing server
- (23 Aug 2016) The insidious cost of managed memory
- (05 Feb 2016) A null reference in our abstraction
- (27 Jan 2016) The Razor Suicide
- (13 Nov 2015) The case of the “it is slow on that machine (only)”
- (21 Oct 2015) The case of the slow index rebuild
- (22 Sep 2015) The case of the Unicode Poo
- (03 Sep 2015) The industry at large
- (01 Sep 2015) The case of the lying configuration file
- (31 Aug 2015) The case of the memory eater and high load
- (14 Aug 2015) The case of the man in the middle
- (05 Aug 2015) Reading the errors
- (29 Jul 2015) The evil licensing code
- (23 Jul 2015) The case of the native memory leak
- (16 Jul 2015) The case of the intransigent new database
- (13 Jul 2015) The case of the hung over server
- (09 Jul 2015) The case of the infected cluster
Comments
Changing Id to be assigned from the group is certainly a way to fix it, but one thing that pops into my mind is performance. This map-reduce index involves three different collections, and the key output of this index is the amount.
First alternative
If we only join the Credits and Charges collections, without worrying about Users, then the map-reduce index only has to track the user Id and the amount. That way, this error can’t happen, and the load is minimal as well. Since a user often has many other fields, such as phone, email, etc., in the business logic it might be best to lazily load the user and lazily fetch the user’s amount from the index. First, it makes the map-reduce index simple, and a change to any property of the user won’t cause the map-reduce index to re-run.
I’m not sure how well RavenDB handles properties that aren’t involved in the index; if a user’s email changes, I assume the map-reduce index will re-run for any data associated with that user.
Second alternative
Instead of one map-reduce index over two or three collections, let’s have a map-reduce index on each collection. That way, when a new credit or charge is added or modified, only one index is triggered. Since we have lazy operations in the RavenDB client, retrieval shouldn’t be an issue, unless we want to order the results.
Third alternative
Same as the second approach, with two map-reduce indexes on Charges and Credits, except we summarize the user’s amount on the user document each time a credit or charge is created or updated. That requires discipline to make sure it is always kept up to date; if an update ever fails, we may need to go through all users and summarize again.
Then, similar to manually summarizing on each modification, I remember Oren had a blog post about saving index results into another, artificial collection, and indexing that collection instead. Oren can probably answer this: are we able to store the output of two map-reduce indexes, and then have another index that merges those outputs?
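Such as this sketch (illustrative names, using RavenDB’s OutputReduceToCollection to create the artificial documents):

```csharp
// Sketch: each per-collection map-reduce writes its output to an
// artificial collection; a third index can then cover those documents.
public class Total { public string UserId; public decimal Amount; }

public class Credits_Total : AbstractIndexCreationTask<Credit, Total>
{
    public Credits_Total()
    {
        Map = credits =>
            from c in credits
            select new Total { UserId = c.UserId, Amount = c.Amount };

        Reduce = results =>
            from r in results
            group r by r.UserId into g
            select new Total { UserId = g.Key, Amount = g.Sum(x => x.Amount) };

        OutputReduceToCollection = "CreditTotals"; // artificial documents
    }
}
// A matching Charges_Total writing to "ChargeTotals" would complete the pair.
```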
The benefit of having individual map-reduce indexes for charges and credits is that we can use them to summarize who had the most credits and who had the most charges. Of course, in the existing map-reduce we can also do that, by having three fields: amount, charges, and credits. The only benefit of joining the three collections is having more information from the Users collection for full-text search or filtering purposes.
So, Oren, from the point of view of speed, memory usage, and operational cost, what would be the better approach in such a scenario? Of course, we still need to consider the actual business requirements.
Jason,
You are laying out the options in a good way. One thing to note: the crucial question is whether you even need the third map at all. In the case above, we just pull the name, and the question is whether we need it for _queries_. Otherwise we can include or load the value, and usually that is what I would recommend.
The complexity starts when you have additional data there to deal with. For example, maybe the account type changes when you declare an account as past due, etc.
For the "most credits" / "most debits" thing, you can also have fields for that, and I would recommend having a single index here.
Oren:
Sorry, my question wasn’t well described. When I was going through the scenario earlier, I was thinking about joined indexes and the cost when any entity changes. An index that involves only a single collection is probably the simplest case, though there are still details around how the index output is maintained and how the database updates it.
If we focus just on indexes associated with multiple collections, we have a few variants.
One-to-many joined index
In the scenario of this blog, that would be an index over Credits or Charges that loads the related user.
For such an index, any time a charge is added, modified, or deleted, the cost is minimal. But when a user is modified, the cost can be huge, depending on how many charges are associated with the given user.
One thing I am not certain about: if a given user’s phone number changes, will that cause the index above to go through all the charges associated with that user, even though the user’s name and email have not changed?
Multiple-collection map-reduce
For such an index, like the one in this blog: what kind of entity change costs the most resources?
If the index is smart enough, I think it could figure out the user document ID and modify the existing index output without going through all the charges and credits associated with that user. As business logic developers, it would be good to understand the cost of each index, and to take it into consideration for any new feature or improvement to existing logic.
Once we have more insight into the costs, we can plan around them. For example, if deleting a credit or charge were the most expensive case, but a business rule says we never delete credits or charges, then such an index wouldn’t be costly at all.
Jason,
In the case of your index, choosing between load vs. multi-map, I would go with multi-map. In such cases, we need to do a lot less work to update the index.
Load document will trigger reindexing of all referencing documents for any change in the document, yes.
For such an index, a user modification will be the most expensive operation. Note that this runs in the background, so you won’t see it, but it is still costly.
A multi-map, on the other hand, can update just the user entry and then compute the final result.
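For contrast, the load-based version would be shaped like this sketch; note the comment about what triggers re-indexing:

```csharp
// Sketch: a single-map index that resolves the name with LoadDocument.
// Editing users/ayende (even a field the index never uses) marks every
// charge referencing it for re-indexing.
public class Charges_WithUserName : AbstractIndexCreationTask<Charge>
{
    public Charges_WithUserName()
    {
        Map = charges =>
            from c in charges
            let user = LoadDocument<User>(c.UserId)
            select new { c.UserId, user.Name, c.Amount };
    }
}
```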
Cool, thanks Oren. Multi-map indexes are something I haven’t utilized much; thanks for clarifying.