Production postmortem: The case of the “it is slow on that machine (only)”


A customer called with a major issue: on one of their machines, a particular operation took too long. In fact, it took quite a bit more than too long. Instead of the expected few milliseconds (or, at worst, seconds), the customer was seeing values in the many minutes.

At first, we narrowed it down to an extreme load on the indexing engine. The customer had a small set of documents that were referenced, using LoadDocument, by a large number of other documents. That meant that whenever those documents were updated, we would need to re-index all the referencing documents.
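
To make the mechanics concrete, here is a minimal sketch of what such an index might look like, written against the older RavenDB client API (the document and index names here are illustrative, not the customer’s). An index that calls LoadDocument takes a dependency on the referenced document, so whenever that document changes, every document whose index entry loaded it has to be re-indexed.

```csharp
using System.Linq;
using Raven.Client.Indexes;

// Illustrative documents - not the customer's actual model.
public class Order
{
    public string Id { get; set; }
    public string CompanyId { get; set; }
}

public class Company
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// The index pulls a field from the referenced document via LoadDocument.
// RavenDB records that reference, so updating a single Company document
// forces re-indexing of every Order that loaded it.
public class Orders_ByCompanyName : AbstractIndexCreationTask<Order>
{
    public Orders_ByCompanyName()
    {
        Map = orders => from order in orders
                        select new
                        {
                            CompanyName = LoadDocument<Company>(order.CompanyId).Name
                        };
    }
}
```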

In their case, that meant tens to hundreds of thousands of referencing documents, so an update to a single document could force re-indexing of a quarter of a million documents. Except… that this wasn’t actually the whole story. What drove everyone crazy was that this was a reasonable, truthful and correct answer. And yet, on one machine the exact same thing took 20 – 30 seconds, while on the customer’s machine the process took 20 minutes.

The customer also assured us that the documents everyone referenced were very rarely, if ever, touched or modified, so that shouldn’t have been the issue.

The machine with the problem was significantly more powerful than the one without the problem. The issue had also started occurring recently, out of the blue. Tracing the resource utilization in the system showed moderate CPU usage, low I/O and memory consumption, and nothing much really going on. We looked at the debug logs, and we couldn’t really figure out what it was doing. There were very large gaps in the log where nothing seemed to be happening. % Time in GC was low, which ruled out a long GC pause that would have explained the gaps in the logs.

This was version 2.5, which predates all of our introspection efforts, so figuring out what was going on was pretty hard. I’ll have another post talking about that side of things later.

Eventually we gained access to the machine, were able to reproduce the problem, and took a few mini dumps along the way. Looking at the stack traces, we found this:

[Stack trace from the mini dump: the time was being spent inside RavenDB’s Suggestions code]

And now it all became clear. Suggestions in RavenDB is a really cool feature, which allows you to ask RavenDB to figure out what the user actually meant to ask. It is also extremely CPU intensive during indexing, which is really visible when you try to pump a large number of documents through it. And it is a single-threaded process.

Except that the customer wasn’t using Suggestions in their application…

So, what happened? In order to hit this issue, all of the following needed to be true:

  • Suggestions had to be enabled on the relevant index or indexes. Check – while the customer wasn’t currently using the feature, they had used it in the past, and unfortunately that setting stuck.
  • A very large number of documents needed to be indexed. Check – that happened when they updated one of the commonly referenced documents.
  • A commonly referenced document needed to be modified. Check – that happened when they started work for the next year, which modified those rarely touched documents.

Now, why didn’t it manifest itself on the other machines? Simple: on those machines, they were running the current version of the application, which didn’t use suggestions. On the machines that were problematic, they had upgraded to the new version, so even though they weren’t using suggestions anymore, the old setting was still in effect, and still running.

According to a cursory check, those suggestions had been running there for over 6 months, and no one had noticed, because you needed the confluence of all three factors to actually hit this issue.

Removing the suggestions once we knew they were there was very easy, and the problem was resolved.
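
For reference, here is roughly what that leftover looked like and what removing it involved. This is only a sketch against the older client API, where a Suggestion(...) call in the index definition enables the feature for a field (the exact signature varied between versions); the index and document names are hypothetical.

```csharp
using System.Linq;
using Raven.Client.Indexes;

public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical index - the interesting part is the single Suggestion(...) call.
public class Users_Search : AbstractIndexCreationTask<User>
{
    public Users_Search()
    {
        Map = users => from user in users
                       select new { user.Name };

        // The leftover: this asks RavenDB to maintain suggestion data for the
        // Name field, work that is redone (on a single thread) whenever the
        // affected documents are re-indexed. Deleting this line and deploying
        // the updated index definition is what "removing the suggestions"
        // amounted to.
        Suggestion(x => x.Name);
    }
}
```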

More posts in "Production postmortem" series:

  1. (07 Jun 2019) Printer out of paper and the RavenDB hang
  2. (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
  3. (25 Dec 2018) Handled errors and the curse of recursive error handling
  4. (23 Nov 2018) The ARM is killing me
  5. (22 Feb 2018) The unavailable Linux server
  6. (06 Dec 2017) data corruption, a view from INSIDE the sausage
  7. (01 Dec 2017) The random high CPU
  8. (07 Aug 2017) 30% boost with a single line change
  9. (04 Aug 2017) The case of 99.99% percentile
  10. (02 Aug 2017) The lightly loaded trashing server
  11. (23 Aug 2016) The insidious cost of managed memory
  12. (05 Feb 2016) A null reference in our abstraction
  13. (27 Jan 2016) The Razor Suicide
  14. (13 Nov 2015) The case of the “it is slow on that machine (only)”
  15. (21 Oct 2015) The case of the slow index rebuild
  16. (22 Sep 2015) The case of the Unicode Poo
  17. (03 Sep 2015) The industry at large
  18. (01 Sep 2015) The case of the lying configuration file
  19. (31 Aug 2015) The case of the memory eater and high load
  20. (14 Aug 2015) The case of the man in the middle
  21. (05 Aug 2015) Reading the errors
  22. (29 Jul 2015) The evil licensing code
  23. (23 Jul 2015) The case of the native memory leak
  24. (16 Jul 2015) The case of the intransigent new database
  25. (13 Jul 2015) The case of the hung over server
  26. (09 Jul 2015) The case of the infected cluster