Production postmortem: The lightly loaded thrashing server


We got a puzzle. A particular customer was seeing very high latency in certain operations on a fairly frequent basis. The problem is that when this happened, the server was practically idle, serving around 5% of the usual requests/sec. What was even stranger was that during the times when we reached the peak requests/sec, we would see no such slowdowns. The behavior was annoyingly consistent: no slowdown at all during high load, but after a period of relatively light load, the server would appear to choke.

That one took a lot of time to figure out, because it was so strange. The immediate cause was pretty simple to find: the server was busy paging a lot of data into memory. But why would it need to do this? The server was just sitting there doing nothing much, yet it was thrashing like crazy, and that was affecting the entire system.

I’ll spare you the investigation, because it was mostly grunt work and frustration, but the sequence of events, as we pieced it together, was something like this:

  • The system is making heavy use of caching, with a cache duration set to 15 minutes or so. Most pages would hit the cache first, and on a miss, generate the page and save it back. The cached documents were set up with the RavenDB expiration bundle.
  • During periods of high activity, we’d typically have very few cache expirations (because we kept using the cached data) and we’d fill up the cache quite heavily (the cache db was around 100GB or so).
  • That would work just fine and rapidly be able to serve a high number of requests.
  • And then came the idle period…
  • During that time, other work (by a different process) was going on on the server, which we believe gave the OS reason to page the now unused memory to disk.

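The caching pattern described above is a plain cache-aside setup with a fixed TTL. Here is a minimal, hypothetical sketch of it in Python (the real system stored cached pages as RavenDB documents and relied on the expiration bundle to delete them; the dictionary, `render_page`, and `get_page` names here are illustrative only):

```python
import time

# Hypothetical in-memory stand-in for the cache database.
# In the actual system this was a ~100GB RavenDB database.
cache = {}
CACHE_DURATION = 15 * 60  # 15 minutes, as in the post

def render_page(key):
    # Placeholder for the expensive page-generation work.
    return f"rendered:{key}"

def get_page(key):
    entry = cache.get(key)
    if entry is not None and time.time() - entry["stored_at"] < CACHE_DURATION:
        return entry["value"]  # cache hit: serve the saved copy
    # Cache miss (or expired): generate the page and save it back.
    value = render_page(key)
    cache[key] = {"value": value, "stored_at": time.time()}
    return value
```

Note that in this sketch the expired entry is simply overwritten on the next read; in the real system, deletion was the job of a separate background expiration task, which is exactly where the trouble started.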
So far, everything went as predicted, but then something happened. The expiration timer fired, and we now had a lot of items that needed to be expired. RavenDB expiration is coarse, and it runs every few minutes, so each run had a lot of stuff to delete. Most of it was on disk, and we needed to access all of it in order to delete it. That caused us to thrash, affecting overall server performance.

As long as we were active, we wouldn’t expire so much at once, and we had a lot more of the db in memory, so the problem wasn’t apparent.

The solution was to remove the expiration usage and handle the cache invalidation in the client: when you fetch a cached value, you check its age and then apply a policy decision on whether you want to update it. This actually turned out to be a great feature in general for that particular customer, since they had a lot of data that could effectively be cached for much longer periods, and this gave them the ability to express that policy.
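The client-side invalidation described above can be sketched as follows. This is a minimal illustration, not the customer's actual code: the `get_with_policy` helper, the dictionary cache, and the `max_age` parameter are all assumed names. The key difference from the previous design is that nothing is ever deleted in bulk by a background job; each reader decides for itself, per call, how stale is too stale:

```python
import time

def get_with_policy(cache, key, generate, max_age):
    """Fetch from the cache, applying an age-based policy on read.

    `max_age` is per-call, so data that rarely changes can use a much
    longer window than data that must be fresh. (Illustrative sketch,
    not a RavenDB API.)
    """
    entry = cache.get(key)
    now = time.time()
    if entry is not None and now - entry["stored_at"] < max_age:
        return entry["value"]  # fresh enough for this caller's policy
    # Stale or missing: regenerate and save back, overwriting in place.
    value = generate()
    cache[key] = {"value": value, "stored_at": now}
    return value
```

Because the staleness window travels with the read instead of living on the document, there is no mass-expiration sweep to page a cold 100GB database back into memory, and long-lived data can simply be read with a larger `max_age`.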

More posts in "Production postmortem" series:

  1. (07 Apr 2025) The race condition in the interlock
  2. (12 Dec 2023) The Spawn of Denial of Service
  3. (24 Jul 2023) The dog ate my request
  4. (03 Jul 2023) ENOMEM when trying to free memory
  5. (27 Jan 2023) The server ate all my memory
  6. (23 Jan 2023) The big server that couldn’t handle the load
  7. (16 Jan 2023) The heisenbug server
  8. (03 Oct 2022) Do you trust this server?
  9. (15 Sep 2022) The missed indexing reference
  10. (05 Aug 2022) The allocating query
  11. (22 Jul 2022) Efficiency all the way to Out of Memory error
  12. (18 Jul 2022) Broken networks and compressed streams
  13. (13 Jul 2022) Your math is wrong, recursion doesn’t work this way
  14. (12 Jul 2022) The data corruption in the node.js stack
  15. (11 Jul 2022) Out of memory on a clear sky
  16. (29 Apr 2022) Deduplicating replication speed
  17. (25 Apr 2022) The network latency and the I/O spikes
  18. (22 Apr 2022) The encrypted database that was too big to replicate
  19. (20 Apr 2022) Misleading security and other production snafus
  20. (03 Jan 2022) An error on the first act will lead to data corruption on the second act…
  21. (13 Dec 2021) The memory leak that only happened on Linux
  22. (17 Sep 2021) The Guinness record for page faults & high CPU
  23. (07 Jan 2021) The file system limitation
  24. (23 Mar 2020) high CPU when there is little work to be done
  25. (21 Feb 2020) The self signed certificate that couldn’t
  26. (31 Jan 2020) The slow slowdown of large systems
  27. (07 Jun 2019) Printer out of paper and the RavenDB hang
  28. (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
  29. (25 Dec 2018) Handled errors and the curse of recursive error handling
  30. (23 Nov 2018) The ARM is killing me
  31. (22 Feb 2018) The unavailable Linux server
  32. (06 Dec 2017) data corruption, a view from INSIDE the sausage
  33. (01 Dec 2017) The random high CPU
  34. (07 Aug 2017) 30% boost with a single line change
  35. (04 Aug 2017) The case of 99.99% percentile
  36. (02 Aug 2017) The lightly loaded thrashing server
  37. (23 Aug 2016) The insidious cost of managed memory
  38. (05 Feb 2016) A null reference in our abstraction
  39. (27 Jan 2016) The Razor Suicide
  40. (13 Nov 2015) The case of the “it is slow on that machine (only)”
  41. (21 Oct 2015) The case of the slow index rebuild
  42. (22 Sep 2015) The case of the Unicode Poo
  43. (03 Sep 2015) The industry at large
  44. (01 Sep 2015) The case of the lying configuration file
  45. (31 Aug 2015) The case of the memory eater and high load
  46. (14 Aug 2015) The case of the man in the middle
  47. (05 Aug 2015) Reading the errors
  48. (29 Jul 2015) The evil licensing code
  49. (23 Jul 2015) The case of the native memory leak
  50. (16 Jul 2015) The case of the intransigent new database
  51. (13 Jul 2015) The case of the hung over server
  52. (09 Jul 2015) The case of the infected cluster