Production postmortem: Out of memory on a clear sky


A customer opened a support call telling us that they had reached the scaling limits of RavenDB. Given that they had provisioned a pretty big machine specifically to handle the load they were expecting, they were (rightly) upset about that.

A short back-and-forth revealed that RavenDB had started to fail shortly after they added a new customer to their system. And by fail I mean that it started throwing OutOfMemoryException in certain places. The system was not under load, and there were no other indications of stress. The machine had plenty of memory available, yet critical functions inside RavenDB were failing with out-of-memory errors.

We looked at the actual error and found this log message:

Raven.Client.Exceptions.Database.DatabaseLoadFailureException: Failed to start database orders-21
At /data/global/ravenData/Databases/orders-21
 ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartCore()
   at Raven.Server.Utils.PoolOfThreads.LongRunning(Action`1 action, Object state, String name) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Utils\PoolOfThreads.cs:line 91
   at Raven.Server.Documents.TransactionOperationsMerger.Start() in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\TransactionOperationsMerger.cs:line 76
   at Raven.Server.Documents.DocumentDatabase.Initialize(InitializeOptions options, Nullable`1 wakeup) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\DocumentDatabase.cs:line 388
   at Raven.Server.Documents.DatabasesLandlord.CreateDocumentsStorage(StringSegment databaseName, RavenConfiguration config, Nullable`1 wakeup) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\DatabasesLandlord.cs:line 826 

This is quite an interesting error. To start with, this is us failing to load a database because we couldn’t spawn the thread that handles transaction merging. That is bad, but why did it happen?

It turns out that .NET considers only a single failure scenario when a thread fails to start: the system must be out of memory. However, we were running on Linux, and there are other reasons why thread creation can fail there. In particular, there are various limits you can set in your environment that cap the number of threads a process can create.

There are global knobs that you should look at first, such as these:

  • /proc/sys/kernel/threads-max
  • /proc/sys/kernel/pid_max
  • /proc/sys/vm/max_map_count

Any of these can act as the limit. There are also ways to set such limits on a per-process basis.
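On a live system you can inspect these knobs (and the per-process cap) directly. A quick check might look like this, assuming a typical Linux box:

```shell
# Global kernel limits that can cap thread creation
cat /proc/sys/kernel/threads-max   # max threads, system-wide
cat /proc/sys/kernel/pid_max       # max PIDs (each thread consumes a PID)
cat /proc/sys/vm/max_map_count     # max memory mappings per process
                                   # (each thread stack adds mappings)

# Per-user/per-process cap on processes and threads
ulimit -u
```

Any of these being low relative to your workload can surface as a failed thread start, which .NET then reports as an OutOfMemoryException.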

There is also a per user setting, which is controlled via:

/etc/systemd/logind.conf: UserTasksMax

The easiest way to figure out what is going on is to look at the kernel log from that time. Here is what we found:

a-orders-srv kernel: cgroup: fork rejected by pids controller in /system.slice/ravendb.service

That made it obvious where the problem was: the ravendb.service file didn’t set TasksMax, which meant it was set to 4915 (probably computed automatically by the system from some heuristic).

Once the number of databases, and the operations running against them, grew past a certain point, we hit that limit and started failing. Not a fun place to be, but at least it is easy to fix.
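The fix is a one-line change to the systemd unit. A sketch, assuming a stock systemd setup (the drop-in mechanism and `systemctl` invocations below are the standard way to do this, not commands taken from the customer's system):

```shell
# Open a drop-in override for the service
# (or edit ravendb.service directly)
sudo systemctl edit ravendb.service
# In the editor, add:
#   [Service]
#   TasksMax=infinity        # or a generous explicit number;
#                            # TasksMax counts threads as well as processes

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ravendb.service

# Verify the effective limit and the current task count
systemctl show -p TasksMax ravendb.service
systemctl status ravendb.service | grep Tasks
```

Checking `systemctl status` for the `Tasks:` line is also a quick way to spot this problem before it bites: if the current count is close to the limit, the next thread creation is going to fail.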

I wrote this post specifically so it will be easy to find via Google in the future. I also opened an issue to get a better error message in this scenario.

More posts in "Production postmortem" series:

  1. (05 Aug 2022) The allocating query
  2. (22 Jul 2022) Efficiency all the way to Out of Memory error
  3. (18 Jul 2022) Broken networks and compressed streams
  4. (13 Jul 2022) Your math is wrong, recursion doesn’t work this way
  5. (12 Jul 2022) The data corruption in the node.js stack
  6. (11 Jul 2022) Out of memory on a clear sky
  7. (29 Apr 2022) Deduplicating replication speed
  8. (25 Apr 2022) The network latency and the I/O spikes
  9. (22 Apr 2022) The encrypted database that was too big to replicate
  10. (20 Apr 2022) Misleading security and other production snafus
  11. (03 Jan 2022) An error on the first act will lead to data corruption on the second act…
  12. (13 Dec 2021) The memory leak that only happened on Linux
  13. (17 Sep 2021) The Guinness record for page faults & high CPU
  14. (07 Jan 2021) The file system limitation
  15. (23 Mar 2020) high CPU when there is little work to be done
  16. (21 Feb 2020) The self signed certificate that couldn’t
  17. (31 Jan 2020) The slow slowdown of large systems
  18. (07 Jun 2019) Printer out of paper and the RavenDB hang
  19. (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
  20. (25 Dec 2018) Handled errors and the curse of recursive error handling
  21. (23 Nov 2018) The ARM is killing me
  22. (22 Feb 2018) The unavailable Linux server
  23. (06 Dec 2017) data corruption, a view from INSIDE the sausage
  24. (01 Dec 2017) The random high CPU
  25. (07 Aug 2017) 30% boost with a single line change
  26. (04 Aug 2017) The case of 99.99% percentile
  27. (02 Aug 2017) The lightly loaded trashing server
  28. (23 Aug 2016) The insidious cost of managed memory
  29. (05 Feb 2016) A null reference in our abstraction
  30. (27 Jan 2016) The Razor Suicide
  31. (13 Nov 2015) The case of the “it is slow on that machine (only)”
  32. (21 Oct 2015) The case of the slow index rebuild
  33. (22 Sep 2015) The case of the Unicode Poo
  34. (03 Sep 2015) The industry at large
  35. (01 Sep 2015) The case of the lying configuration file
  36. (31 Aug 2015) The case of the memory eater and high load
  37. (14 Aug 2015) The case of the man in the middle
  38. (05 Aug 2015) Reading the errors
  39. (29 Jul 2015) The evil licensing code
  40. (23 Jul 2015) The case of the native memory leak
  41. (16 Jul 2015) The case of the intransigent new database
  42. (13 Jul 2015) The case of the hung over server
  43. (09 Jul 2015) The case of the infected cluster