Production postmortem: Reading the errors


One of the hardest practices to learn as a developer is to actually sit down and read the information that you have. Ever since we became a product company, we have had to deal with partial information, because some data wasn’t captured, or because it isn’t possible to capture more when the system is running in a production environment.

That means that we have to make the most of the information we have available. Here are a few recent cases of having information, and ignoring it. A customer complained that a query was timing out, and provided the following error:

{"Url":"/databases/Events/streams/query/
EventsByTimeStampAndSequenceIndex?&start=9134999&sort=EventTimeStamp&sort=EventSequence_Range&SortHint-
EventTimeStamp=String&SortHint-EventSequence_Range=Long","Error":"System.OperationCanceledException: The operation 
was canceled.\r\n   at System.Threading.CancellationToken.ThrowIfCancellationRequested()\r\n   at Raven.Database.
Indexing.Index.IndexQueryOperation.<Query>d__5d.MoveNext()\r\n   at Raven.Database.Util.ActiveEnumerable`1..ctor(
IEnumerable`1 enumerable)\r\n   at Raven.Database.Actions.QueryActions.DatabaseQueryOperation.Init()\r\n   at Raven
.Database.Server.Controllers.StreamsController.SteamQueryGet(String id)\r\n   at lambda_method(Closure , Object , 
Object[] )\r\n   at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<
GetExecutor>b__9(Object instance, Object[] methodParameters)\r\n   at System.Web.Http.Controllers.
ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, 
CancellationToken cancellationToken)\r\n--- End of stack trace from previous location where exception was thrown 
---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.
CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Runtime.
CompilerServices.TaskAwaiter`1.GetResult()\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<
InvokeActionAsyncCore>d__0.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown 
---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.
CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.
HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter`1.
GetResult()\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\r\n--- End of 
stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.
ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task 
task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task 
task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\r\n   at System.Web.Http.Controllers.
ExceptionFilterResult.<ExecuteAsync>d__0.MoveNext()"}

As you can see, that is a big chunk of text, and it requires careful parsing. I’m actually quite proud of this: even though the error is obtuse, it gives us all the information we need to figure out what the problem is.

In particular, there are two pieces of information that we need to see here:

  • OperationCanceledException is thrown when we have a timeout processing the query on the server side.
  • start=9134999 is a request to do deep paging: skip about ten million records, and only then start reading.

Those two combined tell us that the issue is that we are doing deep paging, which causes us to time out internally before we start sending any data.

But why are we timing out on deep paging? What is expensive about that?

Turns out that RavenDB de-duplicates results during queries. So if you got events/1 in page #1, you won’t see it in page #2, even if the index has entries for that document that fall in page #2. If you care to know more, the issue is related to fanout: one document having many index entries pointing to it. But the major issue here is that we need to scan about 10 million docs to avoid duplicates before we can return any value to the user. There is a flag you can send that would avoid this, but it wasn’t used in this case.
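To make the cost concrete, here is a minimal sketch (in Python, not RavenDB’s actual code) of de-duplicating a paged query over a fanout index. The key point: to know which distinct documents belong on a given page, the server must walk every index entry before the requested start point, so a huge start value means a huge scan before the first result is sent.

```python
def page_of_documents(index_entries, start, page_size):
    """Return one page of distinct documents from a stream of index entries.

    index_entries: iterable of document ids, possibly with repeats (fanout).
    start: how many *distinct* documents to skip before the page begins.
    """
    seen = set()
    skipped = 0
    page = []
    for doc_id in index_entries:
        if doc_id in seen:
            continue          # duplicate entry for a doc we already handled
        seen.add(doc_id)
        if skipped < start:
            skipped += 1      # still paying for everything before `start`
            continue
        page.append(doc_id)
        if len(page) == page_size:
            break
    return page

# events/1 and events/2 each have two index entries (fanout); the
# duplicates are suppressed, but only after scanning past them.
entries = ["events/1", "events/2", "events/1", "events/3", "events/2", "events/4"]
print(page_of_documents(entries, start=2, page_size=2))  # ['events/3', 'events/4']
```

With start=9134999, the loop above has to classify millions of entries as "skip" or "duplicate" before a single document reaches the client, which is exactly where the server-side timeout fires.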

Another issue that was solved by reading the message from the customer was:

Message
Version store out of memory (cleanup already attempted)

Exception
Microsoft.Isam.Esent.Interop.EsentVersionStoreOutOfMemoryException: Version store out of memory (cleanup already attempted)
   at Raven.Database.Storage.Esent.StorageActions.DocumentStorageActions.RemoveAllBefore(String name, Etag etag) in c:\Builds\RavenDB-Stable-3.0\Raven.Database\Storage\Esent\StorageActions\Lists.cs:line 74
   at Raven.Database.Smuggler.SmugglerEmbeddedDatabaseOperations.<>c__DisplayClass12.<PurgeTombstones>b__11(IStorageActionsAccessor accessor) in c:\Builds\RavenDB-Stable-3.0\Raven.Database\Smuggler\SmugglerEmbeddedDatabaseOperations.cs:line 219
   at Raven.Storage.Esent.TransactionalStorage.ExecuteBatch(Action`1 action, EsentTransactionContext transactionContext) in c:\Builds\RavenDB-Stable-3.0\Raven.Database\Storage\Esent\TransactionalStorage.cs:line 799
   at Raven.Storage.Esent.TransactionalStorage.Batch(Action`1 action) in c:\Builds\RavenDB-Stable-3.0\Raven.Database\Storage\Esent\TransactionalStorage.cs:line 778
   at Raven.Abstractions.Smuggler.SmugglerDatabaseApiBase.<ExportData>d__2.MoveNext() in c:\Builds\RavenDB-Stable-3.0\Raven.Abstractions\Smuggler\SmugglerDatabaseApiBase.cs:line 187
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)

Reported
25 minutes ago (08/02/15, 9:43am)

Level
Error

Version store out of memory happens when the size of the uncommitted transaction is too large, and we error out. Usually that indicates some unbounded operation that wasn’t handled well.

Those are almost always batch operations of some kind that do various types of cleanup. To handle them, we pulse the transaction: effectively committing the transaction and starting a new one. That frees the uncommitted values.
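The pulsing idea can be sketched like this (a Python sketch with invented names, not RavenDB’s actual implementation): instead of deleting everything in one transaction, commit every N operations so the uncommitted work, and therefore the version store, stays bounded.

```python
class Transaction:
    """Toy stand-in for a storage transaction that tracks uncommitted work."""

    def __init__(self):
        self.pending = 0      # uncommitted operations (the "version store" cost)
        self.commits = 0

    def delete(self, item):
        self.pending += 1     # each delete grows the uncommitted state

    def commit(self):
        self.commits += 1
        self.pending = 0      # committing releases the uncommitted values


def purge_with_pulsing(tx, items, pulse_every=1000):
    """Unbounded cleanup that 'pulses': commits every `pulse_every` deletes."""
    for item in items:
        tx.delete(item)
        if tx.pending >= pulse_every:
            tx.commit()       # the pulse: commit and continue in a fresh tx
    if tx.pending:
        tx.commit()           # flush whatever is left


tx = Transaction()
purge_with_pulsing(tx, range(2500), pulse_every=1000)
print(tx.commits, tx.pending)  # 3 0
```

Without the pulse, the same purge over millions of items would hold millions of uncommitted deletes in one transaction, which is exactly the "version store out of memory" failure mode above.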

The problem was that this method had this behavior:

(image: the source of the method, including its call to MaybePulseTransaction)

So the investigation was focused on why MaybePulseTransaction didn’t work properly. There are some heuristics there to avoid being too costly, so this was very strange.

Can you see the error? We were looking at the wrong overload: this is the one that takes a time, while the one that was actually called takes an etag. Indeed, in the version the customer had, that call wasn’t there; in the current version, it had been added.
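The trap is easy to reproduce. Here is a sketch (hypothetical names; in C# these would be two overloads of the same method, distinguished only by parameter type): one variant pulses the transaction, the other doesn’t, and a stack trace that just says "RemoveAllBefore" doesn’t force you to check which one actually ran.

```python
class Tx:
    """Toy transaction: counts uncommitted deletes and pulses."""

    def __init__(self):
        self.pending = 0
        self.pulses = 0

    def delete(self, item):
        self.pending += 1

    def maybe_pulse(self):
        if self.pending >= 1000:      # heuristic: only pulse when it's worth it
            self.pulses += 1
            self.pending = 0


def list_items(name, count):
    return [f"{name}/{i}" for i in range(int(count))]


def remove_all_before_time(tx, name, cutoff_time):
    # The overload the team was reading: it pulses, so it behaves well.
    for item in list_items(name, cutoff_time):
        tx.delete(item)
        tx.maybe_pulse()


def remove_all_before_etag(tx, name, etag):
    # The overload that actually ran: no pulsing, one ever-growing transaction.
    for item in list_items(name, etag):
        tx.delete(item)


good, bad = Tx(), Tx()
remove_all_before_time(good, "tombstones", 5000)
remove_all_before_etag(bad, "tombstones", 5000)
print(good.pending, bad.pending)  # 0 5000
```

Same name, same cleanup, radically different memory behavior, which is why reading the stack trace carefully (it names the `Etag` overload in `Lists.cs`) mattered here.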

Knowing how to read errors and logs, and actually paying attention to what they are saying, is crucial when it is time to fix a problem.

More posts in "Production postmortem" series:

  1. (12 Dec 2023) The Spawn of Denial of Service
  2. (24 Jul 2023) The dog ate my request
  3. (03 Jul 2023) ENOMEM when trying to free memory
  4. (27 Jan 2023) The server ate all my memory
  5. (23 Jan 2023) The big server that couldn’t handle the load
  6. (16 Jan 2023) The heisenbug server
  7. (03 Oct 2022) Do you trust this server?
  8. (15 Sep 2022) The missed indexing reference
  9. (05 Aug 2022) The allocating query
  10. (22 Jul 2022) Efficiency all the way to Out of Memory error
  11. (18 Jul 2022) Broken networks and compressed streams
  12. (13 Jul 2022) Your math is wrong, recursion doesn’t work this way
  13. (12 Jul 2022) The data corruption in the node.js stack
  14. (11 Jul 2022) Out of memory on a clear sky
  15. (29 Apr 2022) Deduplicating replication speed
  16. (25 Apr 2022) The network latency and the I/O spikes
  17. (22 Apr 2022) The encrypted database that was too big to replicate
  18. (20 Apr 2022) Misleading security and other production snafus
  19. (03 Jan 2022) An error on the first act will lead to data corruption on the second act…
  20. (13 Dec 2021) The memory leak that only happened on Linux
  21. (17 Sep 2021) The Guinness record for page faults & high CPU
  22. (07 Jan 2021) The file system limitation
  23. (23 Mar 2020) high CPU when there is little work to be done
  24. (21 Feb 2020) The self signed certificate that couldn’t
  25. (31 Jan 2020) The slow slowdown of large systems
  26. (07 Jun 2019) Printer out of paper and the RavenDB hang
  27. (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
  28. (25 Dec 2018) Handled errors and the curse of recursive error handling
  29. (23 Nov 2018) The ARM is killing me
  30. (22 Feb 2018) The unavailable Linux server
  31. (06 Dec 2017) data corruption, a view from INSIDE the sausage
  32. (01 Dec 2017) The random high CPU
  33. (07 Aug 2017) 30% boost with a single line change
  34. (04 Aug 2017) The case of 99.99% percentile
  35. (02 Aug 2017) The lightly loaded trashing server
  36. (23 Aug 2016) The insidious cost of managed memory
  37. (05 Feb 2016) A null reference in our abstraction
  38. (27 Jan 2016) The Razor Suicide
  39. (13 Nov 2015) The case of the “it is slow on that machine (only)”
  40. (21 Oct 2015) The case of the slow index rebuild
  41. (22 Sep 2015) The case of the Unicode Poo
  42. (03 Sep 2015) The industry at large
  43. (01 Sep 2015) The case of the lying configuration file
  44. (31 Aug 2015) The case of the memory eater and high load
  45. (14 Aug 2015) The case of the man in the middle
  46. (05 Aug 2015) Reading the errors
  47. (29 Jul 2015) The evil licensing code
  48. (23 Jul 2015) The case of the native memory leak
  49. (16 Jul 2015) The case of the intransigent new database
  50. (13 Jul 2015) The case of the hung over server
  51. (09 Jul 2015) The case of the infected cluster