Production postmortem: The industry at large

time to read 1 min | 100 words

The following is a really good study on real world production crashes:

Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems

It makes for fascinating reading, especially since they include the details of the root cause of some of the errors. I wasn’t sure whether to cringe or sympathize.


More posts in "Production postmortem" series:

  1. (07 Jan 2021) The file system limitation
  2. (23 Mar 2020) high CPU when there is little work to be done
  3. (21 Feb 2020) The self signed certificate that couldn’t
  4. (31 Jan 2020) The slow slowdown of large systems
  5. (07 Jun 2019) Printer out of paper and the RavenDB hang
  6. (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
  7. (25 Dec 2018) Handled errors and the curse of recursive error handling
  8. (23 Nov 2018) The ARM is killing me
  9. (22 Feb 2018) The unavailable Linux server
  10. (06 Dec 2017) data corruption, a view from INSIDE the sausage
  11. (01 Dec 2017) The random high CPU
  12. (07 Aug 2017) 30% boost with a single line change
  13. (04 Aug 2017) The case of 99.99% percentile
  14. (02 Aug 2017) The lightly loaded trashing server
  15. (23 Aug 2016) The insidious cost of managed memory
  16. (05 Feb 2016) A null reference in our abstraction
  17. (27 Jan 2016) The Razor Suicide
  18. (13 Nov 2015) The case of the “it is slow on that machine (only)”
  19. (21 Oct 2015) The case of the slow index rebuild
  20. (22 Sep 2015) The case of the Unicode Poo
  21. (03 Sep 2015) The industry at large
  22. (01 Sep 2015) The case of the lying configuration file
  23. (31 Aug 2015) The case of the memory eater and high load
  24. (14 Aug 2015) The case of the man in the middle
  25. (05 Aug 2015) Reading the errors
  26. (29 Jul 2015) The evil licensing code
  27. (23 Jul 2015) The case of the native memory leak
  28. (16 Jul 2015) The case of the intransigent new database
  29. (13 Jul 2015) The case of the hung over server
  30. (09 Jul 2015) The case of the infected cluster