Challenge: The race condition in the TCP stack


Occasionally, one of our tests hangs. Everything seems to be hunky dory, but it just freezes and does not complete. This is a new piece of code, and thus it is suspicious until proven otherwise, but an exhaustive review of it looked fine. It took over two days of effort to narrow it down, but eventually we managed to point the finger directly at this line of code:

[Image: the line of code in question]

In certain cases, this line would simply not read anything on the server. Even though the client most definitely sent the data. Now, given that TCP is being used, dropped packets might be expected. But we are actually testing on the loopback device, which I expect to be reliable.
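The original line is shown only as an image in the post, so as an illustration (an assumption about its shape, not the author's actual line), the kind of server-side read being described is a blocking read on the connection's stream:

```csharp
// Hypothetical illustration, not the line from the post (which is shown only as an image).
// A server-side read like this blocks until the client's bytes arrive or the connection
// is closed; if the data never shows up, it simply waits forever.
using System.Net.Sockets;

static class ServerSide
{
    public static int ReadRequest(NetworkStream stream, byte[] buffer)
    {
        return stream.Read(buffer, 0, buffer.Length);
    }
}
```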

We spent a lot of time investigating this, ending up with a very high degree of certainty that the problem was in the TCP stack somewhere. Somehow, on the loopback device, we were losing packets. Not always, and not consistently, but we were absolutely losing packets, which left the server waiting indefinitely for a message the client had already sent.

Now, I’m as arrogant as the next developer, but even I don’t think I found that big a bug in TCP. I’m pretty sure that if it were this broken, I would have known about it. Besides, TCP is supposed to retransmit lost packets, so even if there were lost packets on the loopback device, we should have recovered from that.

Trying to figure out what was going on there sucked. It is hard to watch packets on the loopback device in Wireshark, and tracing just told me that a message was sent from the client to the server, but the server never got it.

But we continued, and we ended up with a small reproduction of the issue. Here is the code, and my comments are below:
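The reproduction itself was embedded in the original post and isn't included here. As a rough, hedged sketch only, assuming the shape described in the next paragraph (a TcpListener on the loopback device, a client that sends a line and waits for a reply, and a server that reads that line and replies), such a repro might look something like this; it is not the author's code and is not claimed to trigger the bug:

```csharp
// Hypothetical sketch only; not the code from the post, and not claimed to reproduce the bug.
// A loopback TcpListener, a client that sends one line and waits for a reply,
// and a server that reads that line and replies.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class Repro
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        var port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Client side: connect over loopback, send a message, wait for the reply.
        var clientTask = Task.Run(() =>
        {
            using (var client = new TcpClient())
            {
                client.Connect(IPAddress.Loopback, port);
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true })
                {
                    writer.WriteLine("hello");
                    Console.WriteLine("client got: " + reader.ReadLine());
                }
            }
        });

        // Server side: accept the connection, read the client's message, send a reply.
        using (var serverSide = listener.AcceptTcpClient())
        using (var stream = serverSide.GetStream())
        using (var reader = new StreamReader(stream))
        using (var writer = new StreamWriter(stream) { AutoFlush = true })
        {
            var line = reader.ReadLine(); // the server-side read that appears to hang
            writer.WriteLine("ack: " + line);
        }

        clientTask.Wait();
        listener.Stop();
    }
}
```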

This code is pretty simple. It starts a TCP server, listens for a connection, and then reads from and writes to the client. Nothing much here, I think you’ll agree.

If you run it, however, it will mostly work, except that sometimes (anywhere between 10 and 500 runs on my machine) it will just hang. I’ll save you some time and let you know that there are no dropped packets; TCP is working properly in this case. But the code just isn’t. What is frustrating is that it mostly works; it takes a lot of work to actually get it to fail.

Can you spot the bug? I’ll continue discussion of this in my next post.

More posts in "Challenge" series:

  1. (01 Jul 2024) Efficient snapshotable state
  2. (13 Oct 2023) Fastest node selection metastable error state–answer
  3. (12 Oct 2023) Fastest node selection metastable error state
  4. (19 Sep 2023) Spot the bug
  5. (04 Jan 2023) what does this code print?
  6. (14 Dec 2022) What does this code print?
  7. (01 Jul 2022) Find the stack smash bug… – answer
  8. (30 Jun 2022) Find the stack smash bug…
  9. (03 Jun 2022) Spot the data corruption
  10. (06 May 2022) Spot the optimization–solution
  11. (05 May 2022) Spot the optimization
  12. (06 Apr 2022) Why is this code broken?
  13. (16 Dec 2021) Find the slow down–answer
  14. (15 Dec 2021) Find the slow down
  15. (03 Nov 2021) The code review bug that gives me nightmares–The fix
  16. (02 Nov 2021) The code review bug that gives me nightmares–the issue
  17. (01 Nov 2021) The code review bug that gives me nightmares
  18. (16 Jun 2021) Detecting livelihood in a distributed cluster
  19. (21 Apr 2020) Generate matching shard id–answer
  20. (20 Apr 2020) Generate matching shard id
  21. (02 Jan 2020) Spot the bug in the stream
  22. (28 Sep 2018) The loop that leaks–Answer
  23. (27 Sep 2018) The loop that leaks
  24. (03 Apr 2018) The invisible concurrency bug–Answer
  25. (02 Apr 2018) The invisible concurrency bug
  26. (31 Jan 2018) Find the bug in the fix–answer
  27. (30 Jan 2018) Find the bug in the fix
  28. (19 Jan 2017) What does this code do?
  29. (26 Jul 2016) The race condition in the TCP stack, answer
  30. (25 Jul 2016) The race condition in the TCP stack
  31. (28 Apr 2015) What is the meaning of this change?
  32. (26 Sep 2013) Spot the bug
  33. (27 May 2013) The problem of locking down tasks…
  34. (17 Oct 2011) Minimum number of round trips
  35. (23 Aug 2011) Recent Comments with Future Posts
  36. (02 Aug 2011) Modifying execution approaches
  37. (29 Apr 2011) Stop the leaks
  38. (23 Dec 2010) This code should never hit production
  39. (17 Dec 2010) Your own ThreadLocal
  40. (03 Dec 2010) Querying relative information with RavenDB
  41. (29 Jun 2010) Find the bug
  42. (23 Jun 2010) Dynamically dynamic
  43. (28 Apr 2010) What killed the application?
  44. (19 Mar 2010) What does this code do?
  45. (04 Mar 2010) Robust enumeration over external code
  46. (16 Feb 2010) Premature optimization, and all of that…
  47. (12 Feb 2010) Efficient querying
  48. (10 Feb 2010) Find the resource leak
  49. (21 Oct 2009) Can you spot the bug?
  50. (18 Oct 2009) Why is this wrong?
  51. (17 Oct 2009) Write the check in comment
  52. (15 Sep 2009) NH Prof Exporting Reports
  53. (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
  54. (01 Sep 2009) Why isn’t select broken?
  55. (06 Aug 2009) Find the bug fixes
  56. (26 May 2009) Find the bug
  57. (14 May 2009) multi threaded test failure
  58. (11 May 2009) The regex that doesn’t match
  59. (24 Mar 2009) probability based selection
  60. (13 Mar 2009) C# Rewriting
  61. (18 Feb 2009) write a self extracting program
  62. (04 Sep 2008) Don't stop with the first DSL abstraction
  63. (02 Aug 2008) What is the problem?
  64. (28 Jul 2008) What does this code do?
  65. (26 Jul 2008) Find the bug fix
  66. (05 Jul 2008) Find the deadlock
  67. (03 Jul 2008) Find the bug
  68. (02 Jul 2008) What is wrong with this code
  69. (05 Jun 2008) why did the tests fail?
  70. (27 May 2008) Striving for better syntax
  71. (13 Apr 2008) calling generics without the generic type
  72. (12 Apr 2008) The directory tree
  73. (24 Mar 2008) Find the version
  74. (21 Jan 2008) Strongly typing weakly typed code
  75. (28 Jun 2007) Windsor Null Object Dependency Facility