Challenge: The race condition in the TCP stack, answer


In my previous post, I discussed a problem with missing data over a TCP connection that happened in a racy manner, only once every few hundred runs. As it turns out, there is a simple way to make the code run into the problem every single time.

The full code for the repro can be found here.

Change these lines:

(image: the lines to change)
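The gist of the change, as the explanation below makes clear, is to have the client send both messages in a single write, so they arrive at the server in a single packet. Here is a minimal sketch of that kind of change, with a placeholder endpoint and placeholder message contents rather than the actual repro code:

```csharp
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

class Client
{
    static async Task Main()
    {
        using var client = new TcpClient();
        await client.ConnectAsync("localhost", 9090); // placeholder endpoint
        using var writer = new StreamWriter(client.GetStream());

        // Before: two separate writes, which usually leave the client as two packets.
        // await writer.WriteAsync("message one\n");
        // await writer.FlushAsync();
        // await writer.WriteAsync("message two\n");
        // await writer.FlushAsync();

        // After: a single write, so both messages arrive in one packet and both
        // end up in the buffer of the server's first StreamReader.
        await writer.WriteAsync("message one\nmessage two\n");
        await writer.FlushAsync();
    }
}
```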

And voila, you will consistently run into the problem. Wait, run that by me again, what is going on here?

As it turns out, the issue is in the server, more specifically, here and here. We use a StreamReader to read the first line from the client, do some processing, and then hand the connection off to the ProcessConnection method, which also uses a StreamReader. More significantly, it uses a different StreamReader.

Why is that significant? Well, because of this, the StreamReader has an internal buffer, 1KB in size by default. So here is what happens in the case above: we send a single packet to the server, and when the first StreamReader reads from the stream, it fills its buffer with both messages. But since there is a line break between them, when we call ReadLineAsync, we actually only get the first one.

Then, when we get to the ProcessConnection method, we have another StreamReader, which also reads from the stream. But the second message has already been read (and is waiting in the first StreamReader's buffer), so we end up waiting for more data from the client, which will never come.
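To make that concrete, here is a sketch of the server-side shape that produces this behavior. The names and signatures are illustrative, not the actual repro code; only the ProcessConnection name comes from the post:

```csharp
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

class Server
{
    static async Task HandleClient(TcpClient client)
    {
        NetworkStream stream = client.GetStream();

        // First StreamReader: a single ReadLineAsync call pulls up to 1KB from the
        // socket, so when both messages arrive in one packet, *both* end up in this
        // reader's buffer, even though only the first line is returned here.
        var headerReader = new StreamReader(stream);
        string firstLine = await headerReader.ReadLineAsync();
        // ... do some processing on firstLine ...

        await ProcessConnection(stream);
    }

    static async Task ProcessConnection(NetworkStream stream)
    {
        // Second StreamReader over the same stream: the second message is no longer
        // in the socket - it is sitting in headerReader's buffer - so this call
        // waits for data that will never arrive.
        var secondReader = new StreamReader(stream);
        string secondLine = await secondReader.ReadLineAsync();
        // ...
    }
}
```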

So how come it sort of works when we do this in two separate calls? Well, it is all about speed. In most cases, when we split it into two separate calls, the server socket holds only the first message when the first StreamReader runs, so the second StreamReader successfully reads the second line. But in some cases, the client manages to be fast enough to send both messages to the server before the server reads them, and voila, we get the same behavior, only much more unpredictable.

The key problem was that it wasn't obvious that we were reading too much from the stream, and until we figured that out, we were looking in completely the wrong direction.
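The post doesn't spell out the fix, but given that diagnosis, one straightforward way out is to create a single StreamReader per connection and pass the reader, rather than the raw stream, to ProcessConnection. Again, this is a sketch of the approach, not the actual code:

```csharp
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

class FixedServer
{
    static async Task HandleClient(TcpClient client)
    {
        // One buffered reader for the whole lifetime of the connection.
        var reader = new StreamReader(client.GetStream());

        string firstLine = await reader.ReadLineAsync();
        // ... do some processing on firstLine ...

        // Hand over the reader, not the raw stream, so nothing that is
        // already buffered gets stranded.
        await ProcessConnection(reader);
    }

    static async Task ProcessConnection(StreamReader reader)
    {
        string secondLine = await reader.ReadLineAsync();
        // ...
    }
}
```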

More posts in "Challenge" series:

  1. (03 Feb 2025) Giving file system developer ulcer
  2. (20 Jan 2025) What does this code do?
  3. (01 Jul 2024) Efficient snapshotable state
  4. (13 Oct 2023) Fastest node selection metastable error state–answer
  5. (12 Oct 2023) Fastest node selection metastable error state
  6. (19 Sep 2023) Spot the bug
  7. (04 Jan 2023) what does this code print?
  8. (14 Dec 2022) What does this code print?
  9. (01 Jul 2022) Find the stack smash bug… – answer
  10. (30 Jun 2022) Find the stack smash bug…
  11. (03 Jun 2022) Spot the data corruption
  12. (06 May 2022) Spot the optimization–solution
  13. (05 May 2022) Spot the optimization
  14. (06 Apr 2022) Why is this code broken?
  15. (16 Dec 2021) Find the slow down–answer
  16. (15 Dec 2021) Find the slow down
  17. (03 Nov 2021) The code review bug that gives me nightmares–The fix
  18. (02 Nov 2021) The code review bug that gives me nightmares–the issue
  19. (01 Nov 2021) The code review bug that gives me nightmares
  20. (16 Jun 2021) Detecting livelihood in a distributed cluster
  21. (21 Apr 2020) Generate matching shard id–answer
  22. (20 Apr 2020) Generate matching shard id
  23. (02 Jan 2020) Spot the bug in the stream
  24. (28 Sep 2018) The loop that leaks–Answer
  25. (27 Sep 2018) The loop that leaks
  26. (03 Apr 2018) The invisible concurrency bug–Answer
  27. (02 Apr 2018) The invisible concurrency bug
  28. (31 Jan 2018) Find the bug in the fix–answer
  29. (30 Jan 2018) Find the bug in the fix
  30. (19 Jan 2017) What does this code do?
  31. (26 Jul 2016) The race condition in the TCP stack, answer
  32. (25 Jul 2016) The race condition in the TCP stack
  33. (28 Apr 2015) What is the meaning of this change?
  34. (26 Sep 2013) Spot the bug
  35. (27 May 2013) The problem of locking down tasks…
  36. (17 Oct 2011) Minimum number of round trips
  37. (23 Aug 2011) Recent Comments with Future Posts
  38. (02 Aug 2011) Modifying execution approaches
  39. (29 Apr 2011) Stop the leaks
  40. (23 Dec 2010) This code should never hit production
  41. (17 Dec 2010) Your own ThreadLocal
  42. (03 Dec 2010) Querying relative information with RavenDB
  43. (29 Jun 2010) Find the bug
  44. (23 Jun 2010) Dynamically dynamic
  45. (28 Apr 2010) What killed the application?
  46. (19 Mar 2010) What does this code do?
  47. (04 Mar 2010) Robust enumeration over external code
  48. (16 Feb 2010) Premature optimization, and all of that…
  49. (12 Feb 2010) Efficient querying
  50. (10 Feb 2010) Find the resource leak
  51. (21 Oct 2009) Can you spot the bug?
  52. (18 Oct 2009) Why is this wrong?
  53. (17 Oct 2009) Write the check in comment
  54. (15 Sep 2009) NH Prof Exporting Reports
  55. (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
  56. (01 Sep 2009) Why isn’t select broken?
  57. (06 Aug 2009) Find the bug fixes
  58. (26 May 2009) Find the bug
  59. (14 May 2009) multi threaded test failure
  60. (11 May 2009) The regex that doesn’t match
  61. (24 Mar 2009) probability based selection
  62. (13 Mar 2009) C# Rewriting
  63. (18 Feb 2009) write a self extracting program
  64. (04 Sep 2008) Don't stop with the first DSL abstraction
  65. (02 Aug 2008) What is the problem?
  66. (28 Jul 2008) What does this code do?
  67. (26 Jul 2008) Find the bug fix
  68. (05 Jul 2008) Find the deadlock
  69. (03 Jul 2008) Find the bug
  70. (02 Jul 2008) What is wrong with this code
  71. (05 Jun 2008) why did the tests fail?
  72. (27 May 2008) Striving for better syntax
  73. (13 Apr 2008) calling generics without the generic type
  74. (12 Apr 2008) The directory tree
  75. (24 Mar 2008) Find the version
  76. (21 Jan 2008) Strongly typing weakly typed code
  77. (28 Jun 2007) Windsor Null Object Dependency Facility