Challenge: Fastest node selection metastable error state
Side note: the current situation in Israel right now is bad. I'm writing this blog post as a form of escapism, so I can talk about something that makes sense and follows logic and reason. I won't otherwise comment on the situation here.
Consider the following scenario. We have a bunch of servers and clients. The clients want to send requests for processing to the fastest node available to them. But the algorithm that was implemented has an issue. Can you see what it is?
To simplify things, we are going to assume that the work that is being done for each request is the same, so we don’t need to worry about different request workloads.
The idea is that each client node will find the fastest node (usually meaning the nearest one), and if there is enough load on the server that it starts throwing errors, the client will try to find another one. This system successfully spread the load across all servers, until one day the entire system went down. And then it stayed down.
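A minimal sketch of the approach described, assuming Python asyncio; the server URLs, timings, and method bodies are illustrative assumptions rather than the actual implementation (the names _SelectFastest and self.fastest follow the discussion in the comments):

```python
import asyncio

SERVERS = ["http://srv-a:8080", "http://srv-b:8080", "http://srv-c:8080"]

class Router:
    def __init__(self, servers):
        self.servers = servers
        self.fastest = None  # cached selection, filled on first use

    async def _probe(self, url):
        # stand-in for pinging the server; real code would issue an HTTP request
        await asyncio.sleep(0.01)
        return url

    async def _SelectFastest(self):
        # probe every server concurrently and take the first one to answer
        probes = [asyncio.ensure_future(self._probe(u)) for u in self.servers]
        done, _pending = await asyncio.wait(
            probes, return_when=asyncio.FIRST_COMPLETED)
        return done.pop().result()

    async def send_request(self, payload):
        if self.fastest is None:
            self.fastest = asyncio.ensure_future(self._SelectFastest())
        url = await self.fastest
        try:
            return await self._dispatch(url, payload)
        except ConnectionError:
            # the node misbehaved: select again and retry once
            self.fastest = asyncio.ensure_future(self._SelectFastest())
            url = await self.fastest
            return await self._dispatch(url, payload)

    async def _dispatch(self, url, payload):
        await asyncio.sleep(0.01)  # stand-in for the actual call
        return (url, payload)
```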
Can you figure out what the issue is?
More posts in "Challenge" series:
- (01 Jul 2024) Efficient snapshotable state
- (13 Oct 2023) Fastest node selection metastable error state–answer
- (12 Oct 2023) Fastest node selection metastable error state
- (19 Sep 2023) Spot the bug
- (04 Jan 2023) what does this code print?
- (14 Dec 2022) What does this code print?
- (01 Jul 2022) Find the stack smash bug… – answer
- (30 Jun 2022) Find the stack smash bug…
- (03 Jun 2022) Spot the data corruption
- (06 May 2022) Spot the optimization–solution
- (05 May 2022) Spot the optimization
- (06 Apr 2022) Why is this code broken?
- (16 Dec 2021) Find the slow down–answer
- (15 Dec 2021) Find the slow down
- (03 Nov 2021) The code review bug that gives me nightmares–The fix
- (02 Nov 2021) The code review bug that gives me nightmares–the issue
- (01 Nov 2021) The code review bug that gives me nightmares
- (16 Jun 2021) Detecting livelihood in a distributed cluster
- (21 Apr 2020) Generate matching shard id–answer
- (20 Apr 2020) Generate matching shard id
- (02 Jan 2020) Spot the bug in the stream
- (28 Sep 2018) The loop that leaks–Answer
- (27 Sep 2018) The loop that leaks
- (03 Apr 2018) The invisible concurrency bug–Answer
- (02 Apr 2018) The invisible concurrency bug
- (31 Jan 2018) Find the bug in the fix–answer
- (30 Jan 2018) Find the bug in the fix
- (19 Jan 2017) What does this code do?
- (26 Jul 2016) The race condition in the TCP stack, answer
- (25 Jul 2016) The race condition in the TCP stack
- (28 Apr 2015) What is the meaning of this change?
- (26 Sep 2013) Spot the bug
- (27 May 2013) The problem of locking down tasks…
- (17 Oct 2011) Minimum number of round trips
- (23 Aug 2011) Recent Comments with Future Posts
- (02 Aug 2011) Modifying execution approaches
- (29 Apr 2011) Stop the leaks
- (23 Dec 2010) This code should never hit production
- (17 Dec 2010) Your own ThreadLocal
- (03 Dec 2010) Querying relative information with RavenDB
- (29 Jun 2010) Find the bug
- (23 Jun 2010) Dynamically dynamic
- (28 Apr 2010) What killed the application?
- (19 Mar 2010) What does this code do?
- (04 Mar 2010) Robust enumeration over external code
- (16 Feb 2010) Premature optimization, and all of that…
- (12 Feb 2010) Efficient querying
- (10 Feb 2010) Find the resource leak
- (21 Oct 2009) Can you spot the bug?
- (18 Oct 2009) Why is this wrong?
- (17 Oct 2009) Write the check in comment
- (15 Sep 2009) NH Prof Exporting Reports
- (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
- (01 Sep 2009) Why isn’t select broken?
- (06 Aug 2009) Find the bug fixes
- (26 May 2009) Find the bug
- (14 May 2009) multi threaded test failure
- (11 May 2009) The regex that doesn’t match
- (24 Mar 2009) probability based selection
- (13 Mar 2009) C# Rewriting
- (18 Feb 2009) write a self extracting program
- (04 Sep 2008) Don't stop with the first DSL abstraction
- (02 Aug 2008) What is the problem?
- (28 Jul 2008) What does this code do?
- (26 Jul 2008) Find the bug fix
- (05 Jul 2008) Find the deadlock
- (03 Jul 2008) Find the bug
- (02 Jul 2008) What is wrong with this code
- (05 Jun 2008) why did the tests fail?
- (27 May 2008) Striving for better syntax
- (13 Apr 2008) calling generics without the generic type
- (12 Apr 2008) The directory tree
- (24 Mar 2008) Find the version
- (21 Jan 2008) Strongly typing weakly typed code
- (28 Jun 2007) Windsor Null Object Dependency Facility
Comments
Could it be that on line 15 we're not selecting the "next" fastest node? The code is picking a node, which might still be the same one we were using before.
self.fastest is a promise which will only run once, and url = await self.fastest will always return the same value.
Think it may be a failure loop.
If there are no successful responses, SelectFastest gives the first server as the answer.
This first server will be overloaded, and then the request fails, which triggers SelectFastest again, which issues a request per server, thus overloading the entire system again. Possible solutions: a random server as fallback when everybody is down; exponential backoff on error.
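The exponential backoff suggested here can be sketched like so (a full-jitter variant; the base, cap, and attempt count are illustrative):

```python
import random

def backoff_delays(base=0.1, cap=30.0, attempts=6):
    """Full-jitter exponential backoff: before retry n, sleep a random
    amount between 0 and min(cap, base * 2**n), so retries spread out
    instead of stampeding the servers in lockstep."""
    return [random.uniform(0.0, min(cap, base * 2 ** n)) for n in range(attempts)]
```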
I'm with Gerdus - if one server fails, its work will get moved to the others. For every server that goes down, 1/Nth of the load will get spread to the others. If you start with 10 servers and one goes down, only 10% of the load will get moved to the others. If you start with three working servers and one goes down, then 1/3 of the load will get moved to the remaining two. At some point, there may be more load than the remaining servers can handle, so the first server gets hammered due to the fallback logic in _SelectFastest.
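The redistribution arithmetic above, as a couple of helper functions (the load and capacity numbers are illustrative):

```python
import math

def surviving_load(total_load, n_up):
    # traffic from failed servers spreads evenly over the survivors
    return total_load / n_up

def min_survivors(total_load, per_server_capacity):
    # fewest servers that can still absorb the whole load; below this,
    # every survivor is pushed past capacity and the cascade begins
    return math.ceil(total_load / per_server_capacity)
```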
Aside - I'm glad to hear you are safe. We will continue to hope for a quick and peaceful resolution in this terrible conflict.
David,
Yes, but that is fine. The node is still the fastest node. That may have been a transient error, after all...
Espen,
That is by design, yes. I want to "remember" the fastest node; otherwise there will be a lot of noise around each request.
Gerdus,
I didn't consider that scenario. In that case, all nodes are already broken, so it shouldn't matter, but it shouldn't be a problem to make this change, sure.
Chris,
Why do you assume that the load will spread?
From the problem statement, I inferred (assumed?) that the Router would run on many client nodes, and each client is looking for the fastest server to connect to. Assuming the number of clients is stable, if a server becomes unavailable, that client would need to move to a new server.
Specify a timeout? Otherwise we will wait (forever?) for a server that will never come up.
// Ryan
I suspect the other tasks in _SelectFastest are ongoing coroutines, which are not cancelled when the first result becomes available. It is wrapped in another coroutine, which completes when the first result becomes available. So my assumption is that the result of _SelectFastest is not actually the fastest reply, but simply the first completed task in the list, which may have completed later than the fastest task. In that case it ironically doesn't return the fastest, haha.
What if the first server returns an error message?
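One way to avoid leaving the losing probe coroutines running is to cancel them as soon as the first reply lands; a sketch, assuming asyncio (the function names are illustrative):

```python
import asyncio

async def first_reply(coros):
    """Return the result of the first coroutine to finish and cancel
    the stragglers, so losing probes don't keep running in the background."""
    tasks = [asyncio.ensure_future(c) for c in coros]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()
```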