Challenge: Minimum number of round trips
We are working on creating a better experience for RavenDB & sharding, and that led us to the following piece of code:
shardedSession.Load<Post>("posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232");
And the following Shard function:
public static string[] GetAppropriateUrls(string id)
{
    switch (id.Last())
    {
        case '4':
            return new[] { "http://srv-4", "http://srv-backup-4" };
        case '2':
            return new[] { "http://srv-2" };
        case '8':
            return new[] { "http://srv-backup-4" };
        default:
            throw new InvalidOperationException();
    }
}
Write a function that would make the minimum number of queries to load all of the posts from all of the servers.
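For a sense of the baseline to beat, here is a completely naive sketch that issues one request per id per candidate server. The Post type and the MakeRequest(url, id) single-document fetch are hypothetical placeholders, not the actual RavenDB client API:

// naive baseline: up to one round trip per id per candidate server
public static List<Post> LoadNaive(params string[] ids)
{
    var posts = new List<Post>();
    foreach (var id in ids.Distinct())
    {
        foreach (var url in GetAppropriateUrls(id))
        {
            var post = MakeRequest(url, id); // hypothetical single-document fetch
            if (post == null)
                continue; // not on this server, try the next candidate
            posts.Add(post);
            break; // found it, no need to query the remaining candidates
        }
    }
    return posts;
}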
Comments
Assuming that, when we have two or more shards, a single post can be freely loaded from any of them:
This code produces the following results:
Anton, you can save a request to http://srv-4 by loading the posts from http://srv-backup-4.
You might want to consider finding posts that can be serviced from common servers too...
The final result would be:
http://srv-2 - posts/1232 posts/1232
http://srv-backup-4 - posts/1238 posts/1234 post/3214
Saves one round-trip
Anton & Stuart, when you have multiple urls for a single id, that document resides on one of those urls, not on any of them interchangeably.
var distinctIds = ids.Distinct();
var allShardsAndServers = distinctIds.SelectMany(id => GetAppropriateUrls(id).Select(server => new { id, server }));
var rankedByPopularServer = allShardsAndServers.GroupBy(i => i.server).OrderByDescending(g => g.Count());
var idsForShards = distinctIds.Select(id => rankedByPopularServer.SelectMany(g => g).First(g => g.id == id));
Gives the output:
{ id = posts/1234, server = http://srv-backup-4 }
{ id = post/3214, server = http://srv-backup-4 }
{ id = posts/1232, server = http://srv-2 }
{ id = posts/1238, server = http://srv-backup-4 }
Stuart, posts/1234 may be in either srv-4 or srv-backup-4, you need to check both
@Ayende
So a case '4' could be found in either "http://srv-4" OR "http://srv-backup-4", as opposed to "http://srv-4" AND "http://srv-backup-4"?
Stuart, Yes
How about something like this then?
var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };
var requests = ids
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    .GroupBy(m => m.Url);
Yan, nice. Now handle the case of an item with multiple urls that is found by the first request, and so shouldn't be included in the second request to the url.
Ayende, Just to clarify, assuming that the requests are to be made synchronously, and the first request was to srv-4 and posts/1234 was found there, now you want to exclude posts/1234 from the request to srv-backup-4 because you already have it?
Yan, Yes, and if there aren't any more ids to check, don't make the request
Ayende, I introduced a dummy 'MakeRequest' method here that will tell you that you've got all the keys you ask for:
// dummy request method that returns all the ids you ask for
public static string[] MakeRequest(string url, string[] ids)
{
    Console.WriteLine("{0} retrieved", string.Join(",", ids));
    return ids;
}
And the rest of the solution goes like this:
var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };
// build up a dictionary of the distinct post ids to keep track of whether each has been retrieved
var idsDict = ids.Distinct().ToDictionary(id => id, id => false);

var idGroups = idsDict.Keys
    // map each Id into Id-Url pairs, one for each url
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    // group by the url to get a Url -> Id[] mapping
    .GroupBy(m => m.Url)
    // only proceed if a group still has ids it needs to retrieve
    // note this works thanks to deferred execution
    .Where(gr => gr.Any(m => !idsDict[m.Id]))
    // final transformation to make it easier for the loop
    .Select(gr => new { Url = gr.Key, Ids = gr.Select(m => m.Id).Where(id => !idsDict[id]).ToArray() });

foreach (var idGroup in idGroups)
{
    var retrieved = MakeRequest(idGroup.Url, idGroup.Ids);
    // mark what came back so later groups skip it
    foreach (var id in retrieved)
        idsDict[id] = true;
}
Running this prints:
posts/1234,post/3214 retrieved
posts/1238 retrieved
posts/1232 retrieved
Well, why is there a limitation of one document per GET request in the first place? Loading means retrieving, so one can argue that there's no reason why a request couldn't return multiple documents?
Just thinking outside the box..
Dave, Oh, you can get multiple ids in a single request, yes
A further optimisation to Yan's solution would be to consider the order in which urls are hit. For example, any Load containing an Id ending in "8" means http://srv-backup-4 MUST be hit. If we do this first we MIGHT also get those ending in "4" for free and save a hit to http://srv-backup-4.
Sorry, save a hit to http://srv-4
Dan, that's a nice idea. Even without formally specifying that one server should have priority over another, for this example at least you can save the hit to srv-4 by ordering the groups by the number of ids each contains:
idsDict.Keys
    // map each Id into Id-Url pairs, one for each url
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    // group by the url to get a Url -> Id[] mapping
    .GroupBy(m => m.Url)
    // query the server covering the most ids first
    .OrderByDescending(m => m.Count())
    // only proceed if a group still has ids it needs to retrieve
    // note this works thanks to deferred execution
    .Where(gr => gr.Any(m => !idsDict[m.Id]))
Doing so prints out (plus a minor modification to the 'MakeRequest' method):

posts/1234,post/3214,posts/1238 retrieved from http://srv-backup-4
posts/1232 retrieved from http://srv-2
With the given knowledge it's impossible to get the optimal request sequence, because we don't know where each document is located (on the main or the backup server). So a dynamic programming style solution might fit well. The idea is to load documents iteratively, picking the server that could return the largest number of documents on each turn:
F# Solution:
let mapToUrl (id : System.String) =
    let lastChar = id.[(String.length id) - 1]
    match lastChar with
    | '4' -> ["http://srv-4"; "http://srv-backup-4"]
    | '2' -> ["http://srv-2"]
    | '8' -> ["http://srv-backup-4"]
    | _   -> failwith "unknown shard"  // added so the match is exhaustive

let docsList = ["posts/1234"; "post/3214"; "posts/1232"; "posts/1238"; "posts/1232"]

let fetch server docs =
    printfn "Fetching:\t%s - %A" server docs
    // simulated server contents
    let serverContents =
        match server with
        | "http://srv-4" -> ["posts/1234"]
        | "http://srv-backup-4" -> ["post/3214"; "posts/1238"]
        | "http://srv-2" -> ["posts/1232"; "posts/1232"]
        | _ -> []  // added so the match is exhaustive
    let r = serverContents |> Set.ofSeq |> Set.intersect (Set.ofSeq docs)
    printfn "Fetched:\t%A" r
    r

let rec bestCandidateRec (docsList : string list) processedServers =
    // pick the not-yet-queried server that could serve the most remaining documents
    let (srv, docPairs) =
        docsList
        |> List.ofSeq
        |> Seq.collect (fun d -> mapToUrl d |> Seq.map (fun x -> (x, d)))
        |> Seq.filter (fun (srv, _) -> not (Set.contains srv processedServers))
        |> Seq.groupBy fst
        |> Seq.maxBy (fun (k, v) -> Seq.length v)
    let docs = docPairs |> Seq.map snd
    let fetched = fetch srv docs
    let remaining = Set.difference (Set.ofSeq docsList) fetched
    printfn "Remaining:\t%A" remaining
    if not (Set.isEmpty remaining) then
        bestCandidateRec (Set.toList remaining) (processedServers |> Set.add srv)
bestCandidateRec docsList Set.empty
Result:

Fetching:   http://srv-backup-4 - seq ["posts/1234"; "post/3214"; "posts/1238"]
Fetched:    set ["post/3214"; "posts/1238"]
Remaining:  set ["posts/1232"; "posts/1234"]
Fetching:   http://srv-2 - seq ["posts/1232"]
Fetched:    set ["posts/1232"]
Remaining:  set ["posts/1234"]
Fetching:   http://srv-4 - seq ["posts/1234"]
Fetched:    set ["posts/1234"]
Remaining:  set []
There is not enough information to determine the optimal sequence of server calls. So my best bet is to use a dynamic programming technique: each time, call the server that could return the largest number of documents. While this is not guaranteed to find the best possible solution, it will reduce the number of round trips compared to the naive solution.
For better results, it might be a good idea to add some heuristics, based on run-time usage data, to pick "better" servers.
For more complex loading schemes, it's possible to find a subset of servers to call using, for example, a Minimum Spanning Tree algorithm.
Simply sorting by the largest possible result set wouldn't be optimal in some cases. You should pull out any documents with only one possible server and request those first (along with any other documents that may be on the same server), then perform the remaining requests largest -> smallest.
Example that breaks the largest -> smallest algorithm (servers are numbers, documents are letters):

1: a,b,c
2: a,e
3: b,f
4: c,g
4 requests vs 3 requests: largest-first hits server 1 (a,b,c) and then still needs servers 2, 3, and 4 for e, f, and g, while requesting the single-server documents first covers everything with servers 2, 3, and 4 alone.
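To make the counterexample concrete, here is a small self-contained sketch that counts the requests both orderings issue on this layout. The class and method names are illustrative, not from the thread:

using System;
using System.Collections.Generic;
using System.Linq;

class CoverDemo
{
    // Betty's layout: server -> documents that server may hold
    static readonly Dictionary<int, string[]> Servers = new Dictionary<int, string[]>
    {
        { 1, new[] { "a", "b", "c" } },
        { 2, new[] { "a", "e" } },
        { 3, new[] { "b", "f" } },
        { 4, new[] { "c", "g" } },
    };

    static int CountRequests(bool singleServerDocsFirst)
    {
        var pending = new HashSet<string>(Servers.Values.SelectMany(d => d));
        var requests = 0;
        while (pending.Count > 0)
        {
            // a document hosted by exactly one server forces a request to that server
            var forced = singleServerDocsFirst
                ? pending.FirstOrDefault(d => Servers.Count(s => s.Value.Contains(d)) == 1)
                : null;
            var pick = forced != null
                ? Servers.First(s => s.Value.Contains(forced)).Key
                // otherwise fall back to the greedy largest-first choice
                : Servers.OrderByDescending(s => s.Value.Count(pending.Contains)).First().Key;
            pending.ExceptWith(Servers[pick]);
            requests++;
        }
        return requests;
    }

    static void Main()
    {
        Console.WriteLine("largest -> smallest only: {0} requests", CountRequests(false)); // 4
        Console.WriteLine("single-server docs first: {0} requests", CountRequests(true));  // 3
    }
}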
@Betty IMO this is a good suggestion; however, it wouldn't help in the general case. I bet there is a counterexample for your algorithm too. To come up with the least number of round trips, it's required to apply some sort of graph search algorithm. For example, do a http://en.wikipedia.org/wiki/Minimum_spanning_tree if we encode documents as nodes and connect the nodes that are on the same servers.
The problem is that MST seems like n^n (rough guess) complexity, so it might be overkill to search for the optimal solution.
@Valera - hmm... attempting to graph something that you have no knowledge about seems to be an exercise in futility
@Betty - great suggestion... this should do the job
// sorry for any formatting issues
// assume MakeRequest returns the "ids" returned from the request
var servers = ids.Distinct()
    .Select(x => new
    {
        Id = x,
        Servers = GetAppropriateUrls(x),
    })
    .SelectMany(x => x.Servers, (x, c) => new
    {
        Id = x.Id,
        Server = c,
        ServerCount = x.Servers.Count(),
    })
    // documents with fewer candidate servers go first
    .OrderBy(x => x.ServerCount)
    .GroupBy(x => x.Server, (k, c) => new { Url = k, Concerns = c.Select(a => a.Id) });

var resolved = new List<string>();
foreach (var server in servers)
{
    var concerns = server.Concerns.Except(resolved).ToArray();

    if (concerns.Length == 0)
        continue;

    var response = MakeRequest(server.Url, concerns);
    resolved.AddRange(response);
}
Perhaps I'm being naive, but the usual meta-goal for reducing network round-trips is to optimize for the lowest overall elapsed time due to network latency effects.
That can often be achieved by issuing overlapping asynchronous parallel queries to multiple servers (not that hard to do with .NET parallel Task support these days).
Usually that gives you a total elapsed time slightly greater than the longest running query.
Most of the above-proposed methods introduce serial execution dependencies and are 'stateful'; hence they are not readily executed in parallel and will likely result in longer elapsed times than a naive 'stateless' parallel algorithm, except for some 'edge cases' where the 'stateful' algorithm will perform better.
A 'naive' general algorithmic approach would be (a code sketch follows the steps):
Step 1 - coalesce the queries into 'batches', with one batch per target server (querying all servers in a group where a target document can be present on any of several servers). I liked the LINQ approach to that problem.
Step 2 - If only 1 batch, issue the query synchronously (no point in adding extra overhead for 'parallel' execution!), return results when complete
Step 3 - otherwise, take the output batches from step 1 and execute the required queries as asynchronous parallel tasks (1 per server), with a 'join point' when all queries are complete. Note that execution threads need to enforce the rule that existence of a document 'trumps' any 'not-found' results for a document.
Step 4 - Post-process the aggregated query results, looking for situations where the net outcome of the queries for a specific document did not find it, and generate any related exceptions (since your example searches by primary key, not finding a document on any target server is probably invalid?).
Step 5 - return aggregated results.
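A minimal sketch of these steps, assuming a caller-supplied fetch delegate and a bare Post placeholder; both are hypothetical stand-ins, not the actual RavenDB client API:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Post { /* placeholder document type */ }

public static class ParallelLoader
{
    public static async Task<Dictionary<string, Post>> LoadAsync(
        string[] ids,
        // hypothetical transport: ask one server for a batch of ids,
        // get back only the documents it actually holds
        Func<string, string[], Task<Dictionary<string, Post>>> fetchAsync)
    {
        // Step 1: coalesce into one batch per target server
        var batches = ids.Distinct()
            .SelectMany(id => GetAppropriateUrls(id).Select(url => new { id, url }))
            .GroupBy(x => x.url, x => x.id)
            .Select(g => new { Url = g.Key, Ids = g.Distinct().ToArray() })
            .ToArray();

        // Steps 2-3: fan out in parallel and join when all queries complete
        // (with a single batch, Task.WhenAll degenerates to a plain await)
        var responses = await Task.WhenAll(
            batches.Select(b => fetchAsync(b.Url, b.Ids)));

        // Step 4: a found document trumps any 'not found' result elsewhere
        var results = new Dictionary<string, Post>();
        foreach (var response in responses)
            foreach (var pair in response)
                results[pair.Key] = pair.Value;

        var missing = ids.Distinct().Where(id => !results.ContainsKey(id)).ToArray();
        if (missing.Length > 0)
            throw new InvalidOperationException("Not found: " + string.Join(", ", missing));

        // Step 5: return the aggregated results
        return results;
    }

    // shard function from the post
    static string[] GetAppropriateUrls(string id)
    {
        switch (id.Last())
        {
            case '4': return new[] { "http://srv-4", "http://srv-backup-4" };
            case '2': return new[] { "http://srv-2" };
            case '8': return new[] { "http://srv-backup-4" };
            default: throw new InvalidOperationException();
        }
    }
}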
To avoid 'premature optimization', implement the 'naive' algorithm first, then benchmark using a reasonably-sized set of 'random' queries and perform low-level optimizations to fix any 'bottlenecks' identified by the benchmark. Rinse and repeat using the same benchmark data.
@Mike Not exactly; we know at least something about the distribution of documents across servers. At first, for example, we know that document A is located on Server 1, 2, or 3, each with probability 33%. After a failed attempt to fetch the document from Server 1, it would be 50% for each of Server 2 and 3. This information can be applied as an entry point for the MST algorithm.
A bit lengthy due to introduction of Post and PostRequest classes ...