Ayende @ Rahien


Challenge: Minimum number of round trips


We are working on creating a better experience for RavenDB & Sharding, and that led us to the following piece of code:

shardedSession.Load<Post>("posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232");

And the following Shard function:

public static string[] GetAppropriateUrls(string id)
{
    switch (id.Last())
    {
        case '4':
            return new[] { "http://srv-4", "http://srv-backup-4" };

        case '2':
            return new[] { "http://srv-2" };

        case '8':
            return new[] { "http://srv-backup-4" };

        default:
            throw new InvalidOperationException();
    }
}

Write a function that would make the minimum number of queries to load all of the posts from all of the servers.
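For a sense of scale, the naive approach below queries every candidate server for every id, seven round trips for this batch of five ids. It is only a sketch: MakeRequest(url, ids) is a hypothetical helper standing in for whatever actually issues the query and returns the ids found on that server.

var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };
var results = new List<string>();
foreach (var id in ids)
{
    // ids ending in '4' have two candidate servers, the others have one
    foreach (var url in GetAppropriateUrls(id))
        results.AddRange(MakeRequest(url, new[] { id }));
}
// 7 round trips in total for this batch; the challenge is to do much better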


Comments

Anton Gogolev

Assuming that when we have two or more shards, a single post can be freely loaded from any of them:

    var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };
    var allEligibleShards = ids.SelectMany(id => GetAppropriateUrls(id)).Distinct().OrderBy(url => url);
    var idsForShard = ids
        .Select(id => new { id, shard = GetAppropriateUrls(id).OrderBy(url => url).Intersect(allEligibleShards).First() })
        .GroupBy(x => x.shard, x => x.id);

This code produces the following results:

http://srv-4 - posts/1234 post/3214 
http://srv-2 - posts/1232 posts/1232 
http://srv-backup-4 - posts/1238 
Stuart Cam

Anton, you can save a request to http://srv-4 by loading the posts from http://srv-backup-4.

You might want to consider finding posts that can be serviced from common servers too...

The final result would be:

http://srv-2 - posts/1232 posts/1232
http://srv-backup-4 - posts/1238 posts/1234 post/3214

Saves one round-trip

Ayende Rahien

Anton & Stuart, When you have multiple urls for a single id, that document resides on one of those urls, not on every one of them.

Ayende Rahien

Stuart, posts/1234 may be in either srv-4 or srv-backup-4, you need to check both

Stuart Cam

@Ayende

So a case '4' could be found in either "http://srv-4" OR "http://srv-backup-4", as opposed to "http://srv-4" AND "http://srv-backup-4"?

Yan Cui

How about something like this then?

var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };

var requests = ids
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    .GroupBy(m => m.Url);

Ayende Rahien

Yan, Nice. Now handle the case of an item with multiple urls that is found on the first request and so shouldn't be in the second request to the other url.

Yan Cui

Ayende, Just to clarify, assuming that the requests are to be made synchronously, and the first request was to srv-4 and posts/1234 was found there, now you want to exclude posts/1234 from the request to srv-backup-4 because you already have it?

Ayende Rahien

Yan, Yes, and if there aren't any more ids to check, don't make the request

Yan Cui

Ayende, I introduced a dummy 'MakeRequest' method here that will tell you that you've got all the keys you asked for:

// dummy request method that returns all the ids you ask for
public static string[] MakeRequest(string url, string[] ids)
{
    Console.WriteLine("{0} retrieved", string.Join(",", ids));
    return ids;
}

And the rest of the solution goes like this:

var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };

// build up a dictionary of the distinct post ids to keep track of whether each has been retrieved
var idsDict = ids.Distinct().ToDictionary(id => id, id => false);

var idGroups = idsDict.Keys
    // map each Id into Id-Url pairs, one for each url
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    // group by the url to get a Url-Id[] mapping
    .GroupBy(m => m.Url)
    // filter groups so we only proceed if a group has ids it still needs to retrieve
    // note this works thanks to deferred execution
    .Where(gr => gr.Any(m => !idsDict[m.Id]))
    // final transformation to make it easier for the loop
    .Select(gr => new { Url = gr.Key, Ids = gr.Select(m => m.Id).Where(id => !idsDict[id]).ToArray() });

foreach (var idGroup in idGroups)
{
    var retrieved = MakeRequest(idGroup.Url, idGroup.Ids);

    // mark the ids as 'retrieved'
    retrieved.ToList().ForEach(id => idsDict[id] = true);
}

Running this prints:

posts/1234,post/3214 retrieved
posts/1238 retrieved
posts/1232 retrieved

Dave

Well, why is there a limitation of one document per GET request in the first place? Loading means retrieving, so one can argue that there's no reason why a request couldn't return multiple documents?

Just thinking outside the box...

Ayende Rahien

Dave, Oh, you can get multiple ids in a single request, yes

Dan Turner

A further optimisation to Yan's solution would be to consider the order in which urls are hit. E.g. any Load containing an Id ending in "8" means http://srv-backup-4 MUST be hit. If we do this first we MIGHT also get those ending in "4" for free and save a hit to http://srv-backup-4.

Dan Turner

Sorry, save a hit to http://srv-4

Yan Cui

Dan, That's a nice idea. Even without formally specifying that one server should have priority over another, for this example at least you can save the hit to srv-4 by ordering the groups by the number of ids they contain:

idsDict.Keys
    // map each Id into Id-Url pairs, one for each url
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
    // group by the url to get a Url-Id[] mapping
    .GroupBy(m => m.Url)
    .OrderByDescending(m => m.Count())
    // filter groups so we only proceed if a group has ids it still needs to retrieve
    // note this works thanks to deferred execution
    .Where(gr => gr.Any(m => !idsDict[m.Id]))

Doing so prints out (after a minor modification to the 'MakeRequest' method):

posts/1234,post/3214,posts/1238 retrieved from http://srv-backup-4
posts/1232 retrieved from http://srv-2

Valera Kolupaev

With the given knowledge it's impossible to get an optimal request sequence, because we don't know where each document is located (on the main or the backup server). So a solution with Dynamic Programming might fit well. The idea is to load documents iteratively, picking on each turn the server that could return the largest number of documents:

F# Solution:

let mapToUrl (id : System.String) =
    let lastChar = id.[(String.length id) - 1]
    match lastChar with
    | '4' -> ["http://srv-4"; "http://srv-backup-4"]
    | '2' -> ["http://srv-2"]
    | '8' -> ["http://srv-backup-4"]

let docsList = ["posts/1234"; "post/3214"; "posts/1232"; "posts/1238"; "posts/1232"]

let fetch server docs =
    printfn "Fetching:\t%s - %A" server docs
    let serverContents =
        match server with
        | "http://srv-4" -> ["posts/1234"]
        | "http://srv-backup-4" -> ["post/3214"; "posts/1238"]
        | "http://srv-2" -> ["posts/1232"; "posts/1232"]
    let r = serverContents |> Set.ofSeq |> Set.intersect (Set.ofSeq docs)
    printfn "Fetched:\t%A" r
    r

let rec bestCandidateRec (docsList : string list) processedServers =
    let (srv, docPairs) =
        docsList
        |> List.ofSeq
        |> Seq.collect (fun d -> mapToUrl d |> Seq.map (fun x -> (x, d)))
        |> Seq.filter (fun (srv, _) -> not (Set.contains srv processedServers))
        |> Seq.groupBy fst
        |> Seq.maxBy (fun (k, v) -> Seq.length v)
    let docs = docPairs |> Seq.map snd
    let fetched = fetch srv docs
    let remaining = Set.difference (Set.ofSeq docsList) fetched
    printfn "Remaining:\t%A" remaining
    if not (Set.isEmpty remaining) then
        bestCandidateRec (Set.toList remaining) (processedServers |> Set.add srv)

bestCandidateRec docsList Set.empty

Result:

Fetching:  http://srv-backup-4 - seq ["posts/1234"; "post/3214"; "posts/1238"]
Fetched:   set ["post/3214"; "posts/1238"]
Remaining: set ["posts/1232"; "posts/1234"]
Fetching:  http://srv-2 - seq ["posts/1232"]
Fetched:   set ["posts/1232"]
Remaining: set ["posts/1234"]
Fetching:  http://srv-4 - seq ["posts/1234"]
Fetched:   set ["posts/1234"]
Remaining: set []

Valera Kolupaev

There is not enough information to determine the optimal sequence of server calls. So my best bet is to use a dynamic programming technique and each time call the server which could return the largest number of documents. While this is not guaranteed to find the best possible solution, it will minimize the number of round trips compared to the naive solution.

For better results, it might be a good idea to add some heuristics, based on run-time usage data, to pick "better" servers.

For more complex loading schemes, it's possible to find the subset of servers to call using, for example, a Minimum Spanning Tree algorithm.


Stuart Cam

var distinctIds = ids.Distinct();
var allShardsAndServers = distinctIds.SelectMany(id => GetAppropriateUrls(id).Select(server => new { id, server }));
var rankedByPopularServer = allShardsAndServers.GroupBy(i => i.server).OrderByDescending(g => g.Count());
var idsForShards = distinctIds.Select(id => rankedByPopularServer.SelectMany(g => g).First(g => g.id == id));

Gives the output:

{ id = posts/1234, server = http://srv-backup-4 }
{ id = post/3214, server = http://srv-backup-4 }
{ id = posts/1232, server = http://srv-2 }
{ id = posts/1238, server = http://srv-backup-4 }

Betty

Simply sorting by largest possible result set wouldn't be optimal in some cases. You should pull out any documents with only one possible server and perform them first (along with requesting any other documents that may be on that server), then perform the remaining requests largest -> smallest.

Example that breaks the largest -> smallest algorithm (servers are numbers, documents are letters):

1: a,b,c
2: a,e
3: b,f
4: c,g

Largest -> smallest hits server 1 first and then still needs servers 2, 3 and 4 for e, f and g: 4 requests vs 3 requests.
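To make the counting concrete, here is a small sketch of both orderings. One assumption in the sketch: a request to a server returns every wanted document that server can hold, which is the best case each ordering is aiming for.

// Server layout from the example: which documents each server can hold.
var servers = new Dictionary<string, string[]>
{
    ["1"] = new[] { "a", "b", "c" },
    ["2"] = new[] { "a", "e" },
    ["3"] = new[] { "b", "f" },
    ["4"] = new[] { "c", "g" },
};

int CountRequests(IEnumerable<string> order)
{
    var remaining = new HashSet<string> { "a", "b", "c", "e", "f", "g" };
    var requests = 0;
    foreach (var s in order)
    {
        if (!servers[s].Any(remaining.Contains))
            continue; // nothing left to ask this server for
        requests++;
        remaining.ExceptWith(servers[s]);
    }
    return requests;
}

// Largest result set first: server 1, then 2, 3, 4 are still needed for e, f, g.
Console.WriteLine(CountRequests(new[] { "1", "2", "3", "4" })); // 4
// Single-possible-server documents first: 2, 3, 4 are mandatory and also cover a, b, c.
Console.WriteLine(CountRequests(new[] { "2", "3", "4", "1" })); // 3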

Valera Kolupaev

@Betty IMO this is a good suggestion; however, it wouldn't help in the general case, and I bet there is a counterexample for your algorithm too. To come up with the least number of round trips, it's necessary to apply some sort of graph algorithm, for example a Minimum Spanning Tree (http://en.wikipedia.org/wiki/Minimum_spanning_tree), if we encode documents as nodes and connect nodes that are on the same servers.

The problem is that MST seems like n^n (rough guess) complexity, so it might be overkill to search for the optimal solution.

Mike

@Valera - hmm... attempting to graph something that you have no knowledge about seems to be an exercise in futility

@Betty - great suggestion... this should do the job

// sorry for any formatting issues
// assume MakeRequest returns the "ids" returned from the request

var servers = ids.Distinct()
    .Select(x => new { Id = x, Servers = GetAppropriateUrls(x) })
    .SelectMany(x => x.Servers, (x, c) => new { Id = x.Id, Server = c, ServerCount = x.Servers.Count() })
    .OrderBy(x => x.ServerCount)
    .GroupBy(x => x.Server, (k, c) => new { Url = k, Concerns = c.Select(a => a.Id) });

var resolved = new List<string>();
foreach (var server in servers)
{
    var concerns = server.Concerns.Except(resolved).ToArray();

    if (concerns.Length == 0)
        continue;

    var response = MakeRequest(server.Url, concerns);
    resolved.AddRange(response);
}

Roger

Perhaps I'm being naive, but the usual meta-goal for reducing network round-trips is to optimize for the lowest overall elapsed time due to network latency effects.

That can often be achieved by issuing overlapping asynchronous parallel queries to multiple servers (not that hard to do with .Net parallel Task support these days).

Usually that gives you a total elapsed time slightly greater than the longest running query.

Most of the above-proposed methods introduce serial execution dependencies and are 'stateful'; hence they are not readily executed in parallel and will likely result in longer elapsed times than a naive 'stateless' parallel algorithm, except for some 'edge cases' where the 'stateful' algorithm will perform better.

A 'naive' general algorithmic approach would be:

Step 1 - coalesce the queries into 'batches', with 1 batch per target server (querying all servers in a group where a target document may be present on any of several servers)... I liked the LINQ approach to that problem.

Step 2 - If only 1 batch, issue the query synchronously (no point in adding extra overhead for 'parallel' execution!), return results when complete

Step 3 - otherwise, take the output batches from step 1 and execute the required queries as asynchronous parallel tasks (1 per server), with a 'join point' when all queries are complete. Note that execution threads need to enforce the rule that existence of a document 'trumps' any 'not-found' results for a document.

Step 4 - Post-process the aggregated query results looking for situations where the net outcome of queries for specific documents did not find the document and generate any related exceptions (since your example searches by primary Key, not finding a document on any target server is probably invalid (?)).

Step 5 - return aggregated results.

To avoid 'premature optimization', implement the 'naive' algorithm first, then benchmark using a reasonably-sized set of 'random' queries and perform low-level optimizations to fix any 'bottlenecks' identified by the benchmark... rinse and repeat using the same benchmark data.
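A minimal sketch of steps 1-5 along those lines, assuming a hypothetical MakeRequestAsync(url, ids) helper that returns the posts actually found on that server, and a Post type with an Id property:

// Step 1: coalesce into one batch per target server; an id with several
// candidate servers appears in each of their batches.
var batches = ids.Distinct()
    .SelectMany(id => GetAppropriateUrls(id).Select(url => new { id, url }))
    .GroupBy(x => x.url, x => x.id)
    .ToArray();

// Steps 2-3: fan out one task per server and join when all complete
// (a single batch simply degenerates to one awaited call).
var responses = await Task.WhenAll(
    batches.Select(b => MakeRequestAsync(b.Key, b.ToArray())));

// Steps 4-5: aggregate; a found document trumps any 'not found' result,
// and an id found nowhere is an error since we're loading by primary key.
var found = responses.SelectMany(posts => posts)
    .GroupBy(post => post.Id)
    .ToDictionary(g => g.Key, g => g.First());
var missing = ids.Distinct().Where(id => !found.ContainsKey(id)).ToArray();
if (missing.Any())
    throw new InvalidOperationException("Not found: " + string.Join(", ", missing));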

Valera Kolupaev

@Mike Not exactly, we know at least something about the distribution of documents across servers. At first, for example, we know that document A is located on server 1, 2 or 3 with a probability of 33% each. After an attempt to fetch the document from server 1, it would be 50% for servers 2 and 3. This information can be applied as an entry point for the MST algorithm.

Alex

A bit lengthy due to the introduction of Post and PostRequest classes ...

    public class Post
    {
        public Post(string id, string content)
        {
            Id = id;
            Content = content;
        }
        public string Id { get; set; }
        public string Content { get; set; }
    }
    public class PostRequest
    {
        public PostRequest(string id)
        {
            Id = id;
        }
        public string Id { get; private set; }
        public Post CreatePost(string content)
        {
            return Result = new Post(Id, content);
        }
        public Post Result { get; private set; }
    }
    public static void ShardedGet()
    {
        string[] posts = {"posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232"};
        var shardedRequest = posts
            .Distinct()
            .Select(p => new PostRequest(p))
            .SelectMany(r => GetAppropriateUrls(r.Id)
                .Distinct()
                .Select(u => new { request = r, Url = u })
                )
            .GroupBy(m => m.Url)
            .SelectMany(g => SendRequest(g.Key, g.Select(s => s.request)));

        Console.WriteLine(string.Join("\n", shardedRequest.Select(p => p.Content)));
    }
    public static IEnumerable<Post> SendRequest(string url, IEnumerable<PostRequest> requests)
    {
        return requests
            .Where(r => r.Result == null)
            .Select(r => r.CreatePost("Bla Bla " + r.Id + " from url " + url))
            .ToArray();
    }


Comments have been closed on this topic.
