Ayende @ Rahien

Challenge: Minimum number of round trips

We are working on creating a better experience for RavenDB & Sharding, and that led us to the following piece of code:

shardedSession.Load<Post>("posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232");

And the following Shard function:

public static string[] GetAppropriateUrls(string id)
{
    switch (id.Last())
    {
        case '4':
            return new[] { "http://srv-4", "http://srv-backup-4" };

        case '2':
            return new[] { "http://srv-2" };

        case '8':
            return new[] { "http://srv-backup-4" };

        default:
            throw new InvalidOperationException();
    }
}

Write a function that would make the minimum number of queries to load all of the posts from all of the servers.

Comments

Anton Gogolev
10/17/2011 10:28 AM

Assuming that when we have two or more shards, a single post can be freely loaded from any of them:

    var ids = new[] {"posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232"};
    var allEligibleShards = ids.SelectMany(id => GetAppropriateUrls(id)).Distinct().OrderBy(url => url);
    var idsForShard = ids.
        Select(id => new { id, shard = GetAppropriateUrls(id).OrderBy(url => url).Intersect(allEligibleShards).First() }).
        GroupBy(x => x.shard, x => x.id);

This code produces the following results:

http://srv-4 - posts/1234 post/3214 
http://srv-2 - posts/1232 posts/1232 
http://srv-backup-4 - posts/1238 
Stuart Cam
10/17/2011 10:31 AM

Anton, you can save a request to http://srv-4 by loading the posts from http://srv-backup-4.

You might want to consider finding posts that can be serviced from common servers too...

The final result would be:

http://srv-2 - posts/1232 posts/1232
http://srv-backup-4 - posts/1238 posts/1234 post/3214

Saves one round-trip

Ayende Rahien
10/17/2011 10:46 AM

Anton & Stuart, when you have multiple urls for a single id, that document resides at one of those urls, not at any of them.

Ayende Rahien
10/17/2011 10:54 AM

Stuart, posts/1234 may be on either srv-4 or srv-backup-4; you need to check both.

Stuart Cam
10/17/2011 10:55 AM

@Ayende

So a case '4' could be found in either "http://srv-4" OR "http://srv-backup-4", as opposed to "http://srv-4" AND "http://srv-backup-4"?

Yan Cui
10/17/2011 11:00 AM

How about something like this then?

var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };

    var requests = ids
        .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
        .GroupBy(m => m.Url);

Ayende Rahien
10/17/2011 11:16 AM

Yan, nice. Now handle the case of an item with multiple urls that is found on the first request and therefore shouldn't be included in the second request to the other url.

Yan Cui
10/17/2011 11:21 AM

Ayende, Just to clarify, assuming that the requests are to be made synchronously, and the first request was to srv-4 and posts/1234 was found there, now you want to exclude posts/1234 from the request to srv-backup-4 because you already have it?

Ayende Rahien
10/17/2011 11:29 AM

Yan, Yes, and if there aren't any more ids to check, don't make the request

Yan Cui
10/17/2011 11:47 AM

Ayende, I introduced a dummy 'MakeRequest' method here that will tell you that you've got all the keys you asked for:

    // dummy request method that returns all the ids you ask for
    public static string[] MakeRequest(string url, string[] ids)
    {
        Console.WriteLine("{0} retrieved", string.Join(",", ids));
        return ids;
    }

And the rest of the solution goes like this:

var ids = new[] { "posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232" };

    // build up a dictionary of the distinct post ids to keep track of whether each has been retrieved
    var idsDict = ids.Distinct().ToDictionary(id => id, id => false);

    var idGroups = idsDict.Keys
        // map each Id into Id-Url pairs, one for each url
        .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
        // group by the url to get a Url-Id[] mapping
        .GroupBy(m => m.Url)
        // filter groups so we only proceed if a group has ids it needs to retrieve
        // note this works thanks to delayed execution
        .Where(gr => gr.Any(m => !idsDict[m.Id]))
        // final transformation to make it easier for the loop
        .Select(gr => new { Url = gr.Key, Ids = gr.Select(m => m.Id).Where(id => !idsDict[id]).ToArray() });

    foreach (var idGroup in idGroups)
    {
        var retrieved = MakeRequest(idGroup.Url, idGroup.Ids);

        // mark the ids as 'retrieved'
        retrieved.ToList().ForEach(id => idsDict[id] = true);
    }

Running this prints:

posts/1234,post/3214 retrieved
posts/1238 retrieved
posts/1232 retrieved

Dave
10/17/2011 11:54 AM

Well, why is there a limitation of one document per GET request in the first place? Loading means retrieving, so one can argue that there's no reason why a request couldn't return multiple documents?

Just thinking outside the box..

Ayende Rahien
10/17/2011 12:24 PM

Dave, Oh, you can get multiple ids in a single request, yes

Dan Turner
10/17/2011 12:59 PM

A further optimisation to Yan's solution would be to consider the order in which urls are hit. For example, any Load containing an Id ending in "8" means http://srv-backup-4 MUST be hit. If we do this first we MIGHT also get those ending in "4" for free and save a hit to http://srv-backup-4.

Dan Turner
10/17/2011 01:01 PM

Sorry, save a hit to http://srv-4

Yan Cui
10/17/2011 01:54 PM

Dan, that's a nice idea. Even without formally specifying that one server should have priority over another, for this example at least you can save the hit to srv-4 by ordering the groups by the number of ids they contain:

    idsDict.Keys
        // map each Id into Id-Url pairs, one for each url
        .SelectMany(id => GetAppropriateUrls(id).Select(url => new { Id = id, Url = url }))
        // group by the url to get a Url-Id[] mapping
        .GroupBy(m => m.Url)
        .OrderByDescending(gr => gr.Count())
        // filter groups so we only proceed if a group has ids it needs to retrieve
        // note this works thanks to delayed execution
        .Where(gr => gr.Any(m => !idsDict[m.Id]))

Doing so prints (with a minor modification to the 'MakeRequest' method):

posts/1234,post/3214,posts/1238 retrieved from http://srv-backup-4
posts/1232 retrieved from http://srv-2

Valera Kolupaev
10/17/2011 02:01 PM

With the given knowledge it's impossible to get an optimal request sequence, because we don't know where each document is located (on the main or the backup server). So a solution with dynamic programming might fit well. The idea is to load documents iteratively, picking on each turn the server that could return the largest number of documents:

F# Solution:

    let mapToUrl (id : System.String) =
        let lastChar = id.[(String.length id) - 1]
        match lastChar with
        | '4' -> ["http://srv-4"; "http://srv-backup-4"]
        | '2' -> ["http://srv-2"]
        | '8' -> ["http://srv-backup-4"]

    let docsList = ["posts/1234"; "post/3214"; "posts/1232"; "posts/1238"; "posts/1232"]

    let fetch server docs =
        printfn "Fetching:\t%s - %A" server docs
        let serverContents =
            match server with
            | "http://srv-4" -> ["posts/1234"]
            | "http://srv-backup-4" -> ["post/3214"; "posts/1238"]
            | "http://srv-2" -> ["posts/1232"; "posts/1232"]
        let r = serverContents |> Set.ofSeq |> Set.intersect (Set.ofSeq docs)
        printfn "Fetched:\t%A" r
        r

    let rec bestCandidateRec (docsList : string list) processedServers =
        let (srv, docPairs) =
            docsList
            |> Seq.collect (fun d -> mapToUrl d |> Seq.map (fun x -> (x, d)))
            |> Seq.filter (fun (srv, _) -> not (Set.contains srv processedServers))
            |> Seq.groupBy fst
            |> Seq.maxBy (fun (k, v) -> Seq.length v)
        let docs = docPairs |> Seq.map snd
        let fetched = fetch srv docs
        let remaining = Set.difference (Set.ofSeq docsList) fetched
        printfn "Remaining:\t%A" remaining
        if not (Set.isEmpty remaining) then
            bestCandidateRec (Set.toList remaining) (processedServers |> Set.add srv)

bestCandidateRec docsList Set.empty

Result:

Fetching:   http://srv-backup-4 - seq ["posts/1234"; "post/3214"; "posts/1238"]
Fetched:    set ["post/3214"; "posts/1238"]
Remaining:  set ["posts/1232"; "posts/1234"]
Fetching:   http://srv-2 - seq ["posts/1232"]
Fetched:    set ["posts/1232"]
Remaining:  set ["posts/1234"]
Fetching:   http://srv-4 - seq ["posts/1234"]
Fetched:    set ["posts/1234"]
Remaining:  set []

Valera Kolupaev
10/17/2011 02:51 PM

There is not enough information to determine the optimal sequence of server calls. So my best bet is to use a dynamic programming technique: each time, call the server that could return the largest number of documents. While this is not guaranteed to find the best possible solution, it will minimize the number of round trips compared to a naive solution.

For better results, it might be a good idea to add some heuristics, based on run-time usage data, to pick "better" servers.

For more complex loading schemes, it's possible to find a subset of servers to call using, for example, a minimum spanning tree algorithm.


Stuart Cam
10/17/2011 10:51 AM

    var distinctIds = ids.Distinct();
    var allShardsAndServers = distinctIds
        .SelectMany(id => GetAppropriateUrls(id).Select(server => new { id, server }));
    var rankedByPopularServer = allShardsAndServers
        .GroupBy(i => i.server)
        .OrderByDescending(g => g.Count());
    var idsForShards = distinctIds
        .Select(id => rankedByPopularServer.SelectMany(g => g).First(g => g.id == id));

Gives the output:

{ id = posts/1234, server = http://srv-backup-4 }
{ id = post/3214, server = http://srv-backup-4 }
{ id = posts/1232, server = http://srv-2 }
{ id = posts/1238, server = http://srv-backup-4 }

Betty
10/17/2011 08:36 PM

Simply sorting by largest possible result set wouldn't be optimal in some cases. You should pull out any documents with only one possible server and request those first (along with any other documents that may be on that server), then perform the remaining requests largest -> smallest.

An example that breaks the largest -> smallest algorithm (servers are numbers, documents are letters):

1: a,b,c
2: a,e
3: b,f
4: c,g

4 requests vs 3 requests.
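To make the counterexample concrete, here is a small, self-contained sketch (the server layout is the one above; the greedy loop is an illustrative stand-in for the largest -> smallest strategy, not anyone's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class GreedyCounterexample
{
    static void Main()
    {
        // Betty's layout: each document may live on any of the listed servers
        var servers = new Dictionary<string, string[]>
        {
            { "1", new[] { "a", "b", "c" } },
            { "2", new[] { "a", "e" } },
            { "3", new[] { "b", "f" } },
            { "4", new[] { "c", "g" } },
        };
        var remaining = new HashSet<string>(servers.Values.SelectMany(d => d));

        // largest -> smallest: always hit the server covering the most remaining docs
        int requests = 0;
        while (remaining.Count > 0)
        {
            var best = servers.OrderByDescending(s => s.Value.Count(remaining.Contains)).First();
            remaining.ExceptWith(best.Value);
            requests++;
        }

        // greedy hits server 1 first, then still needs 2, 3 and 4: four requests,
        // while hitting just 2, 3 and 4 would have covered everything in three
        Console.WriteLine("greedy made {0} requests", requests);
    }
}
```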

Valera Kolupaev
10/17/2011 09:02 PM

@Betty IMO, this is a good suggestion; however, it wouldn't help in the general case. I bet there is a counterexample for your algorithm too. To come up with the least number of round-trips, it's required to apply some sort of graph search algorithm. For example, a minimum spanning tree ( http://en.wikipedia.org/wiki/Minimum_spanning_tree ), if we encode documents as nodes and connect nodes that are on the same servers.

The problem is that MST seems like n^n (rough guess) complexity, so it might be overkill to search for the optimal solution.
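For what it's worth, an exact answer is always obtainable by brute force: try every subset of servers, fewest first, and stop at the first one that covers all the wanted documents. This is exponential, as noted above, so the sketch below is only viable for tiny inputs (it reuses Betty's layout; all names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ExhaustiveSearch
{
    // number of set bits = number of servers in the subset
    static int PopCount(int mask)
    {
        int count = 0;
        while (mask != 0) { count += mask & 1; mask >>= 1; }
        return count;
    }

    static void Main()
    {
        var servers = new Dictionary<string, string[]>
        {
            { "1", new[] { "a", "b", "c" } },
            { "2", new[] { "a", "e" } },
            { "3", new[] { "b", "f" } },
            { "4", new[] { "c", "g" } },
        };
        var docs = new HashSet<string>(servers.Values.SelectMany(d => d));
        var keys = servers.Keys.ToArray();

        // enumerate all 2^n server subsets, fewest servers first
        var best = Enumerable.Range(0, 1 << keys.Length)
            .OrderBy(PopCount)
            .First(mask => docs.IsSubsetOf(
                keys.Where((k, i) => (mask & (1 << i)) != 0)
                    .SelectMany(k => servers[k])));

        // 3 for this layout: servers 2, 3 and 4 cover everything
        Console.WriteLine("minimum number of requests: {0}", PopCount(best));
    }
}
```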

Mike
10/18/2011 05:26 AM

@Valera - hmm... attempting to graph something that you have no knowledge about seems to be an exercise in futility

@Betty - great suggestion... this should do the job

// sorry for any formatting issues
// assume MakeRequest returns the "ids" returned from the request

    var servers = ids.Distinct()
        .Select(x => new { Id = x, Servers = GetAppropriateUrls(x) })
        .SelectMany(x => x.Servers, (x, c) => new { Id = x.Id, Server = c, ServerCount = x.Servers.Count() })
        .OrderBy(x => x.ServerCount)
        .GroupBy(x => x.Server, (k, c) => new { Url = k, Concerns = c.Select(a => a.Id) });

    var resolved = new List<string>();
    foreach (var server in servers)
    {
        var concerns = server.Concerns.Except(resolved).ToArray();

        if (concerns.Length == 0)
            continue;

        var response = MakeRequest(server.Url, concerns);
        resolved.AddRange(response);
    }

Roger
10/18/2011 06:10 AM

Perhaps I'm being naive, but the usual meta-goal for reducing network round-trips is to optimize for the lowest overall elapsed time due to network latency effects.

That can often be achieved by issuing overlapping asynchronous parallel queries to multiple servers (not that hard to do with .NET parallel Task support these days).

Usually that gives you a total elapsed time slightly greater than the longest running query.

Most of the above-proposed methods introduce serial execution dependencies and are 'stateful'; hence they are not readily executed in parallel and will likely result in longer elapsed times than a naive 'stateless' parallel algorithm, except for some edge cases where the 'stateful' algorithm will perform better.

A 'naive' general algorithmic approach would be:

Step 1 - coalesce the queries into 'batches', with one batch per target server (querying all servers in a group where a target document can be present on any of several servers). I liked the LINQ approach to that problem.

Step 2 - If only 1 batch, issue the query synchronously (no point in adding extra overhead for 'parallel' execution!), return results when complete

Step 3 - otherwise, take the output batches from step 1 and execute the required queries as asynchronous parallel tasks (1 per server), with a 'join point' when all queries are complete. Note that execution threads need to enforce the rule that existence of a document 'trumps' any 'not-found' results for a document.

Step 4 - Post-process the aggregated query results looking for situations where the net outcome of queries for specific documents did not find the document and generate any related exceptions (since your example searches by primary Key, not finding a document on any target server is probably invalid (?)).

Step 5 - return aggregated results.
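The steps above might be sketched roughly like this. MakeRequestAsync and the batch dictionary shape are assumptions for illustration, not RavenDB's actual API, and the batching of step 1 is taken as already done:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class ParallelShardLoad
{
    // hypothetical request method; a real one would be an HTTP multi-get
    static Task<string[]> MakeRequestAsync(string url, string[] ids)
    {
        return Task.FromResult(ids); // stub: pretend every id was found
    }

    public static async Task<string[]> LoadAsync(Dictionary<string, string[]> batches)
    {
        // step 2: a single batch needs no parallel machinery
        if (batches.Count == 1)
        {
            var only = batches.First();
            return await MakeRequestAsync(only.Key, only.Value);
        }

        // step 3: one task per server, with a join point when all complete
        var results = await Task.WhenAll(
            batches.Select(b => MakeRequestAsync(b.Key, b.Value)));

        // steps 4-5: aggregate; a found document trumps a not-found result,
        // so a simple union of the per-server results is enough here
        return results.SelectMany(r => r).Distinct().ToArray();
    }
}
```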

To avoid premature optimization, implement the 'naive' algorithm first, then benchmark using a reasonably-sized set of 'random' queries and perform low-level optimizations to fix any bottlenecks identified by the benchmark. Rinse and repeat using the same benchmark data.

Valera Kolupaev
10/18/2011 05:15 PM

@Mike Not exactly, we know at least something about the distribution of documents across servers. At first, for example, we know that document A is located on server 1, 2 or 3 with a probability of 33%. After an attempt to fetch the document from server 1, it would be 50% for servers 2 and 3. This information can be applied as an entry point for the MST algorithm.

Alex
10/20/2011 01:30 AM

A bit lengthy due to the introduction of Post and PostRequest classes ...

    public class Post
    {
        public Post(string id, string content)
        {
            Id = id;
            Content = content;
        }
        public string Id { get; set; }
        public string Content { get; set; }
    }
    public class PostRequest
    {
        public PostRequest(string id)
        {
            Id = id;
        }
        public string Id { get; private set; }
        public Post CreatePost(string content)
        {
            return Result = new Post(Id, content);
        }
        public Post Result { get; private set; }
    }
    public static void ShardedGet()
    {
        string[] posts = {"posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232"};
        var shardedRequest = posts
            .Distinct()
            .Select(p => new PostRequest(p))
            .SelectMany(r => GetAppropriateUrls(r.Id)
                .Distinct()
                .Select(u => new { request = r, Url = u })
                )
            .GroupBy(m => m.Url)
            .SelectMany(g => SendRequest(g.Key, g.Select(s => s.request)));

        Console.WriteLine(string.Join("\n", shardedRequest.Select(p => p.Content)));
    }
    public static IEnumerable<Post> SendRequest(string url, IEnumerable<PostRequest> requests)
    {
        return requests
            .Where(r => r.Result == null)
            .Select(r => r.CreatePost("Bla Bla " + r.Id + " from url " + url))
            .ToArray();
    }