Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969



Challenge: probability based selection


Here is an interesting problem that I just ran into. I need to select a value from a (small) set based on percentage. That seems like it should be simple, but for some reason I can’t figure out an elegant way of doing it.

Here is my current solution:

var chances = new Page[100];
int index = 0;
foreach (var page in pages)
{
    for (int i = index; i < index + page.PercentageToShow; i++)
    {
        chances[i] = page;
    }
    index += page.PercentageToShow;
}
return chances[new Random().Next(0, 100)];

This satisfies the requirement, but it is… not as elegant as I would wish it to be.

I may have N values, for small N. There isn’t any limitation on the percentage allocation, so we may have (50%, 10%, 12%, 28%). We are assured that the numbers will always sum to 100.
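For comparison, a minimal single-pass sketch that avoids the 100-slot array: draw the random number first, then walk the pages subtracting each weight. The `Page` shape here (including the `Name` property) is illustrative, not the actual class:

```csharp
using System;
using System.Collections.Generic;

public class Page
{
    public int PercentageToShow { get; set; }
    public string Name { get; set; }
}

public static class WeightedPick
{
    // Shared instance so repeated calls don't reuse the same time-based seed.
    static readonly Random Rnd = new Random();

    public static Page Choose(IList<Page> pages)
    {
        // Percentages are assured to sum to 100, so one draw in [0, 100) suffices.
        int roll = Rnd.Next(0, 100);
        foreach (var page in pages)
        {
            if (roll < page.PercentageToShow)
                return page;
            roll -= page.PercentageToShow;
        }
        throw new InvalidOperationException("Percentages did not sum to 100");
    }
}
```

Each page owns a contiguous band of the [0, 100) range, so a page with weight 28 is returned for exactly 28 of the 100 possible rolls.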

More posts in "Challenge" series:

  1. (28 Apr 2015) What is the meaning of this change?
  2. (26 Sep 2013) Spot the bug
  3. (27 May 2013) The problem of locking down tasks…
  4. (17 Oct 2011) Minimum number of round trips
  5. (23 Aug 2011) Recent Comments with Future Posts
  6. (02 Aug 2011) Modifying execution approaches
  7. (29 Apr 2011) Stop the leaks
  8. (23 Dec 2010) This code should never hit production
  9. (17 Dec 2010) Your own ThreadLocal
  10. (03 Dec 2010) Querying relative information with RavenDB
  11. (29 Jun 2010) Find the bug
  12. (23 Jun 2010) Dynamically dynamic
  13. (28 Apr 2010) What killed the application?
  14. (19 Mar 2010) What does this code do?
  15. (04 Mar 2010) Robust enumeration over external code
  16. (16 Feb 2010) Premature optimization, and all of that…
  17. (12 Feb 2010) Efficient querying
  18. (10 Feb 2010) Find the resource leak
  19. (21 Oct 2009) Can you spot the bug?
  20. (18 Oct 2009) Why is this wrong?
  21. (17 Oct 2009) Write the check in comment
  22. (15 Sep 2009) NH Prof Exporting Reports
  23. (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
  24. (01 Sep 2009) Why isn’t select broken?
  25. (06 Aug 2009) Find the bug fixes
  26. (26 May 2009) Find the bug
  27. (14 May 2009) multi threaded test failure
  28. (11 May 2009) The regex that doesn’t match
  29. (24 Mar 2009) probability based selection
  30. (13 Mar 2009) C# Rewriting
  31. (18 Feb 2009) write a self extracting program
  32. (04 Sep 2008) Don't stop with the first DSL abstraction
  33. (02 Aug 2008) What is the problem?
  34. (28 Jul 2008) What does this code do?
  35. (26 Jul 2008) Find the bug fix
  36. (05 Jul 2008) Find the deadlock
  37. (03 Jul 2008) Find the bug
  38. (02 Jul 2008) What is wrong with this code
  39. (05 Jun 2008) why did the tests fail?
  40. (27 May 2008) Striving for better syntax
  41. (13 Apr 2008) calling generics without the generic type
  42. (12 Apr 2008) The directory tree
  43. (24 Mar 2008) Find the version
  44. (21 Jan 2008) Strongly typing weakly typed code
  45. (28 Jun 2007) Windsor Null Object Dependency Facility

Comments

Rafal

Look at roulette wheel selection for genetic algorithms. But I haven't seen any significantly better implementation.

Brian

Maybe I'm missing something, but aren't you just finding the page that has a percentage range (offset by the sum of previous percentages) in which a random number falls? If so, you should be able to remove at least one loop.

Remco Ros

how about:

var pagesperc = new Dictionary<Page, int>();
int total = 0;
foreach (var page in pages)
{
    pagesperc.Add(page, page.PercentageToShow + total);
    total += page.PercentageToShow;
}

int rnd = new Random().Next(0, 100);
foreach (var kv in pagesperc)
{
    if (kv.Value == rnd)
    {
        return kv.Key;
    }
}

Remco Ros

Sorry, comment messed up.

The dictionary should be generic {Page, int}

Brian

@Remco at that point, why don't you just put the random number selection before the first loop and do away with the second loop entirely?

Ayende Rahien

I don't understand how this is supposed to work.

Let us assume that you have 50% / 50%.

And random returns 7.

Brian

Maybe the one-loop approach (you'd have to use a different first loop than Remco) isn't more elegant; just thought you might be looking at the problem backwards.

Adam

I'm not sure about the language, but wouldn't this work?

int index = new Random().Next(0, 100);
foreach (var page in pages)
{
    if (index < page.PercentageToShow) return page;
    index -= page.PercentageToShow;
}

?

Remco Ros

@Ayende

sorry, didn't test it.

This should work:

replace the last loop with:

foreach (var kv in pagesperc)
{
    if (number <= kv.Value)
    {
        foundpage = kv.Key;
        break;
    }
}
Peter Morris

Importantly

1: It's fast enough

2: It's very easy to understand

3: It works

So I wouldn't change it.

Matt

int rand = new Random().Next(0, 100);
int percentSoFar = 0;
foreach (var page in pages)
{
    percentSoFar += page.PercentageToShow;
    if (percentSoFar >= rand) return page;
}
// error?

Remco Ros

@Matt

that works too ! nice one.

innesm

You could iterate through the pages and for each page generate a boolean value from the weighted probability of that page being selected (the 'percent' of the page over the 'percent' of all remaining pages including the page). If that value is true, return the page.

E.g. if you get to the last page, the probability will be 1.0 that you return that page.

Peter Morris

Matt, I tried that too but with 2 pages (10% and 20%) I didn't get anything near a 2 to 1 ratio, more of an 8 to 1.

innesm

Pseudocode:

percent = 100.0
for each page in pages {
    if testprobability(page.percent / percent)
        return page
    percent -= page.percent
}

bool testprobability(probability) {
    // return true if random between 0 and 1.0 is less than probability
}

sample:

page1 20% probability == 20/100
page2 50% probability == 50/80
page3 30% probability == 30/30

Brian

@Peter - 10 + 20 != 100.

"We are assured that the numbers will always match to a 100."

Yann Schwartz

This reminds me of my probability vectors from playing with Markov chains. There are several ways; when you don't have a lot of items in your array, the best way is to use some kind of sparse array:

int rnd = new Random().Next(0, 100);
pages.SkipWhile(p => p.PercentageToShow < limit).Take(1);

Note: PercentageToShow here is actually a cumulative percentage (p.Percentage plus every p.Percentage below it). If you want to stick to absolute percentages, you have to massage your collection first, with a loop, an aggregate of some sort, and a sort.

Peter Morris

Brian, 100 is irrelevant, the code is functionally identical whether they add up to 100, 30, or 99. The ratio should still be 2 to 1.

Yann Schwartz

(continued)

read rnd instead of limit

int rnd = new Random().Next(0, 100);
pages.SkipWhile(p => p.PercentageToShow < rnd).Take(1);

Your weights must be ordered ascending, and they must be cumulative weights, not absolute percentages. Going from absolutes to cumulative percentages is trivial and could be done ahead of time, once and for all.

FallenGameR

I have had a similar issue. There was a dictionary of elements (T) and there was a probability of choosing each element (double) [Dictionary<T, double> Weights]:

// Make sure that sum of probabilities = 1.0
Normalize();

// All weights are on a single tape with length 1.0
// The tape is divided into regions whose length equals their probability
// Revolve the roulette and stop somewhere on the tape
double rouletteStop = Random.NextDouble();

// Search for the element that we stopped at
T lastElement = default(T);
foreach (T key in Weights.Keys)
{
    lastElement = key;
    rouletteStop -= Weights[key];
    if (rouletteStop <= 0.0)
    {
        return lastElement;
    }
}

// We are at the end of the tape or there was a rounding error
return lastElement;

Cory Foy

I think it's because you are combining the weighted distribution with the selection of the next page.

I don't know if this is any better, but maybe something like the code below (Subtext stripped the generics out of the version I posted here; see the corrected repost).
Yann Schwartz

(me again)

It's really useful for Markov chains when you can have big probability differences between items, greater than 1%. Also, if you have a lot of items, you can implement a nice binary search to get to your value, but then you can kiss LINQ bye-bye (because of its forward-only streaming).
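Yann's binary-search idea can be sketched like this; the `BuildCumulative` and `IndexFor` names are made up for illustration, assuming integer weights turned into a cumulative array once up front:

```csharp
using System;

public static class CumulativePick
{
    // cumulative[i] = sum of weights[0..i]; e.g. 50,10,12,28 -> 50,60,72,100.
    public static int[] BuildCumulative(int[] weights)
    {
        var cumulative = new int[weights.Length];
        int total = 0;
        for (int i = 0; i < weights.Length; i++)
        {
            total += weights[i];
            cumulative[i] = total;
        }
        return cumulative;
    }

    // Binary search for the first index whose cumulative weight exceeds roll,
    // i.e. the bucket containing roll (0 <= roll < total). O(log N) per draw.
    public static int IndexFor(int[] cumulative, int roll)
    {
        int lo = 0, hi = cumulative.Length - 1;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (roll < cumulative[mid]) hi = mid;
            else lo = mid + 1;
        }
        return lo;
    }
}
```

The O(N) build happens once; each subsequent draw is just `IndexFor(cumulative, rnd.Next(total))`, which only pays off over the linear scan when N gets large.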

Cory Foy

Subtext stripped out all the generic statements. Let's try again:

public class WeightedString
{
    public string aString { get; set; }
    public int Weight { get; set; }

    public WeightedString(string s, int weight)
    {
        aString = s;
        Weight = weight;
    }
}

public class WeightedStrings : List<string>
{
    List<WeightedString> distributedStrings = new List<WeightedString>(100);
    List<WeightedString> weightedStrings = new List<WeightedString>();

    public new void Add(WeightedString ws)
    {
        weightedStrings.Add(ws);
        foreach (WeightedString weightedString in weightedStrings)
        {
            for (int i = 0; i < weightedString.Weight; i++)
            {
                distributedStrings.Add(weightedString);
            }
        }
    }

    public string GetNextString()
    {
        return distributedStrings[new Random().Next(0, 100)].aString;
    }
}

public class Weighted
{
    public string GetNextString()
    {
        WeightedStrings strings = new WeightedStrings();
        strings.Add(new WeightedString("Hello", 80));
        strings.Add(new WeightedString("World", 20));
        return strings.GetNextString();
    }
}
Peter Morris

Aha! The RND was the problem: it was returning the same value because it was being created each time. Making it static fixed it.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication45
{
    class Program
    {
        static void Main(string[] args)
        {
            var pages = new List<Page>();
            pages.Add(new Page { PercentageToShow = 10 });
            pages.Add(new Page { PercentageToShow = 90 });

            for (int i = 0; i < 10; i++)
                Console.WriteLine(" = Page " + ChoosePage(pages).PercentageToShow.ToString());
            Console.ReadLine();
        }

        static Random RND = new Random();

        public static Page ChoosePage(List<Page> pages)
        {
            int totalWeight = pages.Sum(p => p.PercentageToShow);
            int randomNumber = RND.Next(totalWeight);
            Console.Write("Chose RND " + randomNumber.ToString());
            return pages.SkipWhile(p => (randomNumber -= p.PercentageToShow) > 0).Take(1).Single();
        }
    }

    public class Page
    {
        public int PercentageToShow { get; set; }
    }
}
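Peter's static-Random fix matters because, on .NET Framework, each `new Random()` is seeded from the system clock, so instances created in quick succession produce identical sequences. A minimal sketch of the shared-instance pattern (the fixed seed, method name, and trial counts are illustrative assumptions):

```csharp
using System;

public static class RandomSeedDemo
{
    // A single shared Random, as in Peter's fix. Constructing a new Random
    // per call inside a tight loop (on .NET Framework) repeats values,
    // which skews the observed ratio badly.
    static readonly Random Rnd = new Random(42); // fixed seed for reproducibility

    // Draws `trials` times between two weights and counts hits on the first.
    public static int CountFirstBucket(int weightA, int weightB, int trials)
    {
        int hits = 0;
        for (int i = 0; i < trials; i++)
        {
            if (Rnd.Next(weightA + weightB) < weightA) hits++;
        }
        return hits;
    }
}
```

With weights 10 and 90 over many trials, the first bucket should be hit roughly 10% of the time, which is the ratio Peter was missing before sharing the instance.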

Cory Foy

Must...Test...First...

I meant to have the Add method clear the list before redistributing the elements to it. Sorry about that.

Peter Morris

Sorry, that would be better as...

public static Page ChoosePage(List<Page> pages)
{
    int totalWeight = pages.Sum(p => p.PercentageToShow);
    int randomNumber = RND.Next(totalWeight);
    return pages.SkipWhile(p => (randomNumber -= p.PercentageToShow) > 0).First();
}
Brian

@Peter - It's probably a bit of a silly argument in this case, but the fact that it works for values that don't add up to 100 is irrelevant. The spec specifically stated that they would sum to 100.

James Curran

The first question is "Why are you rebuilding the chance[] array every time?"

If the answer is "Because the PercentageToShow values may change between calls", then you are better off with some variant of the code offered by Adam/Matt/Peter. They are O(N) versus your O(100) (where N must be <100, or the algorithm won't work).

However, if the answer is "I'm not. It just looks that way in the snippet", then you're probably better off with what you are doing. It's O(1) with a presumably amortizable O(100) one-time set-up.

Mathias

Matt's solution looks correct to me, and is the standard approach used in simulation. You can even optimize it a bit, by sorting your pages by decreasing probability: starting with the highest probability will likely terminate your loop earlier. Most likely irrelevant, given the size of the problem, though!

Avish

Matt's solution is pretty much what I had in mind, so +1 there.

Peter Morris

Brian. Your original statement implied that my code was wrong because it doesn't assume the percentages add up to 100. My point is that it doesn't matter what the percentages add up to, my routine will work with the correct ratio anyway so there is no point in restricting it.

Besides, when a customer says "Always" what they actually mean is "Mostly", and when they say "Never" they mean "Hardly ever".

Comments have been closed on this topic.