Monday, August 5, 2013

3.7.4: Partitioning

The point of making a generic memoize is to be able to use it with different algorithms. The partition algorithm from 1.8: Partitioning has a different signature than the Fibonacci algorithm, yet we can use the memoizer unchanged. First the type definitions; the only difference is the signature of Algofunc:
type (
 Keyfunc  func(...interface{}) string
 Memfunc  func(...interface{}) interface{}
 Algofunc func(int, []int) []int
)

memoize is exactly the same as before:
func memoize(f Memfunc, kf Keyfunc) Memfunc {
 if kf == nil {
  kf = func(v ...interface{}) string {
   // Very naive keygen func, simply use string representation
   // of every param
   var buffer bytes.Buffer
   for _, value := range v {
    buffer.WriteString(fmt.Sprint(value))
    buffer.WriteString(",")
   }
   return buffer.String()
  }
 }

 cache := make(map[string]interface{})
 return func(v ...interface{}) interface{} {
  cachekey := kf(v...)
  if r, ok := cache[cachekey]; ok {
   // Hit, return previous result
   return r
  }
  // Miss, call calculator
  r := f(v...)
  cache[cachekey] = r
  return r
 }
}

The only change we need to make to findshare is to turn it into a constructor that accepts a Memfunc and returns the actual algorithm. It is the same pattern as we used for fib:
func findshare(mf Memfunc) Algofunc {

 f := func(target int, treasures []int) []int {
  if target == 0 {
   return make([]int, 0)
  }
  if target < 0 || len(treasures) == 0 {
   return nil
  }
  first, rest := treasures[0], treasures[1:]
  solution := toIntSlice(mf(target-first, rest))
  if solution != nil {
   // Prepend via a fresh slice; appending to a sub-slice of
   // treasures would clobber the shared backing array
   return append([]int{first}, solution...)
  }
  return toIntSlice(mf(target, rest))
 }

 if mf == nil {
  mf = func(v ...interface{}) interface{} {
   return f(toInt(v[0]), toIntSlice(v[1]))
  }
 }
 return f
}

In the main function we start by doing the unmemoized calculation:
func main() {
 treasure := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
 target := 209

 rawfindshare := findshare(nil)
 duration, res := timeThis(rawfindshare, target, treasure)
 fmt.Printf("Raw partition %v into %d: %d at %s\n", treasure, target, res, duration)

Then we run the memoized calculation on the same data, building the same typecasting structure as before:
 mfindshare := func() Algofunc {
  // Forward declaration of calculator
  var calcshare Algofunc

  // Memoizer
  memfindshare := memoize(func(values ...interface{}) interface{} {
   return calcshare(toInt(values[0]), toIntSlice(values[1]))
  }, nil)

  // Construct the calculator, using the memoizer defined above
  calcshare = findshare(memfindshare)

  // Return a function that will do the necessary typecasting
  return func(target int, treasures []int) []int {
   return toIntSlice(memfindshare(target, treasures))
  }
 }()

 duration, res = timeThis(mfindshare, target, treasure)
 fmt.Printf("Memoized partition %v into %d: %d at %s\n", treasure, target, res, duration)

}

The speedup we gain from memoization is about 96% for this example:
Raw partition [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] into 209: [2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] at 558.018056ms
Memoized partition [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] into 209: [2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20] at 18.783834ms

Get the source at GitHub.