Monday, October 7, 2013

5.4.3: Explicit Stacks

So far the strategy has been to identify what the state is that the call stack is managing for us and build a loop around that state to eliminate the function calls. But sometimes the state is complex enough that we need to bring in the heavy machinery.

``````func fib1(n int) int {
    if n < 2 {
        return n
    }
    return fib1(n-2) + fib1(n-1)
}
``````

The Fibonacci function is a tricky one when it comes to eliminating recursion. First we have to account for the final computation in the function, the adding of the two values. But we also have to account for the fact that the function calls itself recursively twice, with two different values. So this is not a linear recursive calculation but an execution tree. There is some branching logic we have to account for if we wish to eliminate the recursion.

A sample call of fib(6) looks like this:

``````fib(6)
    fib(4)
        fib(2)
            fib(0)
            fib(1)
        fib(3)
            fib(1)
            fib(2)
                fib(0)
                fib(1)
    fib(5)
        fib(3)
            fib(1)
            fib(2)
                fib(0)
                fib(1)
        fib(4)
            fib(2)
                fib(0)
                fib(1)
            fib(3)
                fib(1)
                fib(2)
                    fib(0)
                    fib(1)
``````

Flattening this to something that can be run in a for loop will require us to do branch management along with stack management.

``````func fib2(n int) int {
    for {
        if n < 2 {
            return n
        }
        s1 := fib2(n - 2)
        s2 := fib2(n - 1)
        return s1 + s2
    }
}
``````

First we make some cosmetic changes to frame the problem a bit. The for loop is what will replace the recursion, and breaking up the calculation points to a state consisting of three variables: n, s1 and s2. We will use a stack to track these states along with the branch control.

``````type state3 struct {
    BRANCH, s1, s2, n int
}``````
``````
func fib3(n int) int {
    var s1, s2, retval int
    BRANCH := 0
    var STACK list.List
    for {
        if n < 2 {
            retval = n
        } else {
            if BRANCH == 0 {
                STACK.PushBack(&state3{BRANCH, s1, s2, n})
                n -= 2
                BRANCH = 0
                continue
            } else if BRANCH == 1 {
                s1 = retval
                STACK.PushBack(&state3{BRANCH, s1, s2, n})
                n -= 1
                BRANCH = 0
                continue
            } else if BRANCH == 2 {
                s2 = retval
                retval = s1 + s2
            }
        }
        if STACK.Len() == 0 {
            return retval
        }
        s := STACK.Back().Value.(*state3)
        STACK.Remove(STACK.Back())
        BRANCH, s1, s2, n = s.BRANCH, s.s1, s.s2, s.n
        BRANCH++
    }
}
``````

The key part here is the value of BRANCH: 0 corresponds to fib(n-2), 1 to fib(n-1) and 2 to s1+s2. If we compare with the fib(6) tree illustrated above, BRANCH will be 0 for the first three iterations and n will go 6, 4, 2 as the loop continues from the BRANCH == 0 block.

On the next iteration n is 0 and we drop to the block below the branching logic. There the stack is popped, bringing back the values for n == 2 (restoring n to 2), and BRANCH is incremented to 1.

So on the next iteration we hit the BRANCH == 1 block and n becomes 1. As we continue we hit n < 2 again and pop the state pushed from the BRANCH == 1 block, bringing BRANCH to 2. As we loop back we hit the BRANCH == 2 block and perform the addition, storing the result in retval. Now we pop the n == 4 state from the stack. This brings us to the BRANCH == 1 block again and brings n down to 3.

This sequence corresponds to fib(6) -> fib(4) -> fib(2) -> fib(0) -> fib(1) -> fib(3). The tree's branching logic has been flattened in such a way that the left child is always travelled first and then we go down the right branch once we pop back up. This is called pre-order tree traversal. The rest of the tree is traversed in the order fib(1) -> fib(2) -> fib(0) -> fib(1) -> fib(5) -> fib(3) -> fib(1) -> fib(2) -> fib(0) -> fib(1) -> fib(4) -> fib(2) -> fib(0) -> fib(1) -> fib(3) -> fib(1) -> fib(2) -> fib(0) -> fib(1), completing the same calculation as the recursive version.

``````type state4 struct {
    BRANCH, s1, n int
}
``````
``````func fib4(n int) int {
    var s1, retval int
    BRANCH := 0
    var STACK list.List
    for {
        if n < 2 {
            retval = n
        } else {
            if BRANCH == 0 {
                for n >= 2 {
                    STACK.PushBack(&state4{BRANCH, 0, n})
                    n -= 2
                }
                retval = n
            } else if BRANCH == 1 {
                s1 = retval
                STACK.PushBack(&state4{BRANCH, retval, n})
                n -= 1
                BRANCH = 0
                continue
            } else if BRANCH == 2 {
                retval += s1
            }
        }
        if STACK.Len() == 0 {
            return retval
        }
        s := STACK.Back().Value.(*state4)
        STACK.Remove(STACK.Back())
        BRANCH, s1, n = s.BRANCH, s.s1, s.n
        BRANCH++
    }
}
``````

The hard part is over and all that is left is some code cleanup and optimization. The BRANCH == 0 case can be given its own inner for loop that walks the entire branch from n down to 0 or 1 without involving the outer loop. So once we hit it with e.g. n == 6, the states for n == 6, n == 4 and n == 2 get pushed on the stack straight away before we continue with the main loop.

Furthermore, s2 is not really needed. Its value is always either 0 (when BRANCH == 0) or the same as retval (when BRANCH == 1).

``````func fib5(n int) int {
    var s1, retval int
    BRANCH := 0
    var STACK list.List
    for {
        if n < 2 {
            retval = n
        } else {
            if BRANCH == 0 {
                for n >= 2 {
                    STACK.PushBack(&state4{1, 0, n})
                    n -= 1
                }
                retval = n
            } else if BRANCH == 1 {
                s1 = retval
                STACK.PushBack(&state4{2, retval, n})
                n -= 2
                BRANCH = 0
                continue
            } else if BRANCH == 2 {
                retval += s1
            }
        }
        if STACK.Len() == 0 {
            return retval
        }
        s := STACK.Back().Value.(*state4)
        STACK.Remove(STACK.Back())
        BRANCH, s1, n = s.BRANCH, s.s1, s.n
    }
}
``````

As a final optimization, already reflected in the listing above, we swap the logic for BRANCH == 0 and BRANCH == 1. It doesn't really matter whether we descend the execution tree left child first or right child first; the final result is the same. After all, a+b is equal to b+a. But the inner loop for the left (n-2) path is executed n/2 times while the inner loop for the right (n-1) path is executed n times. If the optimization is beneficial for n/2 pushes, it should be even better for n.

Furthermore, rather than pushing a branch value on the stack and then incrementing it by one when we get it back, we can simply push the already-incremented value (the literal 1 and 2 in the PushBack calls above) and get rid of the trailing BRANCH++ statement.

After all this hard work, the results are a bit disappointing. The optimized version is roughly 70 times slower than the recursive version.

``````Fib1: fib(20) = 6765 at 83.407us
Fib5: fib(20) = 6765 at 6.380718ms
Fib1: fib(20) = 6765 at 83.304us
Fib6: fib(20) = 6765 at 5.68734ms
Fib1: fib(20) = 6765 at 83.497us
Fib6: fib(20) = 6765 at 5.682758ms
``````

So apparently Go is doing a good job here, at least better than we can accomplish.
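Part of the gap is probably not the algorithm but container/list: every PushBack heap-allocates a list node and boxes the state in an interface value. A stack kept in a plain slice of concrete structs avoids both. Here is my own sketch of that idea (not code from the original post), mirroring fib5's control flow:

```go
package main

import "fmt"

type frame struct {
	branch, s1, n int
}

// fibStack follows the same branch-dispatch logic as fib5, but the explicit
// stack lives in a slice of concrete structs: no per-node allocation, no
// interface boxing, and pop is just re-slicing.
func fibStack(n int) int {
	var s1, retval int
	branch := 0
	stack := make([]frame, 0, n)
	for {
		if n < 2 {
			retval = n
		} else if branch == 0 {
			for n >= 2 { // walk the n-1 path, pushing resume points
				stack = append(stack, frame{1, 0, n})
				n--
			}
			retval = n
		} else if branch == 1 {
			s1 = retval
			stack = append(stack, frame{2, retval, n})
			n -= 2
			branch = 0
			continue
		} else {
			retval += s1
		}
		if len(stack) == 0 {
			return retval
		}
		top := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		branch, s1, n = top.branch, top.s1, top.n
	}
}

func main() {
	fmt.Println(fibStack(20)) // 6765
}
```

Whether this closes the gap entirely is a benchmarking question, but it removes the most obvious allocation cost of the container/list version.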

Get the source at GitHub.

Sunday, October 6, 2013

5.4.2: Creating Tail Calls

If a recursive function doesn't have a tail call to eliminate, we can often create it. And then promptly eliminate it. A truly creative destruction.

The binary function accepts an integer and returns a string that is the binary representation of the integer. E.g. binary(10) returns the string "1010".

``````func binary1(n int) string {
    if n <= 1 {
        return strconv.Itoa(n)
    }
    k := int(n / 2)
    b := int(math.Mod(float64(n), 2))
    return binary1(k) + strconv.Itoa(b)
}
``````

Even though the call is at the tail of the function, this is not a tail call, since the last computation performed in the function is the concatenation of two strings: the return value of the recursive call and the string conversion of b.

``````func binary2(n int, retval string) string {
    if n < 1 {
        return retval
    }
    k := int(n / 2)
    b := int(math.Mod(float64(n), 2))
    retval = strconv.Itoa(b) + retval
    return binary2(k, retval)
}
``````

To eliminate the final computation we introduce a new state variable, retval, which will contain the state of the computation at all times and be passed along the recursive calls. Now the final computation is the recursive call which is now a tail call that we can eliminate. The cost is that the computation takes one extra recursive call. But that is acceptable since that one extra call will be turned into one extra iteration.

``````func binary3(n int) string {
    var retval string
    for n >= 1 {
        b := int(math.Mod(float64(n), 2))
        retval = strconv.Itoa(b) + retval
        n = int(n / 2)
    }
    return retval
}
``````

The recursive function now becomes a simple function with a for loop, just like the gcd example. n >= 1 guards the loop that builds up the string in retval, which we return once the loop exits.

``````Binary1: binary(10) ="1010" at 13.448us
Binary3: binary(10) ="1010" at 2.461us
Binary1: binary(10) ="1010" at 8.896us
Binary3: binary(10) ="1010" at 1.581us
Binary1: binary(10) ="1010" at 12.516us
Binary3: binary(10) ="1010" at 2.02us
``````

Again, the loop version of the computation destroys the recursive version. And that is even though we are using a somewhat inefficient (and not recommended by the Go authors) method of string concatenation.
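For the record, here is my own sketch of a friendlier loop: the float round-trips through math.Mod are replaced by integer bit operations, and the digits are collected in a byte slice instead of repeatedly prepending strings. (The standard library's strconv.FormatInt(int64(n), 2) does the same job directly.)

```go
package main

import (
	"fmt"
	"strconv"
)

// binary4 keeps binary3's loop shape but uses n&1 and n>>1 instead of
// math.Mod and division, building the digits least-significant first and
// reversing once at the end.
func binary4(n int) string {
	if n == 0 {
		return "0"
	}
	buf := make([]byte, 0, 8)
	for n >= 1 {
		buf = append(buf, byte('0'+n&1)) // low bit first
		n >>= 1
	}
	// reverse: the digits were produced least-significant first
	for i, j := 0, len(buf)-1; i < j; i, j = i+1, j-1 {
		buf[i], buf[j] = buf[j], buf[i]
	}
	return string(buf)
}

func main() {
	fmt.Println(binary4(10))              // "1010"
	fmt.Println(strconv.FormatInt(10, 2)) // stdlib equivalent: "1010"
}
```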

The strategy can be summarized as: identify the tail computation that prevents tail-call elimination, store that computation in a variable that is passed along with the recursion, isolate the tail call, and finally eliminate it.
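The summarized strategy is easy to see on an even smaller example. This is my own illustration, not from the book: summing a slice, in the three stages just described.

```go
package main

import "fmt"

// sum1 is not a tail call: the addition happens after the recursion returns.
func sum1(xs []int) int {
	if len(xs) == 0 {
		return 0
	}
	return xs[0] + sum1(xs[1:])
}

// sum2 moves the addition into an accumulator parameter, turning the
// recursive call into a true tail call.
func sum2(xs []int, acc int) int {
	if len(xs) == 0 {
		return acc
	}
	return sum2(xs[1:], acc+xs[0])
}

// sum3 eliminates the tail call with a loop over the same accumulator.
func sum3(xs []int) int {
	acc := 0
	for _, x := range xs {
		acc += x
	}
	return acc
}

func main() {
	xs := []int{1, 2, 3, 4}
	fmt.Println(sum1(xs), sum2(xs, 0), sum3(xs)) // 10 10 10
}
```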

The source is at GitHub. Also, another example rewriting a factorial function in the same way is available. The strategy is exactly the same so I won't go further into it.

Saturday, October 5, 2013

5.4.1: Someone Else's Problem

A power set P of a set S is a collection of sets constructed in such a way that each set Sn contains a different combination of the elements of S and P contains all such distinct sets Sn. (The variant built here omits the empty combination, so P has 2^n - 1 members.)

``````P(S={s1, s2, s3}) = [
S1={s1, s2, s3},
S2={s1, s2},
S3={s1, s3},
S4={s1},
S5={s2, s3},
S6={s2},
S7={s3}
]
``````

The Perl code in HOP is taken from the book Mastering Algorithms with Perl and the power set is constructed as a hash of hashes. It is full of weird perlisms and I even had to seek the help of MJD (thanks!) to understand it. Still I can't say I fully understand it and why it is implemented the way it is implemented. If anyone is interested, the original code is here.
``````type set map[string]string
type powerset []set
``````

For me the most natural way to construct this is to represent the power set as a slice of maps. Perl's magic hash foo is hard to grok and I'm not sure what the benefit of doing it that way is. Feel free to tell me how my version is way worse Big-Oh wise!

``````func keysAndValues(s set) ([]string, []string) {
    var ks []string
    var vs []string
    for k := range s {
        ks = append(ks, k)
        vs = append(vs, s[k])
    }
    return ks, vs
}
``````

Perl has keys and values to retrieve the keys and values from hashes. We need something similar to retrieve them from maps. Doing it in a single function is beneficial since then we can do it in a single pass.

The signature of the powerset function is powerset_recurse(s set, p powerset, keys, values []string, n, i int) powerset. s is our set S that we wish to use as a seed for the powerset. All of the other parameters are the state that needs to be passed from each invocation to the next. This is something that the function maintains and the caller does not need to know about. In Perl we simply call the function without these parameters:

``````powerset_recurse($set)
``````
``````

Go has no default parameters which forces us to provide nil values for all the state parameters:

``````powerset_recurse(s, nil, nil, nil, 0, 0)
``````

This is clumsy and ugly. To avoid this, we can wrap the recursive function in another function that provides the nicer interface to the caller:

``````func powerset_recurse(s set) powerset {
    var f func(s set, p powerset, keys, values []string, n, i int) powerset
    f = func(s set, p powerset, keys, values []string, n, i int) powerset {
        // function body
        return f(s, p, keys, values, n, i+1)
    }
    return f(s, nil, nil, nil, 0, 0)
}
``````

Now we can call the function and start the recursive computation in the same way as in Perl.

``````    if p == nil {
        keys, values = keysAndValues(s)
        n = len(keys)
        p = make(powerset, int(math.Pow(2, float64(n)))-1)
        i = 1
    }
    if i-1 == n {
        return p
    }
``````

The first part of the body simply sets up the state variables for the recursion if the power set that we wish to create is nil. We retrieve a slice of keys and values from the source set and make the power set contain (2 to the power of n) - 1 values, n being the number of keys in the source set. i is our counter; it represents the number of recursions we've executed. Once i-1 equals n the power set is ready and we stop. Naturally, this initialization code could simply live outside the recursive function, inside the wrapping function we created to improve the interface. But it is in the recursive body in the Perl version so I'll do the same.

``````    c := int(math.Pow(2, float64(i-1)))
    for j := 0; j < c; j++ {
        ss := make(set, i)
        for k := 0; k < i; k++ {
            flag := 1 << uint(k)
            if (c+j)&flag == flag {
                ss[keys[k]] = values[k]
            }
        }
        p[c-1+j] = ss
    }
``````

This is where we generate each Sn in the powerset and where the Perl code contains much of its black magic. My strategy is to use bitmasks to generate the sets.

c is the number of sets to generate in this invocation, 1 for the first, 2 for the second, and then 4, 8, 16, etc., as needed. The invocation counter i controls the size of each set, 1 for the first time, 2 then, and 3, 4, 5, etc.

In the inner loop, we copy keys and values from the source set to the set Sn. The flag for value k is the bit mask that contains 1 at position k (counting from the least significant bit) and 0 elsewhere, and c+j is the mask for set Sn. A mask such as 1101 (binary) means the set contains keys and values 1, 3 and 4, while a mask such as 001011 means the set contains keys and values 1, 2 and 4.

After this loop is done we assign the set Sn to the correct location in the powerset. The complete function is as follows:

``````func powerset_recurse(s set) powerset {
    var f func(s set, p powerset, keys, values []string, n, i int) powerset
    f = func(s set, p powerset, keys, values []string, n, i int) powerset {
        if p == nil {
            keys, values = keysAndValues(s)
            n = len(keys)
            p = make(powerset, int(math.Pow(2, float64(n)))-1)
            i = 1
        }
        if i-1 == n {
            return p
        }
        c := int(math.Pow(2, float64(i-1)))
        for j := 0; j < c; j++ {
            ss := make(set, i)
            for k := 0; k < i; k++ {
                flag := 1 << uint(k)
                if (c+j)&flag == flag {
                    ss[keys[k]] = values[k]
                }
            }
            p[c-1+j] = ss
        }
        return f(s, p, keys, values, n, i+1)
    }
    return f(s, nil, nil, nil, 0, 0)
}
``````

To remove the recursion we simply add a for loop around the latter part of the body. And now we can drop the inner function as well, making the whole thing a lot simpler:

``````func powerset_loop(s set) powerset {
    keys, values := keysAndValues(s)
    n := len(keys)
    p := make(powerset, int(math.Pow(2, float64(n)))-1)
    for i := 1; i <= n; i++ {
        c := int(math.Pow(2, float64(i-1)))
        for j := 0; j < c; j++ {
            ss := make(set, i)
            for k := 0; k < i; k++ {
                flag := 1 << uint(k)
                if (c+j)&flag == flag {
                    ss[keys[k]] = values[k]
                }
            }
            p[c-1+j] = ss
        }
    }
    return p
}
``````

The benchmarks point to something around a 40% improvement in execution speed over the recursive version:

``````Powerset recursive at 14074
Powerset looping at 8858
Powerset recursive at 15000
Powerset looping at 10000
Powerset recursive at 14138
Powerset looping at 11909
``````

Get the source at GitHub.

Friday, October 4, 2013

5.4.1: Tail-Call Elimination

One of the reasons recursive solutions often look so nice is the automatic stack management the computer performs for us. Every time we call a function, the parent function's parameters get saved on the stack and then popped back when the function returns and the parent regains control. Furthermore, any return value of the function gets pushed on the stack at the tail end of the function and then popped back in the parent function. As nice as such automatic stack management is, it is not free. Pushing and popping takes time.

A tail call occurs when a function ends by calling another function (such as itself) and then immediately returning the value of that call without any additional computation. A recursive function R that ends up three calls deep, R1 -> R2 -> R3, will push and pop the same return value three times as the call stack unwinds.

Tail-call elimination is the process of removing this unnecessary pushing and popping, returning the value of R3 straight to the caller of R1 without going through R1 and R2.

Many compilers and runtimes provide this optimization automatically. As I write this, Go does not. But we can usually refactor a recursive tail-call function into a simple for loop without any recursion, thereby removing the tail call.
``````func gcd(m, n int) int {
    if n == 0 {
        return m
    }
    m = int(math.Mod(float64(m), float64(n)))
    return gcd(n, m)
}
``````

The gcd function computes the greatest common divisor of m and n using Euclid's algorithm. At each call we make m equal to n and n equal to the modulus of m and n. Once n hits 0 we are done and return m as the final result. Notice that no additional computation is performed on this value of m; it is simply passed back up the entire call stack as the result of the original computation.

``````func gcd_notail(m, n int) int {
    for n != 0 {
        m, n = n, int(math.Mod(float64(m), float64(n)))
    }
    return m
}
``````

To eliminate the tail call we rewrite the function as a for loop. n is the condition of the loop, just as it was the condition of the recursion; once it hits zero we are done. And at each iteration m becomes n and n becomes the modulus of m and n.
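As an aside (my own sketch, not from the book): for the non-negative integers used here, Go's % operator does the job of math.Mod without the float conversions.

```go
package main

import "fmt"

// gcdInt is gcd_notail with the float round-trip replaced by the integer
// remainder operator. For gcd(48, 20) the loop runs
// (48,20) -> (20,8) -> (8,4) -> (4,0) and returns 4.
func gcdInt(m, n int) int {
	for n != 0 {
		m, n = n, m%n
	}
	return m
}

func main() {
	fmt.Println(gcdInt(48, 20)) // 4
}
```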

``````GCD Recursive of 48 and 20: 4 at 3.298us
GCD NoTail of 48 and 20: 4 at 431ns
GCD Recursive of 48 and 20: 4 at 3.55us
GCD NoTail of 48 and 20: 4 at 624ns
GCD Recursive of 48 and 20: 4 at 4.669us
GCD NoTail of 48 and 20: 4 at 1.05us
``````

As we can see, the non-recursive version of GCD is always faster than the recursive one, by a factor of 4 to 8 in these three samples.

Get the source at GitHub.

Thursday, October 3, 2013

5.3: A Generic Search Iterator

We can encapsulate the core strategy of the previous example in a code structure that we can then reuse to create iterators such as the IntPartition iterator. The idea is to extract the code that revolves around the agenda and the iteration. To use this very generic iteration, we need to supply a function that will generate new items for the agenda from the current item.

``type ChildrenFunc func(interface{}) []interface{}``
``````
type depthfirst struct {
    children ChildrenFunc
    agenda   []interface{}
    node     interface{}
    err      error
}
``````

The iterator consists of the agenda, the current value held in node, and a function to generate new items.

``````func Dfs(root interface{}, children ChildrenFunc) i.Forward {
    dfs := depthfirst{children: children}
    dfs.agenda = append(dfs.agenda, root)
    dfs.Next()
    return &dfs
}
``````

To create the iterator we set the agenda to the root item, call Next() to generate the first value and return.

``````func (dfs *depthfirst) AtEnd() bool {
    return len(dfs.agenda) == 0
}``````
``````
func (dfs *depthfirst) Value() interface{} {
    if dfs.AtEnd() {
        dfs.err = fmt.Errorf("Value: Beyond end")
        return nil
    }
    return dfs.node
}
``````

node is the current value of the iteration and the iteration is over once the agenda is empty.

``````func (dfs *depthfirst) Next() error {
    if dfs.AtEnd() {
        dfs.err = fmt.Errorf("Next: Beyond end")
        return dfs.err
    }
    dfs.node, dfs.agenda = dfs.agenda[0], dfs.agenda[1:]
    dfs.agenda = append(dfs.agenda, dfs.children(dfs.node)...)
    return nil
}
``````

The iteration should be familiar: pop the head of the agenda, call the children function we received to generate the next batch of items from the current item, and append them to the agenda.

The IntPartition iterator is now a short piece of code that focuses on solving only the problem of partitioning an integer; all the iteration bookkeeping is gone. The only new thing here is, rather than adding items to the agenda for further iteration, we create a temporary slice and add our generated nodes to it. This slice is the list of children we return to the Dfs iterator.

``````func IntPartition(n int) i.Forward {
    return Dfs([]int{n}, func(node interface{}) []interface{} {
        items, _ := node.([]int)
        largest, rest := items[0], items[1:]
        min, max := 1, largest/2
        if len(rest) > 0 {
            min = rest[0]
        }
        next := make([]interface{}, 0)
        for r := igen.Range(min, max+1); !r.AtEnd(); r.Next() {
            next = append(next, append([]int{largest - r.Int(), r.Int()}, rest...))
        }
        return next
    })
}
``````

Now it's 15 lines of code rather than 50. Yay for iterator building blocks.
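The generic-agenda idea also works without the iterator library. Here is a self-contained sketch (names are mine; the real Dfs returns an i.Forward iterator rather than a slice) that enumerates all bit strings up to length 3 by supplying a children function:

```go
package main

import "fmt"

type childrenFunc func(node string) []string

// search runs the same agenda loop as the Dfs iterator: pop the head, emit
// it, append its children to the back, repeat until the agenda is empty.
func search(root string, children childrenFunc) []string {
	agenda := []string{root}
	var out []string
	for len(agenda) > 0 {
		node := agenda[0]
		agenda = agenda[1:]
		out = append(out, node)
		agenda = append(agenda, children(node)...)
	}
	return out
}

// bits3 generates the two one-bit extensions of a node, stopping at length 3.
func bits3(node string) []string {
	if len(node) >= 3 {
		return nil
	}
	return []string{node + "0", node + "1"}
}

func main() {
	fmt.Println(search("", bits3))
}
```

With this children function the agenda yields the empty string, then 0, 1, 00, 01, 10, 11, and finally the eight length-3 strings: 15 nodes in all.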

Get the source at GitHub. The Dfs iterator is also available in the iterator library.

Wednesday, October 2, 2013

5.2: How to Convert a Recursive Function to an Iterator

The trick to turning a recursive function into an iterator is to maintain an agenda of todo items, where each item is the state of one invocation of the recursive version. The agenda replaces the stack management we get for free when we use recursion.

The integer partition problem is to break an integer into all possible integer components that sum to the original integer. E.g. 6 breaks into:

``````6
5 1
4 1 1
3 1 1 1
2 1 1 1 1
1 1 1 1 1 1
2 2 1 1
3 2 1
4 2
2 2 2
3 3
``````

A recursive solution to this problem looks a bit like this:

``````func IntPartitionRecursive(item []int) {
    fmt.Println(item)
    largest, rest := item[0], item[1:]
    min, max := 1, largest/2
    if len(rest) > 0 {
        min = rest[0]
    }
    for r := igen.Range(min, max+1); !r.AtEnd(); r.Next() {
        IntPartitionRecursive(append([]int{largest - r.Int(), r.Int()}, rest...))
    }
}
``````

For every slice, starting in this case with [6], we print out the slice. We then split it into a first element (6) and rest ([]) and set the min (1) and max (3) values. We then loop through the range [1,4) and call the function again 3 times with the values [5,1], [4,2] and [3,3]. The call with [5,1] in turn results in calls with [4,1,1] and [3,2,1]. This strategy of looping only to largest/2 prunes the solution tree so that we don't return solutions such as [2,4], since that's the same as [4,2]. All of our solutions are in decreasing-or-equal order.

To turn this into an iterative solution, we need to identify the essential state of the iteration. In this case, it is the slice of integers that we are generating new solutions from (e.g. the state [5,1] results in new states [4,1,1] and [3,2,1]). One change from the recursive solution is that the iterative version will return the list in sorted order, like this:

``````6
5 1
4 2
4 1 1
3 3
3 2 1
3 1 1 1
2 2 2
2 2 1 1
2 1 1 1 1
``````

To produce the sorted version, we leverage the sort package provided by Go. We need to define three methods on our to-be-sorted collection, Len(), Swap() and Less().

``type items [][]int``
``````
func (it items) Len() int {
    return len(it)
}
``````
``````func (it items) Swap(i, j int) {
    it[i], it[j] = it[j], it[i]
}
``````
``````func (it items) Less(i, j int) bool {
    a, b := it[i], it[j]
    for k := 0; k < len(a); k++ {
        if a[k] < b[k] {
            return false
        } else if a[k] > b[k] {
            return true
        }
    }
    // we will never get here since we've already pruned the
    // other half of the solution tree
    return true
}
``````

Go's sort package is geared to sorting a container from the smallest element to the largest. We want reverse order, so we flip the comparison: return true if item i is larger than item j, and vice versa.

``````type intpartition struct {
    agenda items
    item   []int
    err    error
}
``````

agenda is the state of the iteration. item is the current solution that Value() will return.

``````func IntPartition(n int) i.Forward {
    var p intpartition
    item := make([]int, 1)
    item[0] = n
    p.agenda = append(p.agenda, item)
    p.Next()
    return &p
}
``````

To start the iteration we simply construct a slice from the supplied starting value and call the Next() method.

``````func (p *intpartition) AtEnd() bool {
    return len(p.agenda) == 0
}

func (p *intpartition) Value() interface{} {
    if p.AtEnd() {
        p.err = fmt.Errorf("Value: Beyond end")
        return nil
    }
    return p.item
}
``````

An empty agenda means the iteration is over.

``````func (p *intpartition) Next() error {
    if p.AtEnd() {
        p.err = fmt.Errorf("Calling next AtEnd")
        return p.err
    }
    p.item, p.agenda = p.agenda[0], p.agenda[1:]
    largest, rest := p.item[0], p.item[1:]
    min, max := 1, largest/2
    if len(rest) > 0 {
        min = rest[0]
    }
    for r := igen.Range(min, max+1); !r.AtEnd(); r.Next() {
        p.agenda = append(p.agenda, append([]int{largest - r.Int(), r.Int()}, rest...))
    }
    sort.Sort(p.agenda)
    return nil
}
``````

The Next() method starts by popping the first item from the agenda and assigning it to item. Then we break the item into two parts, largest and rest, and proceed exactly like the recursive version, except we push new solutions onto the agenda where the recursive version would call the function again. Lastly, we sort the agenda.

``````    itr := IntPartition(n)
    for ; !itr.AtEnd(); itr.Next() {
        fmt.Println(itr.Value())
    }
``````

Looping through the solution space is trivial.

A version that only prints out sums with distinct components, i.e. where no number repeats itself, is easy. Simply leverage the i.Filter function and write a filter for the series of integers.

``````func distinct(itr i.Iterator) bool {
    item, _ := itr.Value().([]int)
    for i := 1; i < len(item); i++ {
        if item[i-1] == item[i] {
            return false
        }
    }
    return true
}
``````

Since the list is in descending order we simply check whether the current component is equal to the previous one; if so we return false. If we get through all components without this happening, we return true.
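The same check works on plain slices, outside the iterator machinery. A standalone sketch (my own, not from the post):

```go
package main

import "fmt"

// distinctInts reports whether a descending-ordered partition has no
// repeated component. Because the slice is sorted, any repeat must be
// adjacent, so a single pass over neighbours suffices.
func distinctInts(item []int) bool {
	for i := 1; i < len(item); i++ {
		if item[i-1] == item[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(distinctInts([]int{3, 2, 1})) // true
	fmt.Println(distinctInts([]int{2, 2, 2})) // false
}
```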

``````    itr = i.Filter(distinct, IntPartition(n))
    for ; !itr.AtEnd(); itr.Next() {
        fmt.Println(itr.Value())
    }
``````

Get the source at GitHub.

Tuesday, October 1, 2013

5.1: The Partition Problem Revisited

The recursive partition algorithm from 1.8 solved for a single solution when searching a list of treasures for a share of a certain value. The iterator version will return a single solution on each iteration; successive iterations will return new solutions until all possible solutions have been found and we reach AtEnd().

``````type entry struct {
    target      int
    pool, share []int
}``````
``````
type partition struct {
    todo  *list.List
    err   error
    share []int
}
``````

The entry item contains the current target, the pool of items to consider, and the share that makes up a possible solution. todo is the list of entries to inspect.

``````func Partition(target int, treasures []int) i.Forward {
    p := partition{}
    p.todo = list.New()
    if target == 0 {
        p.todo.PushBack(&entry{target: target, pool: make([]int, 0)})
    } else {
        p.todo.PushBack(&entry{target: target, pool: treasures})
    }
    p.Next()
    return &p
}
``````

To construct the iterator we create the first entry for the todo list from the target and treasures parameters. If target is zero we put an empty list in the pool so the iterator will stop right after the first iteration. Then we execute the first iteration.

``````func (p *partition) AtEnd() bool {
    return p.todo.Len() == 0
}``````
``````
func (p *partition) Value() interface{} {
    if p.AtEnd() {
        p.err = fmt.Errorf("Value: Beyond end")
        return nil
    }
    return p.share
}
``````

An empty todo list indicates that we are at the end of the iteration. The share is the current value.

``````func (p *partition) Next() error {
    if p.AtEnd() {
        p.err = fmt.Errorf("Next: Beyond end")
        return p.err
    }
    for !p.AtEnd() {
        e, _ := p.todo.Front().Value.(*entry)
        p.todo.Remove(p.todo.Front())
        if len(e.pool) == 0 {
            continue
        }
        first, rest := e.pool[0], e.pool[1:]
        if len(rest) > 0 {
            p.todo.PushBack(&entry{e.target, rest, e.share})
        }
        if e.target == first {
            p.share = append(e.share, first)
            return nil
        } else if e.target > first && len(rest) > 0 {
            p.todo.PushBack(&entry{e.target - first, rest, append(e.share, first)})
        }
    }
    return nil
}
``````

We start by popping the first entry off the todo list and splitting its pool into a first item and the rest. If the rest contains items there may be more solutions to consider, so we put it on the todo list for future inspection.
If the target is equal to the first element of the pool we've found a solution and return. Otherwise, if the target is larger than the first element and the rest contains more elements, there could still be a solution or two in the rest list, so we put an entry with the reduced target on the todo list.
If we did not find a solution we pop the next item off the todo list and start again, or return if the list is empty.

``````itr := Partition(target, treasures)
for ; !itr.AtEnd(); itr.Next() {
    fmt.Println(itr.Value())
}
``````

Construct the iterator and loop through it to find all solutions.
Get the source at GitHub.