## Sunday, July 09, 2017

### Dynamic programming for the binomial coefficient

More fun things, this time with some visualisation of what happens when memoisation is used and what happens when it isn't. I don't think these are the most efficient algorithms, but they make the difference easy to see.

```python
#
# dynamic programming for binomial coefficient
#
def dynamic_bc(n, k, indent=0):
    #
    # return (n k) using dynamic programming
    # base cases are k == 0, k == 1 and n == k:
    # (n 0) = 1, (n 1) = n and (n n) = 1
    for i in range(0, indent):
        print " ",
    print "%d:%d" % (n, k)
    if (k == 0):
        arr[n][k] = 1
        return 1
    if (k == 1):
        arr[n][k] = n
        return n
    if (n == k):
        arr[n][k] = 1
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further:
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    if (arr[n-1][k-1] == 0):    # 0 means not yet computed
        arr[n-1][k-1] = dynamic_bc(n-1, k-1, indent+1)
    if (arr[n-1][k] == 0):
        arr[n-1][k] = dynamic_bc(n-1, k, indent+1)
    return arr[n-1][k] + arr[n-1][k-1]

def bc(n, k, indent=0):
    #
    # base cases are k == 0, k == 1 and n == k:
    # (n 0) = 1, (n 1) = n and (n n) = 1
    for i in range(0, indent):
        print " ",
    print "%d:%d" % (n, k)
    if (k == 0):
        return 1
    if (k == 1):
        return n
    if (n == k):
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further:
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    return bc(n-1, k, indent+1) + bc(n-1, k-1, indent+1)

number = 6
select = 3
# memoisation table for dynamic_bc; 0 marks an entry as not yet computed
arr = [[0 for x in range(0, number)] for x in range(0, number)]

print bc(number, select)
print dynamic_bc(number, select)
```
The output of the first call, with full recursion (indentation shows recursion depth):

```
6:3
  5:3
    4:3
      3:3
      3:2
        2:2
        2:1
    4:2
      3:2
        2:2
        2:1
      3:1
  5:2
    4:2
      3:2
        2:2
        2:1
      3:1
    4:1
20
```


The output for the second call, with memoisation:

```
6:3
  5:2
    4:1
    4:2
      3:1
      3:2
        2:1
        2:2
  5:3
    4:3
      3:3
20
```

Python supports memoisation via functools (wraps, lru_cache, and so on). Here I roll my own cache as a decorator, using functools.wraps on the wrapper. This pattern keeps the recursive function itself completely transparent:
```python
#
# dynamic programming for binomial coefficient
#
from functools import wraps

def dynamic_bc2(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@dynamic_bc2
def bc2(n, k, indent=0):
    #
    # return (n k) using dynamic programming
    # base cases are k == 0, k == 1 and n == k:
    # (n 0) = 1, (n 1) = n and (n n) = 1
    # note: indent is part of the cache key, but it is always
    # the same for a given (n, k) here, so cache hits still occur
    for i in range(0, indent):
        print " ",
    print "%d:%d" % (n, k)
    if (k == 0):
        return 1
    if (k == 1):
        return n
    if (n == k):
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further:
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    return bc2(n-1, k-1, indent+1) + bc2(n-1, k, indent+1)

number = 6
select = 3

print bc2(number, select)
```

Comparing the outputs shows the effect of the memoising decorator. The output with the cache in place is:
```
6:3
  5:2
    4:1
    4:2
      3:1
      3:2
        2:1
        2:2
  5:3
    4:3
      3:3
20
```
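For comparison, the same memoisation can come straight from functools.lru_cache, with no hand-rolled decorator. A minimal sketch (Python 3 syntax, without the trace printing; `bc3` is my name for this variant):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def bc3(n, k):
    # base cases: C(n, 0) = C(n, n) = 1 and C(n, 1) = n
    if k == 0 or n == k:
        return 1
    if k == 1:
        return n
    # Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)
    return bc3(n - 1, k - 1) + bc3(n - 1, k)

print(bc3(6, 3))  # -> 20
```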



## Thursday, July 06, 2017

### Iterative combinations algorithm

The algorithm is quite straightforward; see the Python code below.

```python
#
# Do iterative combinations generation
# From Knuth's observation, we know for (6 3) that
# [0, 1, 2]
# [0, 1, 3]
# ..
# [3, 4, 5]
# Basically we can see a set of for loops; if we call the columns c1, c2 and c3 then
#
# for c3 = 2 to n-1
#  for c2 = 1 to c3-1
#   for c1 = 0 to c2-1
#
# The loops give us what we need
# My implementation is based on an index (i) and max (n)
# which are used to determine the next value to generate
#
def iterative_combination(n, k):
    a = [i for i in range(0, n)]
    index = k - 1
    done = False
    while (done == False):
        print a[0:k]
        while (a[index] == n - (k - index)):    # index is at its boundary
            index = index - 1
            if (index < 0):
                done = True
                break
        if (done):    # all columns exhausted
            break
        a[index] = a[index] + 1
        while (index < k - 1):    # initialize the neighbours to the right
            index = index + 1
            if (a[index] == (n - (k - index))):
                a[index] = a[index - 1] + 1
```
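As a quick sanity check, the same successor logic can be written as a generator and compared against itertools.combinations. This is a sketch in Python 3 (`iter_comb` is my name for the variant), not the routine above verbatim:

```python
from itertools import combinations

def iter_comb(n, k):
    # same successor logic as above, but yielding instead of printing
    a = list(range(n))
    index = k - 1
    while True:
        yield tuple(a[:k])
        # walk left past indices already at their maximum value
        while a[index] == n - (k - index):
            index -= 1
            if index < 0:
                return
        a[index] += 1
        # reset everything to the right to the smallest legal values
        while index < k - 1:
            index += 1
            a[index] = a[index - 1] + 1

print(list(iter_comb(6, 3)) == list(combinations(range(6), 3)))  # -> True
```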



### Random CS picture

Here is a random picture from a computer science topic

The picture is a counterexample showing why greedy selection does not work optimally for the set cover problem. If the picture seems not so well done, it's because my Asymptote skills are lacking :)
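In lieu of the picture, here is a small concrete instance of the same phenomenon (the universe and sets are my own illustrative choice, not necessarily the ones drawn). Greedy repeatedly takes the set covering the most uncovered elements, and here its first grab forces a three-set cover where two sets suffice:

```python
def greedy_set_cover(universe, sets):
    # repeatedly pick the set covering the most uncovered elements
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

U = {1, 2, 3, 4, 5, 6}
A, B, C = {1, 2, 3}, {4, 5, 6}, {1, 2, 4, 5}
print(len(greedy_set_cover(U, [A, B, C])))  # -> 3: greedy grabs C first
# but {A, B} already covers U, so the optimum is 2
```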

You can probably guess what I'm reading by connecting set cover with combinatorial generation of combinations.

## Wednesday, July 05, 2017

### Combinations revisited

I've been trying to relearn some of the combinatorics I used to know. I redid a nice recursive algorithm from my early university days; in Python it looks simple and elegant. The implementation is below.

```python
#
# I'm inspired by two algorithms I saw, both are recursive.
#
# Example:
# ((1, 2, 3, 4), 2) - means select 2 out of 4
# should produce (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)
#
# The idea from a recursive perspective is this:
# we pick an index - say 1, then we pick the next (n-1) elements
# to go with it
#

def rec_comb(a, i, j, k, n):
    #
    # array a built up to index i,
    # selecting k at a time from values
    # j..n-1
    #
    if (i == k):
        print a[0:k]
        return
    for l in range(j, n):
        a[i] = l
        rec_comb(a, i+1, l+1, k, n)
    return

def combination(n, k):
    #
    # n (numbers from 0..n-1), chosen k at a time
    #
    a = [0 for i in range(0, n)]
    rec_comb(a, 0, 0, k, n)

combination(6, 3)
```
The sample output for ${6 \choose 3}$ is:

```
[0, 1, 2]
[0, 1, 3]
[0, 1, 4]
[0, 1, 5]
[0, 2, 3]
[0, 2, 4]
[0, 2, 5]
[0, 3, 4]
[0, 3, 5]
[0, 4, 5]
[1, 2, 3]
[1, 2, 4]
[1, 2, 5]
[1, 3, 4]
[1, 3, 5]
[1, 4, 5]
[2, 3, 4]
[2, 3, 5]
[2, 4, 5]
[3, 4, 5]
```

The output lends itself to a simple and elegant iterative algorithm, which I'll follow up on.

The interesting follow-up to combinatorial generation algorithms is ranking and unranking. At this point, I think we need an O(n) way to identify whether two combinations are the same when given in any order of elements. For example, to compare $\{1, 2, 3\}, \{1, 3, 2\}, \{2, 1, 3\}, \{3, 1, 2\}, \{2, 3, 1\}, \{3, 2, 1\}$ we need to identify them and associate the same rank with each. I think for a subset of size $k$ the cost of ranking is $\Theta(k)$, but I need to see if there is a better way to do it.
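One standard scheme is the combinadic (colex) rank; note it orders subsets differently from the lexicographic listing above, and sorting first costs O(k log k), so the Θ(k) bound holds once the subset is sorted. A sketch (`colex_rank` is my name; `math.comb` needs Python 3.8+); any permutation of the same elements maps to the same rank:

```python
from math import comb

def colex_rank(subset):
    # combinadic rank of a k-subset of {0..n-1}:
    # rank = C(c1, 1) + C(c2, 2) + ... + C(ck, k) for sorted c1 < c2 < ... < ck
    c = sorted(subset)
    return sum(comb(c[i], i + 1) for i in range(len(c)))

print(colex_rank([0, 1, 2]))  # -> 0, the first 3-subset in colex order
print(colex_rank([3, 4, 5]))  # -> 19, the last of the C(6,3) = 20 subsets
print(colex_rank([2, 1, 3]) == colex_rank([3, 2, 1]))  # -> True
```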