I've been a big fan of Skiena's Algorithm Design Manual. I recently found my copy of the first edition (although I own the third edition), felt guilty for not using it, and started working through the much more compact first edition. I do run into errata, but being able to discover them on my own and see them fixed in the newer editions provides a lot of satisfaction :)

One of the commonly used tools in combinatorics is ranking and unranking of permutations. The algorithm provided in the book is derived from the Combinatorica package. The book provides an example or two, but the treatment is quite terse, so I decided to implement things myself. There is a slightly better explanation in Computational Discrete Mathematics, written by the same author.

Here is my Python implementation of the rank/unrank algorithms:

```python
import functools

@functools.lru_cache(maxsize=None)
def fact(n):
    if n == 1 or n == 0:
        return 1
    else:
        return n * fact(n - 1)

assert fact(5) == 120
assert fact(0) == 1

# @param perm - given a permutation as an array, calculate its rank
def rank(perm):
    if not isinstance(perm, list):
        raise TypeError("Invalid input")  # raising a bare string is not valid Python 3
    else:
        n = len(perm)
        if n == 0 or (n == 1 and perm[0] == 1):
            return 0
        rest = list(map(lambda x: x - 1 if x > perm[0] else x, perm[1:]))
        return (perm[0] - 1) * fact(n - 1) + rank(rest)

# @param k, n - given rank k of an n element permutation, reconstruct it
def unrank(k, n, perm):
    q, r = divmod(k, fact(n - 1))
    perm[0] = q + 1
    if n == 1:
        return [perm[0]]
    rest = perm[1:]
    newperm = unrank(r, n - 1, rest)
    rest = list(map(lambda x: x + 1 if x >= perm[0] else x, newperm))
    return [perm[0]] + rest
```

Here are some simple tests:

```python
for i in range(24):
    perm = [0 for _ in range(4)]
    perm = unrank(i, 4, perm)
    print(perm, "->", rank(perm))
```

Rosetta Code's Ranking of Permutations page has a comprehensive set of ranking/unranking implementations, including iterative ones. I would recommend moving to those, as they can be much more performant.

The book is almost out. There is code and selected solutions as well, and the book is supposed to be in full colour from what I heard. What do you think of the code? There is an interesting interview as well.

It's a well-known trick, but I loved the ability to defer functions in Go. In C, gcc provides an attribute for cleanup. I posted a pull request for libcgroup. It turns out systemd has something similar, but I do like that the attributes have implicit functions, and the overall design seems cleaner. I do not like dereferencing a double void-star pointer, though.

I've been learning the Rust programming language. Built-in support for writing tests is a welcome change compared to most programming languages. I found that I spent a lot of time debugging corner cases (bounds) in my algorithms, so I decided on a test-driven approach, not a chaos/random approach but one targeted at my weakness with boundary conditions. My favourite small/short/complex algorithm is quicksort.

Here is the code I wrote

```rust
use std::cmp::PartialOrd;

/// Quicksort
/// Simple sort algorithm that is type aware and is unique in that it's test driven
///
fn partition<T: PartialOrd>(elements: &mut [T], low: usize, high: usize) -> usize {
    // Explicitly set the pivot to the lowest element
    let mut i: usize;
    let mut j: usize;

    if high <= low {
        return low;
    }

    i = low;
    j = high - 1; // Bounds 1, check

    while i < j {
        while elements[i] <= elements[low] && i < (high - 1) { i = i + 1; } // Bounds 2
        while elements[j] >= elements[low] && j > low { j = j - 1; } // Bounds 3

        if i >= j { break; } // Bounds 4

        elements.swap(i, j); // Check 5
    }
    elements.swap(low, j); // Check 6
    j
}

fn __quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T], low: usize, high: usize) {
    let p = partition(elements, low, high);
    // p - 1 should not be negative
    if p > low {
        __quicksort(elements, low, p);
    }
    if p < high {
        __quicksort(elements, p + 1, high);
    }
}
```
Notice the identification of the various conditions as numbered bounds in the partition algorithm. Looking at the algorithm, the following caught my attention:

1. The use of the parameter `high` is unique: `high` is always one greater than the last element. Should the contract abstract that away and expect the index of the last element instead? Notice the use of `high - 1` more than once in the code.

2. Rust itself has some quirks associated with the borrow checker: after taking a mutable reference to the vector, one cannot call `elements.len()`, since that internally grabs a shared reference (the compiler tells you so).

3. The while loops compare against `elements[low]` even though the pivot is constant: caching it in a temporary before the loop would move the element out of the slice, and the borrow checker complains. It might be possible to do some interesting things by constraining the type `T` to implement the `Copy` trait.

4. Typically quicksort partitions the elements at `p` and recurses on `low` to `p - 1` and `p + 1` to `high`, but since the bounds are `[low, high)`, we need to pass `p` as opposed to `p - 1`.

To deal with the complexity of the algorithm (points 1 through 4 above), I decided to use a test-driven approach, tried to cover as many cases as I could think of, and extracted coverage (you need the nightly compiler for it).

Here is what the output looks like

```
1| 1|use std::cmp::PartialOrd;
2| |
3| |/// Quicksort
4| |/// Simple sort algorithm that is type aware and is unique in that it's test driven
5| |///
6| 49|fn partition<T: PartialOrd>(elements: &mut [T], low: usize, high: usize) -> usize {
7| 49| // Explictly set the pivot to the lowest element
8| 49| let mut i : usize;
9| 49| let mut j : usize;
10| 49|
11| 49| if high <= low {
12| 15| return low;
13| 34| }
14| 34|
15| 34| i = low; j = high -1; // Bounds 1, check
16| |
17| 37| while i < j {
18| 63| while elements[i] <= elements[low] && i < (high-1) { i = i + 1; } // Bounds 2
^45 ^36
19| 65| while elements[j] >= elements[low] && j > low { j = j - 1; } // Bounds 3
^50 ^38
20| |
21| 27| if i >= j {break; } // Bounds 4
^24
22| 3|
23| 3| elements.swap(i, j); // Check 5
24| | }
25| 34| elements.swap(low, j); // Check 6
26| 34| j
27| 49|}
28| |
29| 43|fn __quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T], low: usize, high: usize)
30| 43|{
31| 43| let p = partition(elements, low, high);
32| 43| // p - 1 should not be negative
33| 43| if p > low {
34| 9| __quicksort(elements, low, p);
35| 34| }
36| 43| if p < high {
37| 28| __quicksort(elements, p+1, high);
38| 28| }
^15
39| 43|}
```

Not bad, but the branch coverage needs improvement. This variant of quicksort is prone to several errors, even though it might seem faster at first glance. Interestingly, Jeff Erickson's algorithms book very correctly states:

Hoare proposed a more complicated "two-way" partitioning algorithm that has some practical advantages over Lomuto's algorithm. On the other hand, Hoare's partitioning algorithm is one of the places off-by-one errors go to die.

I do find Lomuto's algorithm easier to parse and implement. What do you think? Is test-driven iterative development the right way forward, even for things like data structures? What's the best way to compute the total number of branches and cover them? Should we search for easier alternatives that perform well and are easier to test?

Update: I made some slice-based improvements to the code, and added a Lomuto variant as well.

```rust
/// Quicksort
/// Simple sort algorithm that is type aware and is unique in that it's test driven
///
fn partition<T: PartialOrd>(elements: &mut [T]) -> usize {
    // Explicitly set the pivot to the lowest element
    let mut i: usize;
    let mut j: usize;

    if elements.is_empty() { // Bounds 1
        return 0;
    }

    let high = elements.len();

    i = 0;
    j = high - 1;

    while i < j { // Bounds 2
        while elements[i] <= elements[0] && i < (high - 1) { i += 1; } // Bounds 3
        while elements[j] >= elements[0] && j > 0 { j -= 1; } // Bounds 4

        if i >= j { break; } // Bounds 5

        elements.swap(i, j); // Check 6
    }
    elements.swap(0, j); // Check 7
    j
}

fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T]) {
    let low = 0;
    let high = elements.len();

    if elements.is_empty() {
        return;
    }

    let p = partition(&mut elements[low..high]);
    // p - 1 should not be negative
    quicksort(&mut elements[low..p]);
    quicksort(&mut elements[p + 1..high]);
}
```

```rust
///
/// Use the algorithm that has fewer boundary conditions to worry about
fn lomuto_partition<T: std::cmp::PartialOrd>(elements: &mut [T]) -> usize {
    let mut i: usize = 0;
    let size: usize = elements.len();
    let mut j: usize = 0;

    if elements.is_empty() { // bounds 1
        return 0;
    }

    while i < size - 1 { // bounds 2
        i += 1;
        if elements[i] <= elements[0] { // bounds 3
            j += 1;
            elements.swap(i, j); // bounds 4
        }
    }

    elements.swap(0, j); // bounds 5
    j
}

fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T]) {
    let low = 0;
    let high = elements.len();

    if elements.is_empty() {
        return;
    }

    let p = lomuto_partition(&mut elements[low..high]);
    // p - 1 should not be negative
    quicksort(&mut elements[low..p]);
    quicksort(&mut elements[p + 1..high]);
}
```

"Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970’s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful. Today, a fundamental change is taking place and the focus is more on a wealth of applications. There are many reasons for this change. The merging of computing and communications has played an important role. The enhanced ability to observe, collect, and store data in the natural sciences, in commerce, and in other fields calls for a change in our understanding of data and how to handle it in the modern setting. The emergence of the web and social networks as central aspects of daily life presents both opportunities and challenges for theory.

While traditional areas of computer science remain highly important, increasingly researchers of the future will be involved with using computers to understand and extract usable information from massive data arising in applications, not just how to make computers useful on specific well-defined problems. With this in mind we have written this book to cover the theory we expect to be useful in the next 40 years, just as an understanding of automata theory, algorithms, and related topics gave students an advantage in the last 40 years. One of the major changes is an increase in emphasis on probability, statistics, and numerical methods.
...

Modern data in diverse fields such as information processing, search, and machine learning is often advantageously represented as vectors with a large number of components"

It sounds like I am a student of the past who focused on the traditional topics of automata theory and algorithms. I have yet to catch up with the emerging topics and opportunities mentioned above. Time to learn new things; it's going to be tricky to do by myself, but I am glad to see someone thinking of the next 40 years.

I remember growing up and enjoying math in class. The learning method used in school was not very exploratory, but my dad (who was a genius, in my opinion) made it a lot of fun. My dad was also very protective about what I learnt: I used to look at my elder brother's book and learn calculus in grade 8, but my dad suggested that the wrong background would lead me down a path of losing interest. I guess he was right, but by 8th grade I was able to calculate square and cube roots using the Newton-Raphson method.

NOTE: I had learnt the formulae for square and cube roots without necessarily mastering how we arrived at the derivative; it was an algorithm/formula for me to follow.

My frustration is with the math teaching of today, partly with the teachers at the extra classes I enrolled my son in, where he was asked to remember formulae without the underlying concepts being covered in detail. I see why those topics are important, but it seems like:

Teachers who want to clarify concepts and teach them well are too slow in covering the required topics

Teachers who keep a good pace with the topics don't spend enough time on concepts

Short of explaining everything to my son and writing a book or blog posts, it's going to be hard to find a balance. I have some hope in the form of Khan Academy, but my son is not too interested in it at the moment.

If someone has pointers to great graphics/programs that can help, please do share them. Otherwise, I'll try to cover ratio and proportion in my first post.

More fun things, this time with some visualisation of what happens when memoisation is used and when it is not. I don't think these are the most efficient versions of these algorithms.

```python
# dynamic programming for binomial coefficient
def dynamic_bc(n, k, indent=0):
    # return (n k) using dynamic programming
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print(" " * indent + "%d:%d" % (n, k))
    if k == 0:
        arr[n][k] = 1
        return 1
    if k == 1:
        arr[n][k] = n
        return n
    if n == k:
        arr[n][k] = 1
        return 1
    # (n k) = n!/((n-k)! k!), let's split it further
    # n!/((n-k)! k!) = n(n-1)!/((n-k)! k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    if arr[n-1][k-1] == 0:
        arr[n-1][k-1] = dynamic_bc(n-1, k-1, indent+1)
    if arr[n-1][k] == 0:
        arr[n-1][k] = dynamic_bc(n-1, k, indent+1)
    return arr[n-1][k] + arr[n-1][k-1]

def bc(n, k, indent=0):
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print(" " * indent + "%d:%d" % (n, k))
    if k == 0:
        return 1
    if k == 1:
        return n
    if n == k:
        return 1
    # (n k) = (n-1 k) + (n-1 k-1)
    return bc(n-1, k, indent+1) + bc(n-1, k-1, indent+1)

number = 6
select = 3
arr = [[0 for x in range(0, number)] for x in range(0, number)]
print(bc(number, select))
print(dynamic_bc(number, select))
```

Here is the trace for the memoized call, dynamic_bc (note that each subproblem is printed only once, whereas the fully recursive bc prints repeated subproblems):

```
6:3
 5:2
  4:1
  4:2
   3:1
   3:2
    2:1
    2:2
 5:3
  4:3
   3:3
20
```

Python supports memoization via functools (wraps, lru_cache, etc.). Here I am writing my own wrapper (the decorator pattern). Using the following pattern makes the programming so transparent:

```python
# dynamic programming for binomial coefficient
from functools import wraps

def dynamic_bc2(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        # note: indent is part of the cache key, so the same (n, k) reached
        # at a different depth is recomputed rather than served from cache
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@dynamic_bc2
def bc2(n, k, indent=0):
    # return (n k) using dynamic programming
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print(" " * indent + "%d:%d" % (n, k))
    if k == 0:
        return 1
    if k == 1:
        return n
    if n == k:
        return 1
    # (n k) = n!/((n-k)! k!); splitting gives (n k) = (n-1 k) + (n-1 k-1)
    return bc2(n-1, k-1, indent+1) + bc2(n-1, k, indent+1)

number = 6
select = 3
print(bc2(number, select))
```

Comparing the outputs shows the effect of functools. The output with functools is: