Wednesday, December 27, 2023

Ranking and Unranking permutations

I've been a big fan of Skiena's Algorithm Design Manual. I recently found my copy of the first edition (although I also own the third edition), felt guilty for not using it, and started working through the much more compact first edition. I do run into errata, but being able to discover them on my own and see them fixed in the newer editions provides a lot of satisfaction :)

One of the commonly used algorithms in combinatorics is the ranking and unranking of permutations. The algorithm provided in the book is derived from the Combinatorica package. The book provides an example or two, which is quite terse, so I decided to implement things myself. There is a slightly better explanation in Computational Discrete Mathematics, written by the same author.
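As a quick worked example of the recurrence (my own, following the book's scheme of peeling off the first element): the rank of \([2,3,1]\) among the \(3! = 6\) permutations is \((2-1)\cdot 2! + \operatorname{rank}([2,1]) = 2 + (2-1)\cdot 1! + \operatorname{rank}([1]) = 2 + 1 + 0 = 3\), where the remainder \([2,1]\) comes from relabelling \([3,1]\) so that it again uses the values \(1..n-1\).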

Here is my Python implementation of the rank/unrank algorithms:

import functools

@functools.lru_cache(maxsize=None)
def fact(n):
    if n == 0 or n == 1:
        return 1
    return n * fact(n - 1)

assert fact(5) == 120
assert fact(0) == 1

# @param perm - given a permutation as a list, calculate its rank
def rank(perm):
    if not isinstance(perm, list):
        raise TypeError("Invalid input")
    n = len(perm)
    if n == 0 or (n == 1 and perm[0] == 1):
        return 0
    # relabel the remaining elements so that they use the values 1..n-1
    rest = list(map(lambda x: x - 1 if x > perm[0] else x, perm[1:]))
    return (perm[0] - 1) * fact(n - 1) + rank(rest)

# @param k, n - given rank k of an n-element permutation, return it
def unrank(k, n, perm):
    q, r = divmod(k, fact(n - 1))
    perm[0] = q + 1
    if n == 1:
        return [perm[0]]
    newperm = unrank(r, n - 1, perm[1:])
    # undo the relabelling: shift values >= perm[0] back up by one
    rest = list(map(lambda x: x + 1 if x >= perm[0] else x, newperm))
    return [perm[0]] + rest

Here are some simple tests:

for i in range(24):
    perm = unrank(i, 4, [0] * 4)
    print(perm, "->", rank(perm))

Eyeballing the printed permutations and ranks, they look right to me.
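For a stronger check than eyeballing, a small exhaustive round-trip test can be added (my addition, not from the book):

# Round-trip check: unrank followed by rank must be the identity
# for every rank 0 .. n!-1 (here for all n up to 5)
for n in range(1, 6):
    for r in range(fact(n)):
        assert rank(unrank(r, n, [0] * n)) == r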

Next steps:

Rosetta Code's Ranking of Permutations page has a comprehensive set of ranking/unranking implementations, including iterative ones. I would recommend moving to these, as they can be much more performant.
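As a flavour of what that looks like, here is a minimal sketch of the same ranking scheme with the recursion unrolled (my adaptation, not the Rosetta Code version; it still does O(n^2) work because of the relabelling step):

def rank_iter(perm):
    # Peel off the first element each round, accumulating its
    # contribution of (first - 1) * (n - 1)! to the rank.
    perm = list(perm)  # work on a copy
    r = 0
    while perm:
        n = len(perm)
        first = perm.pop(0)
        r += (first - 1) * fact(n - 1)
        perm = [x - 1 if x > first else x for x in perm]
    return r

assert rank_iter([3, 2, 1]) == rank([3, 2, 1])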


Sunday, March 27, 2022

Introduction to Algorithms Fourth Edition (almost out, so is the code)

The book is almost out. There is code and selected solutions available as well. From what I heard, the book is supposed to be in full colour. What do you think of the code? There is an interesting interview as well.

Defer style programming in C

It's a well-known trick, but I loved the ability to defer functions in golang. In C, gcc provides a cleanup attribute. I posted a pull request for libcgroup. It turns out systemd has something similar, but I do like that the attribute keeps the cleanup functions implicit, and the overall design seems cleaner. I do not like dereferencing a double void star pointer, though.
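For readers who have not seen the attribute, here is a minimal, self-contained sketch of the pattern (my illustration, not the libcgroup code); the double void star dereference mentioned above shows up in the cleanup function:

#include <stdio.h>
#include <stdlib.h>

/* gcc/clang call this automatically when the annotated variable
 * goes out of scope; it receives a pointer to the variable. */
static void free_ptr(void *p)
{
    free(*(void **)p); /* the double void star dereference */
}

int main(void)
{
    __attribute__((cleanup(free_ptr))) char *buf = malloc(64);
    if (!buf)
        return 1;
    snprintf(buf, 64, "hello");
    puts(buf);
    return 0; /* buf is freed here, defer-style */
}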

More details can be found in the pull request.

Sunday, November 28, 2021

Test Driven Development

I've been learning the Rust programming language. Built-in support for writing tests is a welcome change compared to most programming languages. I found that I spent a lot of time debugging corner cases (bounds) in my algorithms, so I decided to take a test-driven approach (not a chaos/random approach), targeted at my weakness with boundary conditions. My favourite small/short/complex algorithm is quicksort.

Here is the code I wrote

use std::cmp::PartialOrd;

/// Quicksort
/// Simple sort algorithm that is type aware and is unique in that it's test driven
///
fn partition<T: PartialOrd>(elements: &mut [T], low: usize, high: usize) -> usize {
    // Explicitly set the pivot to the lowest element
    let mut i: usize;
    let mut j: usize;

    if high <= low {
        return low;
    }

    i = low;
    j = high - 1; // Bounds 1, check

    while i < j {
        while elements[i] <= elements[low] && i < (high - 1) { i = i + 1; } // Bounds 2
        while elements[j] >= elements[low] && j > low { j = j - 1; } // Bounds 3
        if i >= j { break; } // Bounds 4
        elements.swap(i, j); // Check 5
    }
    elements.swap(low, j); // Check 6
    j
}

fn __quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T], low: usize, high: usize)
{
    let p = partition(elements, low, high);
    if p > low {
        __quicksort(elements, low, p);
    }
    if p < high {
        __quicksort(elements, p + 1, high);
    }
}

fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T])
{
    let len = elements.len();

    __quicksort(elements, 0, len);
}

#[test]
fn partition_bounds_test_single()
{
    let mut elements = vec![1];
    assert_eq!(0, partition(&mut elements, 0, 1));
}

#[test]
fn partition_bounds_test_mostly_sorted()
{
    let mut elements = vec![10, 2, 4, 5, 6];
    let expected = vec![6, 2, 4, 5, 10];
    let len = elements.len();

    assert_eq!(4, partition(&mut elements, 0, len));
    assert_eq!(expected, elements);

    let mut elements = vec![2, 4, 5, 6, 10];
    let expected = vec![2, 4, 5, 6, 10];
    let len = elements.len();

    assert_eq!(0, partition(&mut elements, 0, len));
    assert_eq!(expected, elements);
}

#[test]
fn partition_bounds_test_2_elements_sorted()
{
    let mut elements = vec![15, 10];
    let expected = vec![10, 15];
    let len = elements.len();

    assert_eq!(1, partition(&mut elements, 0, len));
    assert_eq!(expected, elements);

    let mut elements = vec![10, 15];
    let expected = vec![10, 15];
    let len = elements.len();

    assert_eq!(0, partition(&mut elements, 0, len));
    assert_eq!(expected, elements);

    let mut elements = vec![15, 5, 10];
    let expected = vec![10, 5, 15];
    let len = elements.len();

    assert_eq!(2, partition(&mut elements, 0, len));
    assert_eq!(expected, elements);
}

#[test]
fn quicksort_test()
{
    let mut vec = vec![1, 5, 10, 2, 15];
    let mut vec2 = vec.clone();

    quicksort(&mut vec);
    vec2.sort();
    assert_eq!(vec![1, 2, 5, 10, 15], vec);
    assert_eq!(vec2, vec);

    let mut vec = vec![1.1, 1.15, 5.5, 1.123, 2.0];
    quicksort(&mut vec);
    assert_eq!(vec, vec![1.1, 1.123, 1.15, 2.0, 5.5]);

    let mut vec = vec![1, 5, 10, 15, 2];
    let mut vec2 = vec.clone();

    quicksort(&mut vec);
    vec2.sort();
    assert_eq!(vec2, vec);

    let mut vec = vec![15, 5, 10];
    let mut vec2 = vec.clone();

    quicksort(&mut vec);
    vec2.sort();
    assert_eq!(vec2, vec);

    let mut vec = vec![1, 2, 3, 4, 5];
    let mut vec2 = vec.clone();

    quicksort(&mut vec);
    vec2.sort();
    assert_eq!(vec2, vec);

    let mut vec = vec![5, 4, 3, 2, 1];
    let mut vec2 = vec.clone();

    quicksort(&mut vec);
    vec2.sort();
    assert_eq!(vec2, vec);
}

Notice the identification of the various conditions as numbered bounds checks in the partition algorithm. Looking at the algorithm, the following caught my attention:
  1. The use of the parameter high in the algorithm is unusual: high is always one greater than the last element, so should the contract abstract that away and expect the index of the last element instead? Notice the use of high-1 more than once in the code.
  2. Rust itself has some quirks associated with the borrow checker: after taking a mutable reference to the vector, one cannot use elements.len(), since that internally grabs a shared reference (the compiler tells you).
  3. The while loops use elements[i] <= elements[low] even though elements[low] is constant. Caching it in a temporary before the loop causes a move of the element, which the borrow checker complains about; it might be possible to do some interesting things by constraining the type T to also implement the Copy trait.
  4. Typically quicksort partitions the elements at p and recurses on low to p-1 and p+1 to high, but since the bounds here are [low, high), we need to pass p as opposed to p-1.
To deal with the complexity of the algorithm (points 1 through 4 above), I used a test-driven approach, tried to cover as many cases as I could think of, and extracted coverage (you need the nightly compiler for it).

Here is what the output looks like

    1|      1|use std::cmp::PartialOrd;
    2|       |
    3|       |/// Quicksort
    4|       |/// Simple sort algorithm that is type aware and is unique in that it's test driven
    5|       |/// 
    6|     49|fn partition<T: PartialOrd>(elements: &mut [T], low: usize, high: usize) -> usize {
    7|     49|    // Explictly set the pivot to the lowest element
    8|     49|    let mut i : usize;
    9|     49|    let mut j : usize;
   10|     49|    
   11|     49|    if high <= low {
   12|     15|        return low;
   13|     34|    }
   14|     34|
   15|     34|    i = low; j = high -1; // Bounds 1, check
   16|       |    
   17|     37|    while i < j {
   18|     63|        while elements[i] <= elements[low] && i < (high-1) { i = i + 1; } // Bounds 2
                                                            ^45          ^36
   19|     65|        while elements[j] >= elements[low] && j > low { j = j - 1; } // Bounds 3
                                                            ^50     ^38
   20|       |        
   21|     27|        if i >= j {break; } // Bounds 4
                                 ^24
   22|      3|        
   23|      3|        elements.swap(i, j); // Check 5
   24|       |    }
   25|     34|    elements.swap(low, j); // Check 6
   26|     34|    j
   27|     49|}
   28|       |
   29|     43|fn __quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T], low: usize, high: usize)
   30|     43|{
   31|     43|    let p = partition(elements, low, high);
   32|     43|    // p - 1 should not be negative
   33|     43|    if p > low {
   34|      9|        __quicksort(elements, low, p);
   35|     34|    }
   36|     43|    if p < high {
   37|     28|        __quicksort(elements, p+1, high);
   38|     28|    }
                   ^15
   39|     43|}

   41|      6|fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T])
   42|      6|{
   43|      6|    let len = elements.len();
   44|      6|
   45|      6|    __quicksort(elements, 0, len);
   46|      6|}

Not bad, but the branch coverage needs improvement. This variant of quicksort is prone to several errors, although it might seem faster at first look. Interestingly, the algorithms book by Jeff Erickson very correctly states:

Hoare proposed a more complicated “two-way” partitioning algorithm that has some practical advantages over Lomuto’s algorithm. On the other hand, Hoare’s partitioning algorithm is one of the places off-by-one errors go to die

I do find Lomuto's algorithm easier to parse and implement. What do you think? Is test-driven iterative development the right way forward even for things like data structures? What's the best way to compute the total number of branches and cover them all? Should we search for easier alternatives that perform well and are easier to test?
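One cheap way to widen coverage without enumerating branches by hand is a randomized comparison against the standard library sort. Here is a sketch of such a test (my suggestion, not part of the original run; it uses a small inline LCG so no external crate is needed):

#[test]
fn quicksort_random_matches_std_sort() {
    // Generate pseudo-random data with a simple linear congruential
    // generator, then check quicksort against the standard sort.
    let mut x: u64 = 0x1234_5678;
    let mut v: Vec<u64> = (0..100)
        .map(|_| {
            x = x.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            x >> 33
        })
        .collect();
    let mut expected = v.clone();
    expected.sort();
    quicksort(&mut v);
    assert_eq!(expected, v);
}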


Update: I made some slice-based improvements to the code, and added the Lomuto partition algorithm as well.

/// Quicksort
/// Simple sort algorithm that is type aware and is unique in that it's test driven
///
fn partition<T: PartialOrd>(elements: &mut [T]) -> usize {
    // Explicitly set the pivot to the lowest element
    let mut i: usize;
    let mut j: usize;

    if elements.is_empty() { // Bounds 1
        return 0;
    }

    let high = elements.len();

    i = 0;
    j = high - 1;

    while i < j { // Bounds 2
        while elements[i] <= elements[0] && i < (high - 1) { i += 1; } // Bounds 3
        while elements[j] >= elements[0] && j > 0 { j -= 1; } // Bounds 4

        if i >= j { break; } // Bounds 5

        elements.swap(i, j); // Check 6
    }
    elements.swap(0, j); // Check 7
    j
}

fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T])
{
    let low = 0;
    let high = elements.len();

    if elements.is_empty() {
        return;
    }

    let p = partition(&mut elements[low..high]);
    // p - 1 should not be negative, so recurse on [low, p) instead
    quicksort(&mut elements[low..p]);
    quicksort(&mut elements[p + 1..high]);
}

///
/// Use the algorithm that has fewer boundary conditions to worry about
fn lomuto_partition<T: std::cmp::PartialOrd>(elements: &mut [T]) -> usize
{
    let mut i: usize = 0;
    let size: usize = elements.len();
    let mut j: usize = 0;

    if elements.is_empty() { // bounds 1
        return 0;
    }

    while i < size - 1 { // bounds 2
        i += 1;
        if elements[i] <= elements[0] { // bounds 3
            j += 1;
            elements.swap(i, j); // bounds 4
        }
    }

    elements.swap(0, j); // bounds 5
    j
}

fn quicksort<T: PartialOrd + std::fmt::Debug>(elements: &mut [T])
{
    let low = 0;
    let high = elements.len();

    if elements.is_empty() {
        return;
    }

    let p = lomuto_partition(&mut elements[low..high]);
    // p - 1 should not be negative, so recurse on [low, p) instead
    quicksort(&mut elements[low..p]);
    quicksort(&mut elements[p + 1..high]);
}


Sunday, June 07, 2020

Volume of a unit hypersphere

Inspired by Foundations of Data Science, the area of a unit circle is

\( \int_{x=-1}^{x=1}\int_{y=-\sqrt{1-x^2}}^{y=\sqrt{1-x^2}}dydx \)

Extending this to a sphere gives

\( \int_{x=-1}^{x=1}\int_{y=-\sqrt{1-x^2}}^{y=\sqrt{1-x^2}}\int_{z=-\sqrt{1-x^2-y^2}}^{z=\sqrt{1-x^2-y^2}}dzdydx \)

Implemented in Maxima, this is

(%i17) integrate(integrate(integrate(1, z, -sqrt(1-x^2-y^2), sqrt(1-x^2-y^2)),
        y, -sqrt(1-x^2), sqrt(1-x^2)), x, -1, 1);

Maxima goes on to ask:
"Is (x-1)*(x+1) positive or negative?"
Inside the unit circle this expression is definitely negative, and voila, we get the answer $4\pi/3$. Higher dimensions lead to interesting results.

For 4 dimensions, we use

integrate(integrate(integrate(integrate(1, x4, -sqrt(1-x1^2-x2^2-x3^2), 
          sqrt(1-x1^2-x2^2-x3^2)), x3, -sqrt(1-x1^2-x2^2), 
          sqrt(1-x1^2-x2^2)), x2, -sqrt(1-x1^2), sqrt(1-x1^2)), 
          x1, -1, 1);

and get the volume $\pi^2/2$, which matches what the book predicts.
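As a quick cross-check (my addition, not from the book), the closed form $V_n = \pi^{n/2}/\Gamma(n/2+1)$ reproduces both results:

from math import pi, gamma

# Volume of the unit n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1)
def unit_ball_volume(n):
    return pi ** (n / 2) / gamma(n / 2 + 1)

assert abs(unit_ball_volume(3) - 4 * pi / 3) < 1e-12
assert abs(unit_ball_volume(4) - pi ** 2 / 2) < 1e-12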

Saturday, May 30, 2020

Passive reaction: Computer Science has changed/changing

In their book Foundations of Data Science, the authors state in the preface that:

"Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970’s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful. Today, a fundamental change is taking place and the focus is more on a wealth of applications. There are many reasons for this change. The merging of computing and communications has played an important role. The enhanced ability to observe, collect, and store data in the natural sciences, in commerce, and in other fields calls for a change in our understanding of data and how to handle it in the modern setting. The emergence of the web and social networks as central aspects of daily life presents both opportunities and challenges for theory.

While traditional areas of computer science remain highly important, increasingly researchers of the future will be involved with using computers to understand and extract usable information from massive data arising in applications, not just how to make computers useful on specific well-defined problems. With this in mind we have written this book to cover the theory we expect to be useful in the next 40 years, just as an understanding of automata theory, algorithms, and related topics gave students an advantage in the last 40 years. One of the major changes is an increase in emphasis on probability, statistics, and numerical methods.
...

Modern data in diverse fields such as information processing, search, and machine learning is often advantageously represented as vectors with a large number of components"

It sounds like I am a student of the past, having focused on automata theory and algorithms (the traditional topics). I have yet to catch up with the emerging opportunities the authors mention. Time to learn new things; it's going to be tricky to do it by myself, but I am glad to see someone thinking about the next 40 years.


Sunday, November 03, 2019

Frustration with the state of math in schools

I remember growing up and enjoying math in class. The learning method used in school was not very exploratory, but my dad (who was a genius in my opinion) made it a lot of fun. My dad was also very protective about what I learnt: I used to look at my elder brother's book and learn calculus in grade 8, but my dad suggested that the wrong background would lead me down a path of losing interest. I guess he was right, but by 8th grade I was able to calculate square and cube roots using the Newton-Raphson method.

NOTE: I had learnt the formulae for square and cube roots without necessarily mastering how we arrived at the derivative; it was an algorithm/formula for me to follow.
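For the curious, this is my reconstruction of the recipe I followed back then (a sketch, assuming the standard Newton-Raphson update for square roots):

# Newton-Raphson for sqrt(a): x_{k+1} = (x_k + a / x_k) / 2
def newton_sqrt(a, iterations=20):
    x = max(a, 1.0)  # any positive starting guess works
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

assert abs(newton_sqrt(2) - 2 ** 0.5) < 1e-9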

My frustration is with the math teaching of today, partly with the teachers at the place where I put my son for some extra classes, where he was asked to remember formulae without the underlying concepts. I see why those topics are important, but it seems like:


  • Teachers who want to clarify concepts and teach them nicely are too slow in covering the required topics
  • Teachers who keep a good pace with the topics don't spend enough time on concepts

Short of explaining everything to my son and writing a book or blog posts, it's going to be hard to find a balance. I have some hope in the form of Khan Academy, but my son is not too interested in it at the moment.

If someone has pointers to great graphics/programs that can help, please do share them. Otherwise, I'll try and cover ratio and proportions in my first post.

Sunday, July 09, 2017

Dynamic programming for the binomial coefficient

More fun things, this time with some visualisation of what happens when memoisation is used and what happens when it isn't. I don't think these are the most efficient algorithms, though.


#
# dynamic programming for binomial coefficient
#
def dynamic_bc(n, k, indent=0):
    #
    # return (n k) using dynamic programming
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print("  " * indent + "%d:%d" % (n, k))
    if k == 0:
        arr[n][k] = 1
        return 1
    if k == 1:
        arr[n][k] = n
        return n
    if n == k:
        arr[n][k] = 1
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    if arr[n-1][k-1] == 0:
        arr[n-1][k-1] = dynamic_bc(n-1, k-1, indent+1)
    if arr[n-1][k] == 0:
        arr[n-1][k] = dynamic_bc(n-1, k, indent+1)
    return arr[n-1][k] + arr[n-1][k-1]

def bc(n, k, indent=0):
    #
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print("  " * indent + "%d:%d" % (n, k))
    if k == 0:
        return 1
    if k == 1:
        return n
    if n == k:
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    return bc(n-1, k, indent+1) + bc(n-1, k-1, indent+1)

number = 6
select = 3
arr = [[0 for x in range(0, number)] for x in range(0, number)]

print(bc(number, select))
print(dynamic_bc(number, select))
The output of the first call, with full recursion:

6:3
  5:3
    4:3
      3:3
      3:2
        2:2
        2:1
    4:2
      3:2
        2:2
        2:1
      3:1
  5:2
    4:2
      3:2
        2:2
        2:1
      3:1
    4:1
20

The output of the second call, with dynamic programming (note that the repeated subtrees are gone):

6:3
  5:2
    4:1
    4:2
      3:1
      3:2
        2:1
        2:2
  5:3
    4:3
      3:3
20
Python supports memoization via functools (wraps, lru_cache, etc.). Here I am using a hand-written wrapper (the decorator pattern). Using the following pattern makes the program so much more transparent:
#
# dynamic programming for binomial coefficient
#
from functools import wraps

def dynamic_bc2(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        # note: indent is part of the cache key here, but the call
        # pattern below keeps indent consistent for each (n, k) pair
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@dynamic_bc2
def bc2(n, k, indent=0):
    #
    # return (n k) using dynamic programming
    # tail cases are k == 0 or k == 1
    # (n 0) = 1 and (n 1) = n
    print("  " * indent + "%d:%d" % (n, k))
    if k == 0:
        return 1
    if k == 1:
        return n
    if n == k:
        return 1
    # (n k) = n!/((n-k)!k!), let's split it further
    # n!/((n-k)!k!) = n(n-1)!/((n-k)!k(k-1)!)
    # or (n k) = (n-1 k) + (n-1 k-1)
    return bc2(n-1, k-1, indent+1) + bc2(n-1, k, indent+1)

number = 6
select = 3

print(bc2(number, select))
Comparing the outputs shows the effect of memoization. The output with the decorator is:
6:3
  5:2
    4:1
    4:2
      3:1
      3:2
        2:1
        2:2
  5:3
    4:3
      3:3
20
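These days the hand-written decorator can be replaced with the standard library's functools.lru_cache; a minimal sketch (my addition, dropping the trace printing):

from functools import lru_cache

@lru_cache(maxsize=None)
def bc3(n, k):
    # (n 0) = (n n) = 1, otherwise Pascal's rule
    if k == 0 or n == k:
        return 1
    return bc3(n - 1, k - 1) + bc3(n - 1, k)

assert bc3(6, 3) == 20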
