Friday, October 11, 2013

Your code is only as good as the tests you run on it

Computer programs have an interesting, well-known quality: even incorrect programs can produce the desired output for a subset of inputs. I recently re-learnt this while working on my graphical convex hull implementation.

I started with a test strategy of random test generation. Use a good random number generator to generate (see the sketch after the list):

  1. Number of random points as input
  2. The random points themselves

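Here is a minimal sketch of that strategy in Python. The random_points name, the seed parameter, and the coordinate range are my own illustrative choices, not from the original code:

    import random

    def random_points(seed=None, max_points=1000, coord_range=(-1000.0, 1000.0)):
        """Generate a random number of random 2-D points."""
        rng = random.Random(seed)       # seeding makes a failing input reproducible
        n = rng.randint(3, max_points)  # 1. the number of points, itself random
        lo, hi = coord_range
        # 2. the random points themselves
        return [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n)]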

This strategy helped me test the program for crashes and instability, and it exposed a potential functional issue when a large number of points were generated. See the diagram below.
Can you spot the problem? I now needed a specific test case to isolate the problem; random inputs at smaller sizes did not provide the necessary clue. Luckily for me, I found a good set of inputs at http://stackoverflow.com/questions/482278/test-case-data-for-convex-hull

I ran three test cases and found that the first two ran fine.


The third one was a hit: it broke my algorithm.
Beautiful visualization, but the code was clearly broken. After studying the diagram and the code, I realized that I was not handling collinear points correctly. Finally, I got it right:
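For reference, here is a hedged sketch of the kind of fix involved, written as Andrew's monotone chain (my actual implementation may differ). The key detail is the cross-product orientation test: treating collinear points (cross == 0) the same as clockwise turns pops them off the chain, so they never appear as spurious hull vertices:

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): > 0 left turn, < 0 right turn, 0 collinear
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull(points):
        """Andrew's monotone chain; returns hull vertices counter-clockwise."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def half_hull(seq):
            chain = []
            for p in seq:
                # <= 0 also pops collinear points; a bare < 0 would wrongly keep them
                while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                    chain.pop()
                chain.append(p)
            return chain
        lower, upper = half_hull(pts), half_hull(reversed(pts))
        return lower[:-1] + upper[:-1]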


Now I am back to the random testing strategy and have gotten satisfactory results so far.

NOTE: My test feedback strategy is based on visualization; one could even automate the tests, for example:
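Reusing the random_points, cross, and convex_hull sketches above, an automated property check could assert that every input point lies inside or on the returned hull and that every hull vertex is an input point. This is a sketch of the idea, not the post's actual harness:

    def point_in_hull(p, hull):
        # inside or on a counter-clockwise hull: never a right turn along any edge
        n = len(hull)
        return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))

    def check_hull(points, hull):
        assert all(point_in_hull(p, hull) for p in points), "point outside hull"
        assert set(hull) <= set(points), "hull vertex is not an input point"

    for seed in range(100):   # many random trials instead of eyeballing plots
        pts = random_points(seed)
        check_hull(pts, convex_hull(pts))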
