Anti-Chess Progress Report

Steven G. Johnson, Tim Macinta, & Patrick Pelletier

TA: Leo Chang


Overview of Progress

At this point, we have a working version of the program, including timing and a prototype computer player with artificial intelligence. The main program and chess_board data structures were implemented on schedule, although a few bugs were found and corrected afterwards. No known bugs remain in this code, though more validation may still be done.

The computer_player is not in its final form, but it is functional as set forth by the schedule. The main additions still to be made before the May 1st deadline are a distinction between skill levels and a strategy for dealing with the time limit. The program for determining an optimal static_evaluator was finished a day late because of some unanticipated bugs in the static_evaluator and chess_board clusters. However, a program that searches for an optimal static_evaluator using a genetic algorithm is now complete and is currently running; at last check, it had gone through 1,363 generations. The results from this run will be fed into future runs using enhanced computer_players. The machine_player cluster is already written, but remains to be debugged by the May 6th deadline.
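
For readers unfamiliar with the approach, the sketch below illustrates the general genetic-algorithm recipe the search follows: a population of evaluator weight vectors is repeatedly ranked by fitness, the best individuals survive, and the rest are replaced by mutated crossovers of survivors. It is written in Python purely for illustration (not the project's implementation language), and every name, size, and rate in it is an assumption rather than a detail of our actual tuner.

    # Illustrative sketch only -- not the project's static_evaluator tuner.
    # Each individual is a vector of evaluation weights; the fitness function
    # here is a placeholder (the real tuner would play anti-chess games with
    # the weights and score the results).
    import random

    POP_SIZE = 20          # assumed population size
    N_WEIGHTS = 6          # assumed number of evaluation weights
    MUTATION_RATE = 0.1    # assumed per-weight mutation probability

    def random_individual():
        return [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]

    def fitness(weights):
        # Placeholder fitness; stands in for "games won with these weights".
        return -sum((w - 0.5) ** 2 for w in weights)

    def crossover(a, b):
        cut = random.randrange(1, N_WEIGHTS)
        return a[:cut] + b[cut:]

    def mutate(weights):
        return [w + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else w
                for w in weights]

    def next_generation(population):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:POP_SIZE // 2]          # keep the top half
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(POP_SIZE - len(survivors))]
        return survivors + children

    population = [random_individual() for _ in range(POP_SIZE)]
    for generation in range(100):                   # the real run goes far longer
        population = next_generation(population)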

Two preliminary versions of the user interface (an all-text version and a text/window hybrid) have already been written and are fully functional. The user interface will not be adapted to accept mouse input by Tuesday; it probably will not be ready until this weekend. There are three reasons for this: Patrick has too much work in his other classes, Patrick underestimated the amount of time it would take to write the user interface, and Patrick has hurt his fingers typing. Otherwise, things are on schedule. Blackbox tests have been written and run for most of the clusters completed so far. There are no known bugs or expected problems.


Validation Strategies

It was left to each member of the group to decide on a validation strategy most appropriate to his modules. In addition, Steven verifies the correct behavior of the entire program when he links it together and tests it as a whole.

Steven

The main program mainly involves calling the other modules and interacting with the user, which does not lend itself to automated testing. However, it is fairly straightforward to run the program myself and verify that the various commands work as they should. This is made possible by the fact that (mostly) working versions of the other modules have existed for some time now. I have also constructed several erroneous data files in order to verify that file-format errors are correctly detected.

In order to test the chess_board data structure, I initially just used the main program, which invokes all of its functions. This is because it is difficult to verify that the correct moves have been made except by eye, looking at the output of the program. However, it has been possible to automate a series of tests by saving a sequence of commands in a text file and pasting them into the terminal window. In addition, the computer_player algorithm exercises the functions of chess_board very vigorously, making thousands of different moves as it looks ahead into the future. If the code were buggy, this process would be far more likely to expose errors than any small sample of test cases could.
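
A minimal sketch of that scripted testing, written in Python purely for illustration, appears below; the executable name and file names are placeholders, not the project's actual names. The idea is simply to feed a saved command sequence to the program and compare its output with a known-good transcript.

    # Hypothetical test driver: run the program on a saved command sequence and
    # compare its output against an expected transcript.  All names below are
    # placeholders for illustration.
    import subprocess

    def run_command_script(commands_path, expected_path, program="./antichess"):
        with open(commands_path) as f:
            commands = f.read()
        result = subprocess.run([program], input=commands,
                                capture_output=True, text=True)
        with open(expected_path) as f:
            expected = f.read()
        return result.stdout == expected

    if __name__ == "__main__":
        ok = run_command_script("moves.txt", "expected_output.txt")
        print("PASS" if ok else "FAIL")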

Tim

For the first few compilations of all of my programs, I always insert checks for debugging purposes. For instance, when writing the program that uses a genetic algorithm to search for an optimal static_evaluator, I unparsed all of the relevant information about how the program selected the top portion of the population for continued existence and sent this information to the screen.
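
The sketch below (in Python, for illustration only) shows the flavor of that temporary debug output: print which members of the population survive a selection step, along with their fitness values. The function names, survival fraction, and data are assumptions, not our code.

    # Illustration of temporary debug output during selection; not project code.
    def select_with_debug(population, fitness, keep_fraction=0.25, debug=True):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
        if debug:
            for i, member in enumerate(survivors):
                print(f"kept #{i}: fitness={fitness(member):.3f} weights={member}")
        return survivors

    if __name__ == "__main__":
        import random
        pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
        select_with_debug(pop, fitness=sum)   # toy fitness: sum of the weights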

Also, I have been compiling everything with the debug option until it becomes necessary to compile with the optimize option for speed. The debugging environment allows me to test each procedure individually without having to insert many compile-time checks.

In the future, once everything appears to be running smoothly, I plan to use automated testing to confirm that it actually is.

Patrick

There are four main techniques for making sure the user interface code is valid.

The first technique is to avoid making errors in the first place. This is the best method but is not always possible.

The second technique is blackbox testing. For each cluster of the user interface, a blackbox test program will be written which tests all of the operations of that cluster.

The third technique is glassbox testing. For particularly tricky parts where blackbox testing alone may be insufficient to find all errors, a glassbox test will be written to test the code based on the particular internals of the code.

The fourth testing method is to play with the final program and make sure it behaves as expected.
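
To make the blackbox idea in the second technique concrete, the sketch below shows the shape such a test program might take, written in Python purely for illustration. The text_window cluster and its operations are invented stand-ins; the point is that the test calls only the cluster's documented operations and never inspects its internals.

    # Blackbox-test sketch.  The text_window class is a stub standing in for a
    # hypothetical user-interface cluster; the test below uses only its public
    # operations and has no knowledge of how they are implemented.
    class text_window:
        def __init__(self, rows, cols):
            self.grid = [[" "] * cols for _ in range(rows)]
        def write(self, row, col, text):
            for i, ch in enumerate(text):
                self.grid[row][col + i] = ch
        def read(self, row, col, length):
            return "".join(self.grid[row][col:col + length])
        def clear(self):
            for line in self.grid:
                for i in range(len(line)):
                    line[i] = " "

    def test_text_window():
        w = text_window(24, 80)
        w.write(0, 0, "Anti-Chess")
        assert w.read(0, 0, 10) == "Anti-Chess"
        w.clear()
        assert w.read(0, 0, 10) == " " * 10

    if __name__ == "__main__":
        test_text_window()
        print("text_window: all blackbox tests passed")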


Implementation Overview

For the most part, the implementations of the procedures seemed fairly straightforward, so algorithms are not given (the code can be examined if necessary). Below, however, are the abstraction functions and rep. invariants for all the clusters, along with algorithms for a few of the procedures where necessary.