Anti-Chess Final Report

Steven G. Johnson, Tim Macinta, & Patrick Pelletier

TA: Leo Chang


Overview of Project

What can I say? It works, and we got the program completed on time. Only the tournament will tell if our computer player is any good, but it seems reasonable. The user interface is now totally graphical, and I must say that it looks pretty cool.

In general, I would have to say that our group has worked together remarkably well. We got organized very quickly; by the end of spring break all the major modules had specifications written (and some were partially implemented). Our basic strategy was essentially top-down: we tried to get minimal implementations of all the modules done very quickly, so that we would have a running program that could then be used for testing. There were several advantages to this. First, writing driver programs is a pain. Second, since the modules were integrated from the start, we never experienced any painful "integration phase." Third, progress on the program was very tangible, since we could see it evolving before our eyes; new features and bug fixes appeared immediately in the program.

I have to take this opportunity to compliment Patrick and Tim on the fine jobs they've done, not only in writing their own code, but in looking over the other code in the program and helping in the debugging and testing. They've put a lot of work into doing a good job and it shows.


Individual Progress Reports

Steven

The chess board was completed fairly early, with no big surprises. This was necessary since it was used so heavily by the other modules. Of course, bugs occasionally surfaced, but they were dealt with fairly quickly. The main program was a lot simpler and required less maintenance, except when the specifications of the other modules changed. This rarely happened, and so the main program remained essentially the same even as the user interface, etc., changed radically. This is a testament to the wisdom of our design, which abstracted user interaction into its own module, completely separate from the control and flow of the program.
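To make that separation concrete, here is a rough sketch (in Python, with hypothetical names; not the project's actual modules) of a main loop that never touches the user directly, so the interaction module can change from textual to graphical without the control flow noticing:

    class TextUI:
        # One possible user-interface module.  A graphical UI could replace it
        # without touching the main program, as long as it offers the same operations.
        def get_command(self):
            return input("command> ").strip()

        def show_board(self, board):
            print(board)

        def show_message(self, text):
            print(text)

    def main_loop(board, ui):
        # Main program: control flow only; every interaction with the user
        # goes through the ui object, never through print/input directly.
        while True:
            ui.show_board(board)
            command = ui.get_command()
            if command == "quit":
                ui.show_message("Goodbye.")
                return
            ui.show_message("Unrecognized command: " + command)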

Tim

Everything was pretty much on schedule except for what I have listed below.

The machine_player cluster has been written, but not tested yet. This is not a problem because the machine_player is completely separate from the final product that we produced and will be used only for participation in the tournament.

The static_evaluator that my genetic algorithm program produced may not be optimal. There seems to be a bias towards the black player. This bias may be inherent in the game, but I suspect it has to do with the way the computer_player budgets its time for each move, or the order in which the chess_board yields its pieces. Either way, I think that this bias may have hindered the determination of an optimal static_evaluator. If improvements were to be made in the determination of the computer_strategy, they should definitely be made in the direction of eliminating the bias, because the current static_evaluator that has resulted from my genetic algorithm plays better than any static_evaluator I have created by hand.
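For readers unfamiliar with the approach, the following is a minimal sketch of how a genetic algorithm can tune static-evaluation weights. It is written in Python with made-up piece names, population sizes, and a placeholder fitness function; none of these names or parameters are the ones Tim's program actually used:

    import random

    PIECES = ["pawn", "knight", "bishop", "rook", "queen", "king"]

    def random_weights():
        # One candidate static evaluator: a weight per piece type.
        return {p: random.uniform(-10.0, 10.0) for p in PIECES}

    def crossover(a, b):
        # A child takes each weight from one parent, chosen at random.
        return {p: random.choice((a[p], b[p])) for p in PIECES}

    def mutate(weights, rate=0.1):
        # Perturb a fraction of the weights with Gaussian noise.
        return {p: v + random.gauss(0, 1.0) if random.random() < rate else v
                for p, v in weights.items()}

    def evolve(fitness, population=20, generations=50):
        # fitness(weights) -> score, e.g. games won against a fixed opponent.
        pool = [random_weights() for _ in range(population)]
        for _ in range(generations):
            pool.sort(key=fitness, reverse=True)
            parents = pool[: population // 2]          # keep the better half
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(population - len(parents))]
            pool = parents + children
        return max(pool, key=fitness)

In the real program, the fitness of a candidate presumably comes from playing games, which is where any bias toward one color would leak into the evolved weights.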

Patrick

  1. April 9, 1995
  2. April 15, 1995
  3. April 25, 1995. Patrick: modify the user interface to accept mouse input. Status: not completed on time (see below).
  4. May 6, 1995
  5. May 13, 1995

Regarding Teleports

The implementation of the teleports modification deserves a few words. Essentially, this feature was added with little difficulty. The majority of the changes occurred in the chess_board module, which stores the teleport locations and arbitrates their use. No modifications were needed in the computer player, for instance, since it performs its look-ahead searches using an iterator provided by chess_board (all_possible_moves). The user interface also required some modification so that it both displays the teleport locations and accepts commands for changing them or turning them on and off.
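To illustrate why the computer player needed no changes, here is a hypothetical Python sketch of a move iterator that folds teleport moves in with the ordinary ones. The board methods, and the rule that a piece standing on an active teleport square may jump to any other empty teleport square, are assumptions made for the example, not the project's actual rules or interface:

    def all_possible_moves(board, color):
        # Yield (source, destination) pairs for every legal move by `color`.
        # Because teleport moves are generated here, any caller that searches
        # over this iterator (such as a look-ahead computer player) handles
        # teleports automatically, with no special cases of its own.
        for source, piece in board.pieces(color):
            # Ordinary moves, as defined by the piece's movement rules.
            for dest in piece.destinations(board, source):
                yield (source, dest)
            # Teleport moves (assumed rule, for illustration only).
            if board.teleports_enabled and source in board.teleport_squares:
                for dest in board.teleport_squares:
                    if dest != source and board.piece_at(dest) is None:
                        yield (source, dest)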

Validation Strategies

It was left to each member of the group to decide on the validation strategy most appropriate to his modules. In addition, Steven verified the correct behavior of the entire program by linking it together and testing it as a whole.

Steven

The main program itself mainly involves calling the other modules and interacting with the user, which does not lend itself to automated testing. However, it is fairly straightforward to run the program myself and test that the various commands work as they should. This is made possible by the fact that (mostly) working versions of the other modules have been in existence for some time now. I have also constructed a large collection of erroneous data files in order to verify that file format errors are correctly detected.
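One way such a collection of bad files could be exercised automatically is sketched below in Python; the load_board function, the exception convention, and the directory layout are all assumptions for illustration, not the project's actual interface:

    import glob

    def check_bad_files(load_board, pattern="tests/bad/*.board"):
        # Feed every deliberately malformed file to the loader and confirm
        # that each one is rejected.  load_board is assumed to raise
        # ValueError when it detects a file-format error.
        accepted = []
        for path in sorted(glob.glob(pattern)):
            try:
                load_board(path)
                accepted.append(path)      # the loader accepted a bad file
            except ValueError:
                pass                       # format error correctly detected
        if accepted:
            raise AssertionError("bad files accepted: " + ", ".join(accepted))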

In order to test the chess_board data structure, I initially just used the main program, which invokes all of its functions. This is because it is difficult to verify that the correct moves have been made except by eye, looking at the output of the program. However, it has been possible to automate a series of tests by saving a sequence of commands in a text file and pasting them into the terminal window. In addition, the operation of the computer_player algorithm exercises the functions of chess_board in a very vigorous way, making thousands of different moves as it looks ahead into the future. If the code were buggy, this process would be more certain to cause errors to appear than any small sample of test cases possibly could. Finally, I constructed several standard "test boards" containing strange cases to make sure that the program worked, especially in cases involving teleports.
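The command-sequence idea can also be scripted rather than pasted by hand. The sketch below assumes, hypothetically, that the program reads its commands from standard input and that a known-good transcript has been saved for comparison; the executable name and file layout are placeholders:

    import subprocess

    def replay(commands_file, expected_file, program="./antichess"):
        # Pipe a saved command sequence into the program and compare its
        # output against a previously inspected transcript.
        with open(commands_file) as commands:
            result = subprocess.run([program], stdin=commands,
                                    capture_output=True, text=True, check=True)
        with open(expected_file) as f:
            expected = f.read()
        if result.stdout != expected:
            raise AssertionError("output differs for " + commands_file)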

Tim

I basically used a bottom-up strategy in implementing my portion of the program, and this dictated, to a large degree, the type of validation strategy I used. I wrote a program for the determination of an optimal static_evaluator through a genetic algorithm. This program was entirely separate from the main program and the user interface, so it also served as a driver to test the computer_player and static_evaluator clusters.

For the vast majority of the time that I was testing and developing the computer_player and static_evaluator, I kept several run-time checks in the code, so that my program would continually report to the screen what was actually going on internally and I wouldn't get any surprises.

Also, whenever possible I compiled with the debug option so that I could take advantage of the tools offered by the debugger. This allowed me to check the values of various variables at run time and to easily test all the procedures with numerous hand-picked values without writing a full-fledged driver for the specific purpose of testing.

Patrick

Testing consisted mostly of blackbox testing and integration testing. For each cluster in the user interface, a blackbox test program was written to thoroughly test all of the features of that cluster. The user interface was also tested thoroughly by using it after it was integrated with the rest of the program.
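As an example of the flavor of such a blackbox test, consider the sketch below (in Python, against a made-up move-parsing routine rather than one of the actual user-interface clusters). The test cases are chosen from the routine's specification alone, without looking at its implementation:

    def parse_square(text):
        # Hypothetical UI helper: convert algebraic notation like "e4"
        # into zero-based (file, rank) coordinates.
        text = text.strip().lower()
        if len(text) != 2 or text[0] not in "abcdefgh" or text[1] not in "12345678":
            raise ValueError("bad square: " + repr(text))
        return (ord(text[0]) - ord("a"), int(text[1]) - 1)

    def test_parse_square():
        # Blackbox cases: valid squares, whitespace and case tolerance,
        # and inputs the specification says must be rejected.
        assert parse_square("a1") == (0, 0)
        assert parse_square(" H8 ") == (7, 7)
        for bad in ("", "i1", "a9", "a", "e44"):
            try:
                parse_square(bad)
            except ValueError:
                continue
            raise AssertionError(repr(bad) + " should have been rejected")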


Implementation Overview

For the most part, the implementations of the procedures seemed fairly straightforward, so algorithms are not given (the code can be examined if necessary). Below, however, are the abstraction functions and rep invariants for all the clusters, along with algorithms for a few of the procedures, where necessary.
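Before the per-cluster listings, here is a small generic illustration of the form such documentation takes, written in Python with a hypothetical rep that is not any of the project's actual clusters:

    class Board:
        # Hypothetical rep, for illustration only.
        #
        # Abstraction function:
        #   AF(self) = the position in which square (f, r) holds the piece
        #              self.squares[(f, r)], and is empty when (f, r) is absent.
        #
        # Rep invariant:
        #   every key (f, r) satisfies 0 <= f < 8 and 0 <= r < 8, and every
        #   value is a (color, kind) pair with color in {"white", "black"}
        #   and kind one of the six piece names.
        PIECES = {"pawn", "knight", "bishop", "rook", "queen", "king"}

        def __init__(self):
            self.squares = {}          # (file, rank) -> (color, kind)

        def _check_rep(self):
            for (f, r), (color, kind) in self.squares.items():
                assert 0 <= f < 8 and 0 <= r < 8
                assert color in ("white", "black") and kind in self.PIECES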