This page allows you to download the software described in our article ("Evolving artificial neural networks to control chaotic systems", E. R. Weeks and J. M. Burgess, published in Phys. Rev. E 56, pp 1531-1540, August 1997). Click here to download a PDF copy. If you have questions or comments about our work, we'd love to hear from you; send Eric email: erweeks / emory.edu (change / to @). Note that while I'm happy to answer questions directly related to this web page, I have not done any research with neural networks or controlling chaos since 1997, so I'm not an expert on those topics. I greatly thank the Internet Archive Wayback Machine for archiving the old version of this web page, from which I restored the page you're now reading.
The software and algorithm are copyrighted by Eric R. Weeks and John M. Burgess, 1998. They may be freely distributed with proper attribution and full acknowledgement of the copyright.
You have several choices for downloading the source code:
This package comes with a Makefile and should compile without problems on any UNIX system, although you may want to change the line that says CC=cc to CC=gcc if you prefer the gcc compiler. To compile, type make. This creates an executable file called dsane.x.
The program is designed to be easy to use, allowing you to do simple things through command-line options and more complicated things by modifying some of the files and recompiling. Without modifying anything, the program is set to try to control the Henon map; the default parameters should work without any problems.
To compile, see above. Compiling creates an executable file called dsane.x. To run, you must specify a suffix which will be used to name the output files (see below). The simplest way to run the program is:

dsane.x test1

which will generate output files named evol.test1, popul.test1, etc. If you wish to use command-line options, put them between "dsane.x" and the suffix:
dsane.x -p 1.4 -E 0.003 test2
Right now you can do three different maps:
Henon Map (default): X'=P + 0.3 Y - X^2, Y' = X
Logistic Map: X' = P X (1 - X)
"User Map": X' = (P - X - 0.1 Y) X, Y' = (P - Y - 0.15 X) Y
Command line options:
There are a total of seven files, as well as a Makefile, that comprise the DSANE program. Several of these files are simple to modify. The seven files are:
When the program runs, it generates the following output files (note that the suffix is set by the command line):
para.suffix: this file contains a listing of all of the important parameters describing this trial. Some of these parameters can be modified using command line options; others require editing the file "params.h" and recompiling the program.
control.suffix: every ten generations, the best network of the current generation is tested 200 times, allowing the network 3000 iterations to try to control the map. This file contains the results: the average number of iterations needed to control the map, the number of times the network failed to control, the earliest iteration control was achieved, and the latest iteration control was achieved. Due to the simplistic nature of the test for control, networks which fail to control the map may still "control" the map for the last one or two iterations; when this happens, the earliest iteration at which control was achieved is reported as 2998 or so.
evol.suffix: information about the population fitness at each generation. The first column is the generation number. Next is the fitness of the best network tested that generation, then the average fitness of the neurons in the population. The last four columns compare Directed SANE to the "original" method (for details of the two techniques, see the Appendix in our paper). They typically look something like this: "D 2 o 7". The capital D indicates that Directed SANE formed the best network that generation; a capital O would indicate that the original method formed the best. The 2 and 7 mean that this generation, Directed SANE formed 2 networks better than the previous generation's best fitness, and the original method formed 7 such networks.
net.suffix: a description of the best neural network formed so far. When the program is finished this is the best network that was found. The file contains a description of the network topology, and a listing of the weights connecting the hidden neurons to the input and output layer.
lambda.suffix: contains information used to calculate fitness, for the best network each generation. The first column is the generation number. The second column is the number of iterations the behavior of the system was near the fixed point (as defined by successive iterations being close to each other; the fixed point location isn't known). The third column is the value of lambda that was found, which indicates that near the fixed point the successive differences generally shrink geometrically: |x(n+1) - x(n)| is approximately lambda |x(n) - x(n-1)|.
best.suffix: contains a sample run from the best network found so far; after the program exits, this is a sample run from the best network found overall. The data is in four columns: (iteration n, x(n), delta P 1 (n), delta P 2 (n)).
dyn.suffix: each generation, the best network is retested, and the last N iterations of that test are saved to this file. The default value of N is 0, in which case this file isn't written; to change it, use the -y option (see the command line options above). Each line in this file gives the generation, two values for fitness, and then the data. The first fitness value is the fitness the best network had the first time it was tested (which is how we decided it was the best network), and the second is the fitness found during the retest.
There are several ways to modify the program: