Controlling Chaos with Neural Networks


This page describes a research project by Eric R. Weeks and John M. Burgess. This web description is not technical; if you'd like the more technical version, read our paper: "Evolving artificial neural networks to control chaotic systems", E. R. Weeks and J. M. Burgess, published in Phys. Rev. E 56, 1531-1540, August 1997. Click here to download a PDF copy.

If you have questions or comments about our work, send Eric email: erweeks / emory.edu (change / to @). Also, you can download the software that implements our algorithm. Click here for more information on downloading the software. And, I greatly thank the Internet Archive Wayback Machine for having archived the old version of this web page, from which I restored the page that you're now reading.

The software and algorithm are copyrighted by Eric R. Weeks and John M. Burgess, 1998. They may be freely distributed with proper attribution and full acknowledgement of the copyright.


Introduction

We had both heard that neural networks could do everything from improving production in factories to bringing about world peace, so we decided to take a neural networks class from Risto Miikkulainen. For our class project we tried to control chaotic systems using a neural networks approach. After much work, we succeeded. The algorithm we use is closely related to an algorithm developed by David Moriarty and Risto Miikkulainen, and is described in their article in Machine Learning 22, 11 (1996).

We use a genetic algorithm that evolves neural networks into feedback controllers for chaotic systems. The algorithm was tested on the logistic and Henon maps, for which it stabilizes an unstable fixed point using small perturbations, even in the presence of significant noise. The network training method requires no previous knowledge about the system to be controlled, including the dimensionality of the system and the location of unstable fixed points. This is the first dimension-independent algorithm that produces neural network controllers using time-series data.


Chaotic Maps

Chaos is characterized by four simple ideas. First, chaotic systems are deterministic, meaning they obey some simple rules; in general this means that we can predict their behavior for short times. Second, chaotic systems have sensitive dependence on initial conditions, which means we can't predict their behavior for long times. The weather is an example of a system with these two traits: you can predict the weather five minutes from now with excellent success, the weather tomorrow is reasonably predictable, and the weather two weeks from now is a mystery. Third, chaotic systems generally have underlying patterns, sometimes called attractors. For the weather, the climate is in some sense an attractor: we can't predict the weather two weeks from now, but we can safely say Texas won't be covered with blizzards. Fourth, chaos is generally thought of as low-dimensional, meaning it can be described by a few simple variables. The weather is not low-dimensional; to describe the weather accurately might require knowing the temperature at thousands of points, as well as many other variables such as moisture, cloud cover, etc.

Chaotic maps are the simplest form of chaos. We consider the logistic map:

X' = P X (1 - X)

and the Henon map:

X' = P + 0.3 Y - X^2, Y' = X;

the primes represent the new variables, which are found from the current values of the variables. By iterating these maps repeatedly (taking the new values of the map as the old values), the variables undergo chaotic dynamics for certain values of P.
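
As a concrete illustration, here is a minimal Python sketch of iterating both maps. The parameter values below (P = 3.9 for the logistic map, P = 1.4 for the Henon map) are illustrative choices in the chaotic regime, not values taken from our paper.

    def logistic(x, p):
        """One iteration of the logistic map: X' = P X (1 - X)."""
        return p * x * (1.0 - x)

    def henon(x, y, p):
        """One iteration of the Henon map: X' = P + 0.3 Y - X^2, Y' = X."""
        return p + 0.3 * y - x * x, x

    # iterate each map from an arbitrary starting point
    x = 0.5
    for _ in range(100):
        x = logistic(x, 3.9)

    x, y = 0.5, 0.5
    for _ in range(100):
        x, y = henon(x, y, 1.4)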

By plotting successive values (X', X) for the Henon map, a strange attractor can be seen. This is an attracting set with a fractal dimension; a full explanation is beyond the scope of this page. The picture below shows the attractor, with the two fixed points (for which X' = X) circled:

[Figure: the Henon attractor, with the two fixed points circled.]


Fixed Points

If the map were started with X and Y at their fixed-point values, the system would stay at the fixed point. In practical situations, noise prevents the system from being exactly at the fixed point. However, when the map is near the fixed point, the system can be stabilized by making small perturbations to P. Other methods for controlling chaos exploit this: generally, they use clever analytical techniques to study the behavior of the map near the fixed point and determine the ideal perturbations needed to keep the system there. It is the existence of these fixed points that makes it possible to control chaotic systems.
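
To make this concrete, here is a minimal sketch (in Python, not taken from our software) of the kind of analytical control such methods perform on the logistic map. For that map, the fixed point X* = 1 - 1/P and the local slope f'(X*) = 2 - P can be written down by hand, and a small perturbation dP cancels the deviation from X* to first order. The control threshold and perturbation cap below are illustrative assumptions.

    import random

    P = 3.9                           # illustrative parameter in the chaotic regime
    x_star = 1.0 - 1.0 / P            # fixed point: X* = P X* (1 - X*)
    slope = 2.0 - P                   # local slope f'(X*) = P (1 - 2 X*)
    dfdp = x_star * (1.0 - x_star)    # sensitivity of the map to changes in P

    x = random.random()
    for n in range(200):
        dp = 0.0
        if abs(x - x_star) < 0.05:               # only act near the fixed point
            dp = -slope * (x - x_star) / dfdp    # cancel the deviation to first order
            dp = max(-0.1, min(0.1, dp))         # keep the perturbation small
        x = (P + dp) * x * (1.0 - x)             # iterate the perturbed map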

Our goal was to devise a neural networks method that would not need to know where the fixed point was located, nor would it need analytical techniques to determine the ideal perturbations.


Neural Networks

Click here for a more detailed explanation of neural nets. For the purposes of understanding our method, think of a neural network as a "black box" which observes the chaotic map values X for successive iterations and determines a perturbation dP to add to P in order to try to stabilize the chaotic map. Neural networks have many internal parameters (the "weights") which determine how the output dP is computed from the inputs; choosing the weights determines how the network behaves.
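
As a rough sketch of such a black box (an illustrative toy, not the architecture from our paper), the network below reads the last few values of X and produces a bounded perturbation dP. The layer sizes, tanh units, and output scaling are all assumptions made for the example.

    import math
    import random

    class ControllerNet:
        def __init__(self, n_inputs=4, n_hidden=6, dp_max=0.1):
            self.dp_max = dp_max
            # the weights are the internal parameters that evolution will tune
            self.w_in = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                         for _ in range(n_hidden)]
            self.w_out = [random.uniform(-1, 1) for _ in range(n_hidden)]

        def perturbation(self, recent_x):
            """Map the last n_inputs values of X to a bounded perturbation dP."""
            hidden = [math.tanh(sum(w * x for w, x in zip(row, recent_x)))
                      for row in self.w_in]
            out = math.tanh(sum(w * h for w, h in zip(self.w_out, hidden)))
            return self.dp_max * out   # keep |dP| <= dp_max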


Genetic Algorithms

The real crux of our method is to use an evolutionary approach to determine the weights for the neural networks. This approach depends on a fitness function, which examines a given neural network and scores how well it does at trying to stabilize the chaotic map.

Again, for the details of our method you'll want to read the paper, but here is a conceptual description. We start with a population of 100 neural networks and find the fitness of each of them. The best ones we keep; the worst ones we replace with copies of the best ones, to which we make small random changes. Sometimes the random changes result in networks which are improved. As we iterate this process of evaluation & replacement, we gradually find better and better neural networks. At some point, we have a network which "works": it is able to examine successive values of one of the chaotic map's variables (say X) and determine the perturbations dP necessary to keep the system near the fixed point. The evolution is simply the process of determining the best weights for the neural network.
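
The sketch below shows this evaluation-and-replacement loop, reusing the toy ControllerNet and logistic map from the earlier sketches. The fitness function here simply rewards iterates that stay close to their predecessors, a signature of being pinned near a fixed point X' = X, so it needs no knowledge of where the fixed point is; it is a simplified stand-in for the fitness function defined in our paper.

    import copy
    import random

    def fitness(net, p=3.9, steps=500, window=4):
        """Score a network by how often successive iterates stay close together."""
        recent = [random.random() for _ in range(window)]
        score = 0
        for _ in range(steps):
            dp = net.perturbation(recent)
            x_new = (p + dp) * recent[-1] * (1.0 - recent[-1])
            if abs(x_new - recent[-1]) < 0.01:   # near a fixed point, X' is close to X
                score += 1
            recent = recent[1:] + [x_new]
        return score

    population = [ControllerNet() for _ in range(100)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)   # best networks first
        # keep the best half; replace the worst half with mutated copies of the best
        for i in range(50, 100):
            child = copy.deepcopy(population[i - 50])
            for row in child.w_in:
                for j in range(len(row)):
                    row[j] += random.gauss(0.0, 0.1)   # small random changes
            for j in range(len(child.w_out)):
                child.w_out[j] += random.gauss(0.0, 0.1)
            population[i] = child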


Results

Each iteration of evaluation & replacement is called a generation. The graph below shows the fitness of the best network in each generation.

[Figure: fitness of the best network in each generation.]

The behavior of the chaotic map, with perturbations applied by the neural networks at various stages of evolution, is shown below:

[Figure: behavior of the controlled map at four stages of evolution, panels (a)-(d).]

Early on, in (a), the networks have very little idea how to control the system. As the evolution proceeds, they begin learning how to keep the system near the fixed point [(b) and (c)]. Finally, a dramatic increase in fitness occurs (d), and the networks can stabilize the fixed point indefinitely once the system gets near it.


Other Results

We have obtained a number of other results as well; for details, see the paper.


Questions?

If you have any questions or comments, send Eric email: erweeks / emory.edu (replace / with @)

Also, our paper has a coherent explanation of genetic algorithms and neural networks.


Eric's home page