Simple GA
GAs are optimization algorithms inspired by biological evolution [2]. They have been shown to be very effective at function optimization, efficiently searching large and complex spaces to find near-global optima. A simple GA uses three operators in its quest for improved solutions: reproduction, crossover, and mutation. These operators are implemented by performing the basic tasks of copying binary strings, exchanging portions of strings, and generating random numbers, respectively.
Reproduction is a process in which strings with high performance indexes receive a correspondingly large number of copies in the new population. For instance, in roulette wheel reproduction, each string is given a number of copies proportional to its fitness. The probability of selection for reproduction is defined in Equation (11.1).
$$P_{select} = \frac{f_i}{\sum_j f_j} \tag{11.1}$$
where
| P_select | = probability of a string being selected for reproduction, and |
| f_i | = fitness value of the ith string. |
Reproduction drives a population toward highly fit regions of the search space.
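Roulette wheel selection as defined by Equation (11.1) can be sketched as follows. This is a minimal illustration; the function name and population representation are not from the study:

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one string with probability proportional to its fitness (Eq. 11.1)."""
    total = sum(fitnesses)
    # Spin the wheel: a uniform draw on [0, total] lands in slot i
    # with probability f_i / sum_j(f_j).
    spin = random.uniform(0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if spin < cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Repeating the spin once per slot in the new population gives fitter strings proportionally more copies, which is the reproduction behavior described above.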
Crossover provides a mechanism for information exchange between high-performance strings. Crossover can be achieved in three steps:
1. Two strings are selected at random from the mating pool.
2. A crossover site along the string length is selected at random.
3. All characters following the crossover site are exchanged between the two strings.
An example of a crossover is shown in Figure 11.1. The binary-coded strings A and B, each of length 10, are crossed at the third position, producing two new strings.
Figure 11.1 Example of GA crossover.
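The crossover of Figure 11.1 can be sketched as below. The example strings here are made up for illustration; only the string length (10) and crossover site (third position) follow the figure:

```python
def single_point_crossover(a, b, site):
    """Exchange the tails of two bit strings after the crossover site."""
    # Everything after position `site` is swapped between the parents.
    return a[:site] + b[site:], b[:site] + a[site:]

# Two length-10 binary strings crossed at the third position:
A, B = "0110010111", "1010001100"
A2, B2 = single_point_crossover(A, B, 3)  # A2 = "011" + "0001100", etc.
```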
Mutation enhances a GA's ability to find near-optimal solutions by providing a mechanism for inserting missing genetic material into the population. Mutation consists of the occasional alteration of the value at a particular string position. This procedure insures against the permanent loss of any particular value at any bit position.
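Bit-flip mutation of the kind described above can be sketched as follows. The function name and the default rate are illustrative, not taken from the study:

```python
import random

def mutate(string, rate=0.01):
    """Flip each bit of a binary string with a small probability (the mutation rate)."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < rate else bit
        for bit in string
    )
```

Because the rate is small, most strings pass through unchanged; the occasional flip reintroduces bit values that selection and crossover may have eliminated from the population.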
Together, reproduction, crossover, and mutation provide the ingredients necessary for an effective GA. This simple GA model is employed in the current study.
Recurrent NN
Recurrent Neural Networks (RNNs) are experiencing increasing popularity because of their inherent dynamic nature. In RNNs, the outputs of individual neurons are fed back as inputs to other neurons. The general structure of an RNN with BP learning is shown in Figure 11.2. In this figure, the circles represent the neurons of the RNN and the arrows are RNN connections. Each connection has its own strength, commonly called a weight. In typical BP learning, the weights are adjusted in order to obtain the desired output values from the RNN inputs. The RNN inputs can be any crisp values that are related to the RNN output values. The input values are usually obtained from the environmental state, and the outputs are the predicted action or consequence of the input. Therefore, like most NNs, RNNs attempt to capture the relationship between input and output values inherent in a particular problem. As shown in Figure 11.2, an RNN neuron has connections from every other neuron to its left at time period t. Every neuron also has connections from itself and from every other neuron to its right at time t+dt. Here, time has no meaning in the physical environment; it is used exclusively to mark iterations through an NN learning cycle.
Figure 11.2 Recurrent NN with BP learning structure.
The operation of RNNs with BP learning consists of two parts: (1) a forward pass and (2) a backward pass. The primary role of the forward pass is to predict an output response from a given input. The outputs are a definite function of the input. When an individual neuron receives an input, the input goes through an activation function within the neuron and generates a neuron output. The activation function can take many forms. Generally, it is a nonlinear function, but its only true limitation is that it must be differentiable. In this study, sigmoidal functions (Figure 11.3) are used. Equations 11.2 to 11.5 are necessary to accomplish a forward pass. Equation 11.2 represents the inputs U received by the RNN. These inputs are multiplied by weights in Equation 11.3, and the resulting sums are processed by the activation function f in Equations 11.4 and 11.5. The effect of a bias in the neurons is achieved by assuming that the first input is always unity and is connected to all the other neurons.
$$U(t) = [u_1(t), u_2(t), \ldots, u_m(t)], \qquad u_1(t) \equiv 1 \tag{11.2}$$

$$s_j(t) = \sum_{i=1}^{N} W_{ij}\, x_i(t-1) \tag{11.3}$$

$$x_j(t) = f\big(s_j(t)\big), \qquad j = m+1, \ldots, m+h \tag{11.4}$$

$$Y_k(t) = f\big(s_{m+h+k}(t)\big), \qquad k = 1, \ldots, n \tag{11.5}$$
where
| t | = current time frame, |
| t-1 | = previous time frame, |
| U(t) | = net inputs, |
| x(t) | = neuronal activations, |
| Y(t) | = net output, |
| f | = activation function, |
| Wij | = weight connecting the ith neuron to the jth neuron, |
| m | = number of inputs, |
| h | = number of hidden neurons, |
| n | = number of outputs, |
| N | = total number of neurons (m + h + n). |
Figure 11.3 Sigmoidal activation function.
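A forward pass of the kind described in Equations 11.2 to 11.5 can be sketched as below. The function names are illustrative, and the connection scheme is a simplifying assumption of this sketch: every hidden and output neuron sees all activations from the previous time step, while the chapter's same-step links to neurons "to the left" are omitted for brevity:

```python
import math

def sigmoid(s):
    """Sigmoidal activation function (Figure 11.3)."""
    return 1.0 / (1.0 + math.exp(-s))

def forward_pass(U, x_prev, W, m, h, n):
    """One forward pass of a fully recurrent net (a sketch of Eqs. 11.2-11.5).

    U      : list of m external inputs at time t (first input fixed at 1.0 for bias)
    x_prev : activations of all N = m + h + n neurons at time t-1
    W      : N x N weight matrix, W[i][j] = weight from neuron i to neuron j
    """
    N = m + h + n
    x = list(U) + [0.0] * (h + n)      # input neurons simply hold U(t)
    for j in range(m, N):              # hidden and output neurons
        # Weighted sum of previous-step activations (cf. Eq. 11.3)
        s = sum(W[i][j] * x_prev[i] for i in range(N))
        x[j] = sigmoid(s)              # activation function f (Eqs. 11.4-11.5)
    Y = x[m + h:]                      # the last n activations are the net outputs
    return x, Y
```

With all weights at zero, every hidden and output neuron produces sigmoid(0) = 0.5, which is a quick sanity check on the wiring.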
The second fundamental operation, the backward pass, is where the learning or adaptation occurs. In a backward pass, errors associated with the RNN's performance are used to adjust the weights of the connections. Here, the error is a sum of squared differences between the desired output (for the given input) and the output actually produced by the RNN. This adaptation (BP learning in this chapter) implies a modification of the RNN structure and its parameters based on repeated exposure (epochs) to input-output pairs collected from the environment. Equations 11.2 to 11.6 are necessary to accomplish a backward pass. RNN learning involves an evaluation of RNN performance, which is measured via an error computed using Equation 11.6. The differences between the RNN outputs (Y_k(t)) and the RNN desired outputs (d_k(t)) are squared and summed:
$$E(t) = \sum_{k=1}^{n} \big(d_k(t) - Y_k(t)\big)^2 \tag{11.6}$$
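The error computation described above can be sketched as follows. The function name is illustrative; some BP derivations also include a factor of 1/2 for convenience when differentiating, which is omitted here:

```python
def sse_error(desired, actual):
    """Sum of squared differences between desired and actual RNN outputs (Eq. 11.6)."""
    return sum((d - y) ** 2 for d, y in zip(desired, actual))
```

This scalar error is what the backward pass differentiates with respect to each weight W_ij in order to adjust the connections.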