The Current State of the Art
There is no standard LCS. Notable LCS implementations include those arising from the research of Wilson (1986; 1994; 1995). Wilson's work centers on the use of LCSs in the simulation of adaptive behavior. His most recent effort is an advanced LCS called XCS (Wilson, 1995). As a more introductory system, Wilson provides ZCS, a zeroth-level LCS that has the basic parts listed in this tutorial, excluding a message list.
The work of Smith and Cribbs (1994) relates artificial neural networks (ANNs) to the LCS. While ANNs are not identical to LCSs, a considerable body of ANN results may be applicable to LCS research.
An initial framework by Valenzuela-Rendon (1991) introduced the principles of fuzzy logic to the LCS. More recently, Bonarini (1996) has extended the fuzzy LCS concept. Fuzzy logic, like ANNs, may provide a rich medium of cross-talk to inspire future LCS research.
Finally, a variant of the GA, genetic programming (GP), shows considerable promise for the LCS. GP is a tree-based evolutionary algorithm that evolves program trees. These variable-length structures may allow complex relationships to be mapped, which is extremely useful for LCSs, where the system must learn by interacting with its environment. Little work has been done, but the advantages are quite attractive. Tufts (1995) has developed a GP-based LCS, and Wilson's ZCS paper notes that Lisp S-expressions, which are similar to the trees found in GP, might provide a compatible representation for LCS rules (Wilson, 1994).
ARTIFICIAL NEURAL NETWORKS AND THE LCS
Artificial neural networks (ANNs) provide a method to illustrate many of the ideas embodied by the LCS. This study shows a simple ANN application that classifies a series of binary patterns into a single binary signal. Linearly inseparable Boolean problems have become a staple of ANN benchmarks. These results stem from Smith and Cribbs (1994).
Input connectivity selection has been shown to extend the perceptron (Wilson, 1990). The addition of input selection provides a method to nonlinearly partition inputs. Using the premise of input partitioning, it becomes clear that LCS rules partition the input space through use of the don't care operator (#), while ANNs ignore inputs through zero-valued weights (see Figure 16.2). The LCS uses 0, 1, and # to denote what values the conditions must meet for the rule to fire. A simple transformation of this scheme to -1, +1, and 0 easily emulates similar traits for the ANN (Smith & Cribbs, 1994). Figure 16.2 shows the overall LCS-like ANN. The ANN structure, presented in Figure 16.3, contains a hidden layer of nonlinear (bipolar sigmoid) activation functions and a single output node (linear sum function). The update of weights is constrained to the hidden- and output-layer weights only.
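The {0, 1, #} to {-1, +1, 0} transformation above can be sketched in a few lines. This is a minimal illustration of the encoding described by Smith and Cribbs (1994); the function names are illustrative, not taken from the original paper.

```python
# Map an LCS condition string over the ternary alphabet {0, 1, #}
# onto input weights for an ANN computational unit (cf. Figure 16.2).

def condition_to_weights(condition):
    """Encode a condition as input weights: 0 -> -1, 1 -> +1, # -> 0."""
    mapping = {"0": -1.0, "1": +1.0, "#": 0.0}
    return [mapping[c] for c in condition]

def matches(condition, message):
    """An LCS rule fires when every non-# position equals the input bit."""
    return all(c == "#" or c == m for c, m in zip(condition, message))

# A # position contributes a zero weight, so that input is ignored --
# the ANN analogue of the LCS don't-care operator.
print(condition_to_weights("1#0"))   # [1.0, 0.0, -1.0]
print(matches("1#0", "110"))         # True: the middle bit is a don't care
print(matches("1#0", "011"))         # False: the first bit mismatches
```

With this encoding, a rule that ignores an input and a neuron with a zero weight on that input behave identically, which is the correspondence Figure 16.2 illustrates.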
Figure 16.2 Mapping of an LCS rule onto an ANN computational unit (neuron).
In an LCS, the GA is responsible for rule discovery. The same premise can be used to evolve ANNs. Assuming a Michigan-style LCS model, the hidden layer of a multi-layer ANN may be considered the ANN's rule set. The input-to-hidden-layer connectivity of an ANN may be treated as the conditions for the individual rules (neurons). Fitness governs which rules, or neurons, are selected for evolution. Many fitness measures could be used, but effective measures appear to be those based on correlating the error (reward) signal to the active rules (or neurons). Fahlman (1990) suggests a similar form of correlation measure to grow cascade-correlation networks.
Equation 16.2 shows the fitness function used in an LCS-inspired ANN (Smith & Cribbs, 1994; Cribbs, 1995).

The equation is shown in update fashion. The function σ denotes the activation function for the hidden-layer neurons, and E complement (the bar above E) denotes when the ANN's training-case (mean square) error is below a given threshold. This digital error threshold governs what the GA considers the error state of the ANN: as long as the ANN's error is below this value, E is 0, indicating a no-error state. Using this logical format, the rest of the equation acts as a bonus function that awards additional fitness points for firing while the error threshold has not been exceeded. The multiplier |wij| is the magnitude of the output weight from hidden neuron i to output node j; it promotes bonuses that are not parasitic. Parasites, in this sense, are rules (or neurons) that act similarly to other rules active at the same time but do not participate additively, i.e., parasites do not affect the result appropriately.
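Since Equation 16.2 is not reproduced here, the update it describes might be sketched as follows. This is a hedged reconstruction from the prose only: the assumption is that each hidden neuron's fitness accumulates a bonus of σi·|wij| whenever the network is in the no-error state (E = 0), and the function and parameter names are illustrative, not from Smith and Cribbs (1994).

```python
# Hedged sketch of the fitness bonus described for Equation 16.2.
# Assumption: fitness grows by sigma_i * |w_ij| per training case,
# but only while the digital error flag E is 0 (E-bar is true).

def update_fitness(fitness, activations, output_weights, error_flag):
    """Add a firing bonus to each hidden neuron's fitness.

    fitness        -- current fitness value per hidden neuron
    activations    -- sigma_i, hidden-layer activations for this case
    output_weights -- w_ij, hidden-to-output weights (single output j)
    error_flag     -- E: 1 if the MSE exceeds the threshold, else 0
    """
    if error_flag:  # E-bar is false: no bonus while in an error state
        return fitness
    # Weighting by |w_ij| withholds bonuses from parasitic neurons that
    # fire alongside useful ones but contribute little to the output.
    return [f + s * abs(w)
            for f, s, w in zip(fitness, activations, output_weights)]

fit = update_fitness([0.0, 0.0], [0.9, 0.1], [2.0, -0.5], error_flag=0)
print(fit)  # [1.8, 0.05]
```

A strongly firing neuron with a large-magnitude output weight earns the largest bonus, which matches the anti-parasite rationale in the text.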
A typical GA attempts to evolve the individuals in a population until the population consists largely of one type of individual. This form of evolution does not suit the Michigan-style LCS, or ANNs, because a diverse collection of rules must be maintained for the system to interact efficiently with its environment. To accomplish this task, several methods have been developed to preserve diversity within a population. Goldberg (1989) developed a computational method called explicit sharing. The premise of sharing is to measure the distance, in genotype space, of each individual with respect to the rest of the individuals in the population; each individual's fitness is then degraded based on its proximity to the other members of the population.
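The sharing scheme above can be sketched briefly. The Hamming distance and the triangular sharing function used here are common choices for bit-string genotypes, assumed for illustration rather than mandated by the text; `sigma_share` is the conventional name for the niche radius.

```python
# Minimal sketch of explicit fitness sharing (Goldberg, 1989).
# Each individual's raw fitness is divided by its niche count, so
# clusters of similar rules share payoff and diversity is preserved.

def hamming(a, b):
    """Genotype-space distance between two bit strings."""
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(population, raw_fitness, sigma_share):
    """Degrade each individual's fitness by its niche count."""
    shared = []
    for i, ind in enumerate(population):
        # Niche count: sum of sharing contributions from individuals
        # closer than sigma_share (including the individual itself).
        niche = sum(max(0.0, 1.0 - hamming(ind, other) / sigma_share)
                    for other in population)
        shared.append(raw_fitness[i] / niche)
    return shared

pop = ["1100", "1101", "0011"]
fit = shared_fitness(pop, [4.0, 4.0, 4.0], sigma_share=2.0)
# The two similar individuals crowd one niche, so their fitness is
# degraded more than that of the distinct third individual.
print(fit)
```

The effect is that rules covering an already crowded region of the input space look less attractive to the GA, so distinct rules survive alongside them.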