

Chapter 16
What Can I Do with a Learning Classifier System?

H. Brown Cribbs, III
Robert E. Smith

Department of Aerospace Engineering and Mechanics
The University of Alabama
Box 870280
Tuscaloosa, AL 35487-0280
e-mail: brown@galab3.mh.ua.edu

ABSTRACT

The learning classifier system (LCS) is an application of the genetic algorithm (GA) to machine learning. Artificial neural networks (ANNs) map input vectors to outputs in much the same way an LCS does. This chapter introduces the LCS paradigm and provides literature references for further investigation. Through the use of LCS principles, an ANN becomes a variable-structure production system, capable of making complex input-output mappings similar to those of an LCS.

The evolutionary process of a single ANN facilitates a broad understanding of how evolution may help rule-based (or neuron-based) systems. An evolutionary approach to ANN structure is reviewed. Its similarities to the LCS are discussed.

A simple extension to Smith and Cribbs’ (1994) and Cribbs’ (1995) work on an ANN and LCS analogy is presented. The experiment removes the nonlinearity of the ANN’s output layer to assess the nonlinear effects of the GA’s partitioning within the hidden layer. The results indicate that GA-induced nonlinearity actively participates in the solution of a difficult Boolean problem: the six multiplexor problem.

INTRODUCTION

Learning classifier systems (LCSs) were developed by Holland in 1978 (Holland & Reitman, 1978). Since its advent, the LCS has been a favorite of complex systems researchers and economists for its unique structure (Holland, 1993; Wilson, 1986). The use of genetic algorithms (GAs) allows the LCS to refine its rule base, improving its performance.

Artificial neural network (ANN) research began almost at the birth of electronic computers, but it was not until 1986 that the backpropagation training method emerged (Rumelhart, Hinton, & Williams, 1986). Since that time, the ANN’s popularity has grown; subsequently, proofs of its abilities as a universal approximator of arbitrary precision have emerged in due course (Hornik et al., 1989; Cybenko, 1988).

Wilson (1990) presents an interesting extension to perceptron networks: genetically evolved input partitions. Perceptrons (Rosenblatt, 1958) represent one of the simplest ANN types. Unfortunately, the work of Minsky and Papert (1969) showed that perceptrons were not capable of learning linearly inseparable tasks. Wilson’s work (1990) provides a workable solution to linearly inseparable problems and shows that input partitioning (input selection) adds another degree of nonlinearity to perceptron networks. Working from this premise, Smith and Cribbs (1994) used Wilson’s perceptron experiment as a starting point to investigate analogies between multi-layer, feedforward ANNs and LCSs.

The basic LCS paradigm does much the same task as Wilson’s 1990 work (Smith & Cribbs, 1994). Linearly inseparable classification tasks have become a favorite of LCS researchers and ANN researchers alike. This chapter introduces the basic principles of the LCS and mentions many advanced features to complete the reader’s exposure to the LCS. After the introduction to the LCS, the discussion turns to the similarity in features of the LCS and ANNs.

LEARNING CLASSIFIER SYSTEMS

A Learning Classifier System (LCS) is a rule-based system that learns by interacting with its environment. The LCS observes its environment and notes regularities within that environment. From these observations, the LCS forms rules that dictate how the LCS acts.

LCS rules are linguistic in nature and can be thought of as IF-THEN statements. For instance, a rule to control a robot wandering about a room might be,

if an object is to my right AND in front of me
then turn left.

This sort of rule may be written in a shorthand notation, which eases storage and manipulation within the computer. A simple notation drops the if and then and lists the condition(s) and action(s) separated by a slash. The rule above might appear in shorthand as,

# # # 1 1 / 1 0 0.

The left-hand side (LHS) of the rule above is called the condition under which the rule applies. The right-hand side (RHS) of the rule is the action the rule advocates. The condition side of the rule also has three fields with # (hash) characters in addition to the two 1’s. The # is a special operator implementing the Boolean “don’t care” operation. The two ones on the LHS are said to be defined bits, i.e., they denote the necessary conditions for the rule to be applicable. In the case of the rule above, the first one may be taken to mean, “an object is present to my right,” and the second one similarly denotes the presence of an object in front of the robot. The RHS may be encoded by assuming there are three discrete actions: turn left, turn right, and move forward. The above rule would then have the action of “turn left, not right, and don’t move.”
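The matching behavior described above can be sketched in a few lines of code. The following is a minimal illustration, not from the chapter, assuming the five-bit condition and three-bit action encoding of the robot rule; the Rule class and its names are hypothetical.

```python
class Rule:
    """A classifier rule: a condition string over {0, 1, #} and an action string."""

    def __init__(self, condition, action):
        self.condition = condition  # e.g. "###11": '#' is the "don't care" operator
        self.action = action        # e.g. "100": turn left, not right, don't move

    def matches(self, message):
        # A rule matches when every defined bit equals the corresponding
        # message bit; '#' positions match either 0 or 1.
        return all(c == '#' or c == m
                   for c, m in zip(self.condition, message))


# The robot rule from the text: objects to the right and in front -> turn left.
rule = Rule("###11", "100")

print(rule.matches("00111"))  # True: both defined bits are satisfied
print(rule.matches("00101"))  # False: the front-object bit is not set
```

Note that the don’t-care positions let one rule cover many distinct environmental messages, which is what gives the shorthand its generality.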

The next few sections provide a brief introduction to LCSs, including explanations of the various components, the order of operation, and general issues surrounding their use. This introduction is intended for readers unfamiliar with the LCS paradigm.


Figure 16.1  The basic LCS architecture.



Copyright © CRC Press LLC