PROBLEM STATEMENT
H2 optimal control models the control design as a problem of minimizing the H2 norm of the closed-loop transfer function while utilizing a state or a measurement feedback controller. The H2 optimal control problem is a deterministic setting of the linear quadratic Gaussian (LQG) control problem. LQG control theory is a powerful design tool that starts from a linear state-space model of the plant. The designer assumes properties for the disturbances and measurement noise and translates the design specifications into a quadratic performance criterion consisting of state variables and control signal inputs. The designer's objective is to minimize the performance criterion while at the same time guaranteeing closed-loop stability. This involves solving for the optimal compensator parameters, which are contained in the output feedback gain matrix. The formulation of the H2 optimal problem proceeds as follows [1].
The generalized plant of a standard control problem is given by

ẋ = Ax + B_1w + B_2u
z = C_1x + D_12u
y = C_2x + D_21w

where x is the state vector, w is the disturbance vector, u is the control vector, z is the performance vector, and y is the measurement vector. Figure 3.1 illustrates this design framework. The following is assumed.
(A, B_2) is stabilizable and (A, C_2) is detectable.
(A, B_1) is stabilizable and (A, C_1) is detectable.
D_12 has full column rank.
D_21 has full row rank.
A general compensator for the system is

ẋ_c = A_c x_c + B_c y
u = C_c x_c

where x_c is the state vector of the controller. Closing the loop using negative feedback yields the closed-loop system dynamics

d x̃/dt = Ãx̃ + B̃w
z = C̃x̃

where

x̃ = [x; x_c],  Ã = [A, −B_2C_c; B_cC_2, A_c],  B̃ = [B_1; B_cD_21],  C̃ = [C_1, −D_12C_c]
Figure 3.1 Generalized plant with general compensator.
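As a concrete sketch of the interconnection above, the closed-loop block matrices can be assembled numerically. The helper below is illustrative NumPy code, not from the text; it assumes the standard notation of the plant and compensator equations with no direct feedthrough term from u to y (D_22 = 0).

```python
import numpy as np

def close_loop(A, B1, B2, C1, C2, D12, D21, Ac, Bc, Cc):
    """Negative-feedback interconnection of the generalized plant and a
    dynamic compensator (assumes the plant has no feedthrough D22)."""
    At = np.block([[A, -B2 @ Cc],
                   [Bc @ C2, Ac]])      # closed-loop state matrix
    Bt = np.vstack([B1, Bc @ D21])      # disturbance input matrix
    Ct = np.hstack([C1, -D12 @ Cc])     # performance output matrix
    return At, Bt, Ct
```

For a first-order plant and first-order compensator, the result is the expected 2x2 state matrix with the plant dynamics on the diagonal and the feedback coupling off-diagonal.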
The set of all internally stabilizing compensators is defined as

S = { (A_c, B_c, C_c) : Ã is asymptotically stable }
For an H2 problem, the objective is to minimize the H2 norm of the closed-loop transfer function from the disturbance inputs to the performance outputs,

min ‖T_zw‖_2,   T_zw(s) = C̃(sI − Ã)⁻¹B̃
If the disturbance is modeled as white noise, the objective is

min lim_{t→∞} E[ zᵀ(t)z(t) ]
It can be shown that the cost can be expressed as [1]

J = tr(C̃PC̃ᵀ) = tr(B̃ᵀQB̃)
where

ÃP + PÃᵀ + B̃B̃ᵀ = 0
ÃᵀQ + QÃ + C̃ᵀC̃ = 0

P is the controllability grammian of (Ã, B̃) and Q is the observability grammian of (Ã, C̃).
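The equivalence of the two grammian-based cost expressions can be checked numerically with SciPy's continuous Lyapunov solver. The closed-loop data below is an arbitrary stable example chosen for illustration, not the text's system.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable closed-loop data for illustration only
At = np.array([[-1.0, 0.0],
               [0.0, -2.0]])
Bt = np.array([[1.0],
               [1.0]])
Ct = np.array([[1.0, 1.0]])

# Controllability grammian: At P + P At' + Bt Bt' = 0
P = solve_continuous_lyapunov(At, -Bt @ Bt.T)
# Observability grammian:  At' Q + Q At + Ct' Ct = 0
Q = solve_continuous_lyapunov(At.T, -Ct.T @ Ct)

J_ctrl = float(np.trace(Ct @ P @ Ct.T))  # cost via controllability grammian
J_obs = float(np.trace(Bt.T @ Q @ Bt))   # cost via observability grammian
```

Both traces yield the same H2 cost (here 17/12 ≈ 1.417), as the theory requires.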
In order to obtain the H2 optimal compensator, the Lagrangian is defined as

ℒ = tr(C̃PC̃ᵀ) + tr[ L(ÃP + PÃᵀ + B̃B̃ᵀ) ]
where L is a symmetric matrix of multipliers. Matrix gradients are taken to determine the first-order necessary conditions

∂ℒ/∂L = 0,  ∂ℒ/∂P = 0,  ∂ℒ/∂A_c = 0,  ∂ℒ/∂B_c = 0,  ∂ℒ/∂C_c = 0
Computation of an H2 optimal controller for the general compensator requires the simultaneous solution of five coupled equations. This process becomes computationally expensive, and the problem is over-parametrized with such a compensator. To avoid the problem of over-parametrization, either a controller or observer canonical form can be imposed on the compensator structure so that the number of parameters is reduced to its minimal number.
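The parameter reduction from the canonical form can be made concrete: an nth-order strictly proper SISO compensator in controller canonical form is fixed entirely by its 2n transfer-function coefficients, so A_c carries no free entries beyond its last row. The helper below is a hypothetical illustration, not code from the text.

```python
import numpy as np

def controller_canonical(den, num):
    """Build (Ac, Bc, Cc) in controller canonical form for the strictly
    proper compensator num(s)/den(s), with den monic of degree n.
    den = [a_{n-1}, ..., a_1, a_0], num = [b_{n-1}, ..., b_1, b_0]."""
    n = len(den)
    Ac = np.zeros((n, n))
    Ac[:-1, 1:] = np.eye(n - 1)          # superdiagonal of ones
    Ac[-1, :] = -np.asarray(den)[::-1]   # last row: -a_0, ..., -a_{n-1}
    Bc = np.zeros((n, 1))
    Bc[-1, 0] = 1.0                      # input enters the last state
    Cc = np.asarray(num, float)[::-1].reshape(1, n)  # numerator coefficients
    return Ac, Bc, Cc
```

Only the 2n entries of `den` and `num` are free design parameters, versus the n² + 2n entries of an unstructured (A_c, B_c, C_c) triple.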
The resulting augmented system defines a static gain output feedback problem in which the compensator is represented by a minimal number of free parameters in the design matrix, G. This augmented system is shown in Figure 3.2. The closed-loop system is given by

d x̄/dt = (Ā − B̄GC̄)x̄ + D̄w
z = (Ē − F̄GC̄)x̄

where x̄ is the augmented state vector and Ā, B̄, C̄, D̄, Ē, F̄ are the corresponding augmented system matrices.
Figure 3.2 Augmented system with compensator in controller canonical form.
The optimal compensator design for this H2 optimal controller problem is obtained by finding the compensator parameters in the output feedback gain matrix, G, which minimizes the cost function of Equation (3.15). The crux of this chapter is to use a GA to find the optimal controller gain matrix, G, for the H2 compensator synthesis problem. For this study, the H2 optimal problem will be solved for a four-disk system (see Figure 3.3).
MOTIVATION FOR USING GENETIC ALGORITHMS
Numerous techniques have been developed to synthesize H2 controllers, such as Newton's method [4] and homotopy algorithms [1]. However, these methods have several limitations that keep them from optimal performance: long run times, a dependence on stable initial guesses close to the optimal design, and a dependence on derivative information. GAs are not restricted by these limitations; therefore, a GA will be used to acquire a more robust, and perhaps more efficient, controller design.
GAs are search algorithms that combine a survival-of-the-fittest approach with a structured yet randomized information exchange. This combination provides a balance between exploration of the search space and exploitation of successful solutions. A GA differs from traditional search methods in three main ways [5]:
GAs require the parameter set of the optimization problem to be coded as a finite string of bits. Since a population of these strings is considered simultaneously, the chance of settling on a false peak in a multimodal search space is reduced compared with methods that move from a single point to a single point. The coding similarities found in each population are used to differentiate good solutions from bad solutions.
The use of payoff information gives each string a fitness value without reliance on any other auxiliary information. The fitness value for a particular string is obtained from an objective, or fitness, function that dictates that string's "goodness" as a solution in the search space.
These differences from traditional methods allow a GA to perform well in the discontinuous, vastly multimodal, and noisy search spaces of H2 controller synthesis.
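The ingredients described above (bit-string coding, fitness from a payoff function, population-based search) can be sketched in a few dozen lines. The example below is a toy stand-in for the chapter's problem, not its actual four-disk synthesis: a binary-coded GA tunes a single feedback gain g for a hypothetical scalar plant ẋ = x + w + u with u = −gx, whose H2 cost works out analytically to J(g) = (1 + g²)/(2(g − 1)) for g > 1, minimized at g = 1 + √2 ≈ 2.414. All names and parameter choices are illustrative.

```python
import random

BITS, LO, HI = 16, 0.0, 5.0  # 16-bit coding of a gain g in [0, 5]

def h2_cost(g):
    """Closed-loop H2 cost of the toy plant; heavy penalty if unstable."""
    if g <= 1.0:
        return 1e6           # closed loop xdot = (1 - g)x is unstable
    return (1.0 + g * g) / (2.0 * (g - 1.0))

def decode(bits):
    """Map a bit string to a real gain in [LO, HI]."""
    return LO + int("".join(map(str, bits)), 2) / (2**BITS - 1) * (HI - LO)

def tournament(pop, fits, rng, k=3):
    """Return a copy of the fittest of k randomly chosen strings."""
    i = min(rng.sample(range(len(pop)), k), key=lambda j: fits[j])
    return pop[i][:]

def ga(pop_size=30, gens=80, pc=0.8, pm=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    best, best_cost = pop[0][:], float("inf")
    for _ in range(gens):
        fits = [h2_cost(decode(ind)) for ind in pop]
        for ind, f in zip(pop, fits):
            if f < best_cost:
                best, best_cost = ind[:], f
        nxt = [best[:]]                      # elitism: keep the best-so-far
        while len(nxt) < pop_size:
            a, b = tournament(pop, fits, rng), tournament(pop, fits, rng)
            if rng.random() < pc:            # one-point crossover
                cut = rng.randrange(1, BITS)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(BITS):        # bit-flip mutation
                    if rng.random() < pm:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return decode(best), best_cost
```

Running `ga()` converges near the analytical optimum without derivative information or a stabilizing initial guess; unstable gains are simply penalized by the fitness function.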