

Chapter 2
Image-Calibration Transformation Matrix Solution Using a Genetic Algorithm

Thomas P. Dickens

The Boeing Company
e-mail: thomas.p.dickens@boeing.com

ABSTRACT

To aid in the quantitative analysis of a pair of simultaneously taken images, one image can be transformed into the image-space of the other. Analysis can then be done using the same pixel region from both images. Known points of interest are located on both images as a set of reference marker locations. Because the images were taken at the same time, of the same item of interest, and from approximately the same physical location, a simple 4×4 transformation matrix can be used to map the markers from one image to the other. This chapter outlines the development and implementation of a genetic algorithm to find an acceptable transformation matrix for a given pair of such images. This research yielded insights into using a GA in a time-constrained environment, techniques for reducing the effective GA search space, and advanced matrix GA operators.

INTRODUCTION

The author works for the Boeing Company in Aerodynamics Computing, specializing in geometry, graphics, and scientific visualization. One assignment involved working with pairs of digital images of a model, I1 and I2, where the images in an image pair are taken with different optical filters to view different characteristics of the model. A set of registration targets is placed upon the model to assist in downstream image manipulation. This set of targets is located within each image in its own image-space as shown in Figure 2.1.

Currently, an image-calibration program is used which transforms the second image into the “space” of the first image, S1, creating a new image I2′. Statistical operations can then be done on the image pair I1 and I2′ as simple pixel-based operations in the 2-D image space, since both images now share the same mapping (S1 == S2′). The approach used in the existing image-calibration program is to take the registration targets in image I2, process them through a 4×4 transformation matrix T21, and end up with the targets in the space of I1. Once this matrix is “found”, each pixel in the image can then be transformed into the I1 image space as shown in Figure 2.2 (with oversampling and jittering to avoid aliasing artifacts).


Figure 2.1  Finding transformation matrix T21 for targets in image 2 to the space of image 1.


Figure 2.2  Transforming image 2 through T21 into the space of image 1.
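The mapping described above can be sketched as follows. A 2-D target (x, y) is carried through the 4×4 matrix as the homogeneous point (x, y, 0, 1) and projected back to 2-D; the matrix values and the test point below are made-up placeholders (identity plus a translation), since the real T21 is precisely the unknown the calibration search must find:

```python
def transform_point(T, x, y):
    """Carry a 2-D point through a 4x4 matrix as (x, y, 0, 1) and
    project back to 2-D by dividing out the homogeneous w."""
    p = (x, y, 0.0, 1.0)
    out = [sum(T[r][c] * p[c] for c in range(4)) for r in range(4)]
    return out[0] / out[3], out[1] / out[3]

# Hypothetical stand-in for T21: identity plus a translation of (10, 5).
T21 = [[1.0, 0.0, 0.0, 10.0],
       [0.0, 1.0, 0.0,  5.0],
       [0.0, 0.0, 1.0,  0.0],
       [0.0, 0.0, 0.0,  1.0]]

print(transform_point(T21, 2.0, 3.0))  # -> (12.0, 8.0)
```

Applying this mapping to every target in S2 yields the set S2′, whose distance from the targets in S1 measures how good a candidate matrix is.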

To find the T21 matrix, the image-calibration code calculates an initial guess from the 2-D rotation, scaling, and translation from S2 to S1 for two widely separated targets. However, this is not a simple 2-D problem, hence the need for a 3-D transformation matrix. The code then cycles over all sixteen values in T21, perturbing them slightly while looking for a better “fit” from S2 to S1. The program monitors the sum of the squared distances from the targets in S2′ to the targets in S1. This is analogous to physically jiggling a part to find the best-fit orientation. The program iterates until the fitness is within a given tolerance, or until a maximum number of cycles has been tried. Within this iteration logic an “escape” factor of 0.9995 is used to allow the method to escape from local minima. The current image-calibration code, diagrammed in Figure 2.3, is in production use and works well.


Figure 2.3  Current non-GA block diagram.
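The fitness measure and perturbation cycle described above can be sketched roughly as follows. The step size, cycle count, and the exact way the escape factor enters the acceptance test are illustrative assumptions, not the production code’s values:

```python
def fitness(T, s2, s1):
    """Sum of squared 2-D distances between the S2 targets mapped
    through T and the matching S1 targets (lower is better)."""
    total = 0.0
    for (x2, y2), (x1, y1) in zip(s2, s1):
        # Map (x2, y2, 0, 1) through the 4x4 matrix; z drops out in 2-D.
        w = T[3][0] * x2 + T[3][1] * y2 + T[3][3]
        if abs(w) < 1e-12:                 # degenerate matrix: reject
            return float("inf")
        tx = (T[0][0] * x2 + T[0][1] * y2 + T[0][3]) / w
        ty = (T[1][0] * x2 + T[1][1] * y2 + T[1][3]) / w
        total += (tx - x1) ** 2 + (ty - y1) ** 2
    return total

def perturb_entry_search(T, s2, s1, step=0.1, cycles=50, escape=0.9995):
    """Cycle over the sixteen entries of T, nudging each by +/-step and
    keeping a change when its fitness, scaled by the escape factor,
    beats the best seen -- a small uphill tolerance that lets the
    search climb out of local minima."""
    best = fitness(T, s2, s1)
    for _ in range(cycles):
        for r in range(4):
            for c in range(4):
                for delta in (step, -step):
                    T[r][c] += delta
                    f = fitness(T, s2, s1)
                    if f * escape < best:
                        best = f           # keep the move
                    else:
                        T[r][c] -= delta   # revert the nudge
    return best
```

For example, starting from the identity matrix with S1 shifted from S2 by a pure translation, the loop steadily reduces the sum-of-squares fitness by absorbing the shift into the translation entries.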

INTRODUCTION OF A GENETIC ALGORITHM

In this study, a genetic algorithm (GA) is used to find a suitable transformation matrix T21 for the image transformation problem. As seen in Figure 2.4, the fitness function needed for the GA, as well as the initial-matrix calculation algorithm, is already well defined in the current iterative method. The goal of the GA implementation is to find a T21 which is as good as, or better than, that found by the current method, but in less time. The current method takes an average of 55 seconds to find a suitable mapping when run on an SGI Indigo2 Extreme. In production use, hundreds of images will be transformed regularly for statistical analysis.


Figure 2.4  GA-based block diagram.
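A minimal GA of this shape might look like the sketch below, where each individual is a candidate T21 encoded as a flat 16-gene vector (row-major), seeded by jittering the initial-guess matrix. The operator choices here (tournament selection, uniform crossover, Gaussian mutation, elitism) and all parameter values are assumptions for illustration, not the chapter’s actual implementation:

```python
import random

def fitness(genes, s2, s1):
    """Sum of squared distances from the S2 targets, mapped through the
    4x4 matrix encoded as 16 row-major genes, to the S1 targets."""
    total = 0.0
    for (x2, y2), (x1, y1) in zip(s2, s1):
        w = genes[12] * x2 + genes[13] * y2 + genes[15]
        if abs(w) < 1e-12:                  # degenerate matrix: reject
            return float("inf")
        tx = (genes[0] * x2 + genes[1] * y2 + genes[3]) / w
        ty = (genes[4] * x2 + genes[5] * y2 + genes[7]) / w
        total += (tx - x1) ** 2 + (ty - y1) ** 2
    return total

def run_ga(seed_genes, s2, s1, pop_size=40, generations=200,
           sigma=0.05, mut_rate=0.2):
    """Minimal GA: population seeded around the initial-guess matrix,
    tournament selection, uniform crossover, Gaussian mutation, and
    elitism on the best individual."""
    rng = random.Random(1)
    pop = [[g + rng.gauss(0.0, sigma) for g in seed_genes]
           for _ in range(pop_size)]
    pop[0] = list(seed_genes)               # keep the seed itself
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, s2, s1))
        nxt = [scored[0]]                   # elitism
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a = min(rng.sample(pop, 3), key=lambda g: fitness(g, s2, s1))
            b = min(rng.sample(pop, 3), key=lambda g: fitness(g, s2, s1))
            # Uniform crossover, then per-gene Gaussian mutation.
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [g + rng.gauss(0.0, sigma) if rng.random() < mut_rate
                     else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda g: fitness(g, s2, s1))
```

Because the fitness function and the initial guess already exist in the iterative method, only the search strategy changes; seeding the population near the initial guess is one way to shrink the effective search space the GA must cover.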



Copyright © CRC Press LLC