dominguez_cecilia_image-processing-manual
Image Processing
Introduction
In an experiment, several images were taken to document the aperture field and what
occurred within it. A water solution containing sand was pumped into the fractured cell,
and the system was oscillated to simulate the effect of an earthquake on a fractured
aquifer. A sequence of 100 images was taken. The camera was also used to take images of
clear and blue dye solutions that were being pumped into the cell. The averages of these
two sets of 100 images are used to create the aperture field, following [Detwiler, et al.
2009] for obtaining the aperture field of fractured cells. The aperture field was compared
against the experimental images to determine how the fluid behaves within the given
fracture cell, or the cell's aperture. Before any image processing can occur, one must be
certain that the two sets of images are completely aligned; this is done with the Matlab
function CPSELECT.
The Problem
With each picture taken, a variety of changes to the image positioning occurred. These
may have been slight, such as those caused by the camera heating up with use, or drastic,
such as when the Velcro used to hold the camera in place had to be adjusted or the camera
had to be moved entirely. In order to analyze the images properly, and therefore measure
the exact movement of water and sand through the field at the moment each image was
taken, each image must be realigned exactly with the aperture field image.
CPSelect
Within Matlab’s Image Processing Toolbox, the command cpselect(input, base) starts the
Control Point Selection Tool, which allows the user to choose what are called control points
within two related images. Input is the name of the image that needs to be adjusted to
match the alignment of the base image. In CPSELECT terms, input is the unregistered
image and base is the original image. Once this command is run, CPSELECT opens the two
images alongside each other.
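As a minimal sketch (the file names here are placeholders, not from the experiment):

```matlab
% Load the base (reference) image and the input (unregistered) image.
base  = imread('base_image.tif');    % placeholder file name
input = imread('input_image.tif');   % placeholder file name

% Start the Control Point Selection Tool: input is the unregistered
% image, base is the reference it will be aligned to.
cpselect(input, base);
```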
Figure 1: Image result of cpselect(r_gray, Iclear_all_gray)
The user then chooses pairs of control points, which are positioned at exactly the same
point on whatever was being photographed. (This is typically a part of the picture that can
be clearly identified in both the unregistered and base images.)
Figure 2: Image result of cpselect(r_gray, Iclear_all_gray) with control points selected
The number of control point pairs chosen depends on the type of transformation the image
needs. Different transformations are chosen depending on the type of alignment needed.
The right choice is typically not obvious to the user; it is simply a matter of trying various
transformations until the proper alignment results. One example of a transformation is
nonreflective similarity, which needs only two pairs of control points and applies when
shapes in an image are not changed but the image itself must be rotated, scaled, or
translated. Similarity calls for three pairs of points and additionally allows for reflection of
the image. Another is piecewise linear, which requires four pairs and is needed when the
image appears to be distorted differently in different regions.
Table: Type of Transformation, with before and after example images.
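Once points are available, the chosen transformation is passed to cp2tform as a string. A
sketch, assuming input_points and base_points already exist in the workspace:

```matlab
% Minimum numbers of control point pairs (from the descriptions above):
%   'nonreflective similarity' : 2 pairs (rotate, scale, translate)
%   'similarity'               : 3 pairs (also allows reflection)
%   'piecewise linear'         : 4 pairs (region-dependent distortion)
tform = cp2tform(input_points, base_points, 'piecewise linear');
```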
A way to help in choosing the right transformation:
The user also has the option of testing cpselect’s ability to predict points by using the
control point prediction button. After choosing two pairs of points, the user can click the
“control point prediction” button, choose a point (on either the base or the input image),
and see what point cpselect predicts as the corresponding location of the chosen point. If
cpselect is unable to correctly predict the other point of the pair, the user knows to choose
another pair, try again, and so on until cpselect is able to choose the correct point.
Knowing how many pairs are needed before cpselect can properly predict points indicates
how many points are needed for a transformation, and will therefore help narrow down the
transformations to choose from.
(This may not always work, but is a good resource to try).
After the points have been chosen, CPSELECT returns input_points and base_points. The
user may use these points to finish the image adjustment, or, depending on the images, a
better option can be to use cpcorr to compensate for error between the points chosen
within a pair.
To correlate the points, the following syntax is used:

input_pts_adjusted = cpcorr(input_points, base_points, input_image_name(:,:,1), base_image_name);

The remaining code is:

mytform = cp2tform(input_pts_adjusted, base_points, transformation_type);
registered_image = imtransform(input_image_name, mytform); % look into different types of correlations

(Note: by default, imtransform sizes its output to contain the entire transformed image, so
the registered image is not necessarily the same size as the input image.)
* If cpcorr is not used, for example if the images will not allow for the points to be
correlated, then input_points is used instead of input_pts_adjusted. For example, cpcorr
could not be used for the Aperture Field Image because, once converted to grayscale, the
Aperture Field Image appears reversed from the other images: where the aperture field
appears white, the input images appear black, and vice versa. cpcorr “uses normalized
cross-correlation” to adjust the input and base points selected by the user.
* The original base image can now be compared to the new “registered” image.
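In that fallback case, the alignment sketch reduces to the following, using the raw
cpselect points directly (variable names as in the examples above):

```matlab
% Fallback when cpcorr cannot correlate the points (e.g., the aperture
% field image is intensity-reversed relative to the input images):
% pass the points from cpselect straight to cp2tform.
mytform = cp2tform(input_points, base_points, 'piecewise linear');
registered_image = imtransform(input_image, mytform);
```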
The transformation that best aligned the image we were trying to register was “piecewise
linear”. To compare how well the transformations worked, impixelinfo was used to
determine the pixel coordinates of each of the corners of a border, and those of a long
needle in the field, in the experimental image versus the aperture field before and after
alignment.
Example of using impixelinfo in Matlab:

imagesc(registered_image)
impixelinfo % hover over the image to read off pixel coordinates
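To compare coordinates before and after alignment, both images can be displayed with
impixelinfo attached (a sketch; variable names follow the examples above):

```matlab
% Show the base image and the registered image in separate figures;
% hover over matching features (border corners, the needle) and compare
% the pixel coordinates reported by impixelinfo.
figure; imagesc(base_image);       impixelinfo
figure; imagesc(registered_image); impixelinfo
```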
Aligning Images to Aperture Field
The images from the Flow Sand Oscillation file are aligned to the aperture field image
created. However, because of the color scale of the images, they do not appear in
CPSELECT (see Figure 3 below). The images are therefore converted to an appropriate
grayscale; for these images, the function mat2gray was used. CPSELECT is run with the
new gray image, and the resulting points are used to align every image in the file.
Figure 3: Image result of cpselect(r,Iclear_all)
Example: converting image “r” to grayscale image “r_gray”
r_gray = mat2gray(r);
The same is done for Iclear_all, and cpselect(r_gray, Iclear_all_gray) gives Figure 1 above.
mat2gray is one of several functions that properly scale images when converting between
classes and types. mat2gray accepts a numeric matrix (here of class double) and returns an
image of class double scaled to the range [0,1].
For more information, or if a different function is needed, refer to the “Digital Image
Processing Using Matlab” (Gonzalez, Woods, Eddins) white binder, or search “converting
between image classes” under Matlab’s Help tool.
Algorithm
1. Load base image and image(s) to be registered
2. Run cpselect:
cpselect(input_image, base_image)
a. Convert to grayscale if needed:
cpselect(input_gray, base_gray)
b. Choose points
3. Save the exported input and base points
4. Align images (example code):

for n = a:b
    file = char(FL(n));
    FullFileName = ['/Users/Lab/JeanElkhoury/Experiment_1/Exp_20110330_FlowSandOscillations/' file];
    im = imread(FullFileName);
    r = double(squeeze(im(:,:,1)));   % first (red) channel
    r_gray = mat2gray(r);
    % correlate points to correct error in the points chosen
    input_pts_adj = cpcorr(input_pts_r_gray_4pair, base_pts_Iclear_all_gray_4pair, r_gray, Iclear_all_gray);
    mytform = cp2tform(input_pts_adj, base_pts_Iclear_all_gray_4pair, 'piecewise linear');
    registered = imtransform(r_gray, mytform);
end

5. Check whether the images have been properly aligned (impixelinfo):
imagesc(registered_image)
impixelinfo
6. Save registered images
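Step 6 might be sketched as follows, inside the loop from step 4 (the output folder name is
an assumption, not from the original notes):

```matlab
% Hypothetical sketch of step 6: write each registered image to disk.
% 'registered' is the double image in [0,1] returned by imtransform;
% imwrite scales double images in [0,1] to uint8 automatically.
outdir = 'registered_images';                 % assumed output folder
if ~exist(outdir, 'dir'); mkdir(outdir); end
imwrite(registered, fullfile(outdir, ['registered_' file]));
```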