Comparison of Image Restoration Techniques for
Removing Noise
A Senior Project by Tuyen Pham
Supervised by Dr. Erin Pearse
California Polytechnic State University,
San Luis Obispo
Advisor Signature:
Chair Signature:
Comparison of Image Restoration Techniques for Removing
Noise
Tuyen Pham Dr. Erin Pearse
January 9, 2017
Abstract
This paper presents a comparison between using Wavelets, Markov Random Fields, and
Iterated Geometric Harmonics to detect noise within images and correct it. It gives a brief
overview of each of the topics and how they are used within image reconstruction, then
explains the process we used to test the methods and compare the results.
Contents
1 Introduction
2 Iterated Geometric Harmonics
3 Markov Random Fields
4 Wavelet Decomposition
5 Tests
6 Results and Discussion
7 Code and Usage
8 References
1 Introduction
Problems in computer vision and image analysis often involve noisy data, which most likely makes
exact solutions impossible. There are several strategies for removing noise from images. In this
project we will be comparing Iterated Geometric Harmonics (IGH), Wavelet methods, and Markov
Random Field methods. Within images, noise is seen as incorrect values rather than missing
values. This creates issues when attempting to correct data with Iterated Geometric Harmonics,
as described in the subsequent section. To remedy this, we compare how well wavelet methods
and MRF methods can be used to detect noise within images. We detect noise by correcting the
image with each respective method, then comparing the original noisy image to the reconstructed
image to isolate differences between the damaged and repaired pixels. Larger differences give us the
location of the noise within the noisy file. For compatibility with IGH, we can then remove those
pixels, done in MatLab by replacing their original values with NaN values. Once holes have been
punched out of our image, we can run the new image with legitimately missing values through IGH
to compare how well each of the three methods can correct noise.
Generally, data is treated as a collection of vectors. Data is represented as an n × p matrix, in
which each of the n rows is a vector of length p. We call rows records, and columns parameters
or coordinates. For images, we take values from the original 2-dimensional image, of size n × m,
and rearrange the information into one long vector of length n · m. For example, an image of
size 112 × 92 pixels would be organized into a vector v ∈ R^{10,304}, and a collection of 36 n × m images
would be stored as a 36 × 10,304 matrix.
To compare two images, image_1 and image_2, each with width n and height m, we will define a
distance between the two images as

\[
\| \mathrm{image}_1 - \mathrm{image}_2 \| = \frac{\sqrt{\sum_{j=1}^{n \cdot m} (a_j - b_j)^2}}{\sqrt{n^2 + m^2}} \tag{1}
\]

where a_j is the j-th entry of image_1, and b_j is similarly defined for image_2. This will be our method
for measuring how well each method reconstructed an image, by comparing it to an original without
noise.
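To make the metric concrete, here is a minimal MatLab sketch of (1); the function name imageDistance is ours for illustration (the actual comparison helper is part of the code described in Section 7):

    function d = imageDistance(image1, image2)
    % Distance metric (1): Euclidean norm of the pixel-wise difference,
    % normalized by sqrt(n^2 + m^2).
    [n, m] = size(image1);
    a = double(image1(:));   % flatten the n-by-m image into a vector of length n*m
    b = double(image2(:));
    d = sqrt(sum((a - b).^2)) / sqrt(n^2 + m^2);
    end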
The strategy for comparing how well Markov Random Fields and Wavelet Decomposition methods
repair and locate noise within an image is to artificially place noise within an image.
Specifically, we inject noise into an image via one of three methods: MatLab's function
imnoise(), a function from Dr. Erin Pearse, and another from Orchard [3]. In the examples below,
the Gaussian (normal) distribution is used for generating the noise.
After artificially adding noise, we separately use Wavelet Decomposition and Markov Random
Fields to repair two copies of the noisy image, and then compare the repaired images to the noisy
image to isolate where each method made corrections. Once we have two separate sets of detected
noise, one located by wavelet methods and another by MRF methods, we replace the flagged values
in copies of the noisy image with NaN and run the copies through our Iterated Geometric Harmonics
function to reconstruct the images. Then we have two images that we can compare to the original
using our metric, (1).
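As a minimal sketch of this hole-punching step in MatLab (the variable names are illustrative; repairedImg stands for the wavelet- or MRF-corrected copy and delta for the threshold fraction described in Section 7):

    diffImg = abs(double(noisyImg) - double(repairedImg));  % where the method made corrections
    mask = diffImg > delta * max(diffImg(:));   % large corrections flag suspected noise
    holedImg = double(noisyImg);
    holedImg(mask) = NaN;                       % punch out suspected noise for IGH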
2 Iterated Geometric Harmonics
Since conventional statistical software does not fare well with missing data, incomplete data points
traditionally were either discarded or had their missing values artificially filled in. Discarding data
is unfavorable, as it can bias the remaining data or leave too little to analyze. Iterated Geometric
Harmonics (IGH) is a method for imputing missing data for scientific/statistical analysis, so that we
don't have to discard any data. Geometric harmonics uses the geometry of the dataset to create an
extension of the data. The geometry of the dataset depends on a measure, defined by the analyst,
that discerns similarities between data points.
To analyze the structure of the dataset, which is typically nonlinear, we treat it as a graph
network with vertices corresponding to points in the dataset. Each vertex is connected to every
other vertex by an edge with a weight defined in terms of a symmetric and nonnegative kernel
function. This allows us to represent the graph as an n × n matrix K, where each entry K(x, y) gives
the weight of the edge connecting the vertices x and y. The nonlinear structure of the dataset is
encoded by the kernel function as a “similarity” between two vertices, where K(x, y) ≫ 0 implies
that x and y are very similar, while K(x, y) ≈ 0 indicates a large difference between x and y. In
this project, we use the positive semidefinite Gaussian kernel:
\[
K(x, y) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{\|x - y\|_2^2}{2\sigma^2} \right)
\]

where the ℓ²-norm is the usual norm on vectors,

\[
\|x - y\|_2 = \left( \sum_{k=1}^{p} |x_k - y_k|^2 \right)^{1/2}
\]
As per the introduction, we consider an n × p matrix representing n data points, where each data
point is a vector x ∈ R^p. Geometric harmonics imputes one column at a time, so the “iterated”
portion of IGH comes from the fact that we update the matrix after each column is imputed and
iterate over all columns in the matrix. In other words, let x_j be the j-th column of the data matrix.
We can consider the values within x_j to be the results of a function of the other columns:

\[
x_j(i) = x_{i,j} = f(x_{i,1}, x_{i,2}, \cdots, x_{i,j-1}, x_{i,j+1}, \cdots, x_{i,p})
\]

where i ∈ Γ, the subset of rows where the j-th entry is not missing. Specifically, we have f, a
function defined on Γ ⊂ X, that we will extend to some function F defined on all of X.
Geometric harmonics uses a form of the Nyström method for approximating integral equations
by subsampling, i.e., discretizing the integral. Geometric harmonics applies the Nyström method
to collect eigendata from the solution of the eigenvalue integral equation defined on the subset of
our data where the values are known. We denote this subset Γ. The solutions of the eigen equation
Kψ = λψ are the pairs (ψ_j)_{j=1}^m, (λ_j)_{j=1}^m satisfying the entrywise equations

\[
\lambda_j \psi_j(x) = \sum_{y \in \Gamma} k(x, y)\, \psi_j(y), \qquad j = 1, \ldots, m
\]

From here, using the eigendata, the geometric harmonics are constructed:

\[
\Psi_j(x) = \frac{1}{\lambda_j} \sum_{y \in \Gamma} k(x, y)\, \psi_j(y)
\]
Here, Ψ_j(x) = ψ_j(x) when x ∈ Γ, but Ψ_j is also defined for x ∈ (X − Γ). Next we can construct an
extension of f : Γ → Y,

\[
F(x) = \sum_{l=1}^{m} \langle x_j, \psi_l \rangle_\Gamma\, \Psi_l(x)
\]

where

\[
\langle x_j, \psi_l \rangle_\Gamma = \sum_{x \in \Gamma} x_j(x)\, \psi_l(x)
\]
This function works to fill in missing values within a dataset, but since noise within images is
not considered missing data, but rather incorrect data, IGH is not completely suited towards the
problem of image reconstruction. To remedy this, we will use Wavelet Decomposition and Markov
Random Fields to locate and remove damaged data to fulfill proper conditions for IGH to run.
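As an illustration of one extension step, the following MatLab sketch implements the formulas above for a single column j; it assumes X is the n × p data matrix with all other columns already filled (as in the iterated scheme), known is a logical index for the rows Γ where column j is observed, and pdist2 comes from the Statistics and Machine Learning Toolbox. This is a sketch of the math, not Dr. Pearse's actual IGH implementation.

    sigma = 10;  m = 20;                    % kernel width and number of eigenpairs
    D = pdist2(X(known,:), X(known,:));     % pairwise distances on Gamma
    K = exp(-D.^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
    [Psi, Lambda] = eigs(K, m);             % eigendata (psi_j, lambda_j) on Gamma
    Dext = pdist2(X(~known,:), X(known,:)); % distances from the missing rows to Gamma
    Kext = exp(-Dext.^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
    PsiExt = (Kext * Psi) / Lambda;         % Psi_j(x) = (1/lambda_j) sum_y k(x,y) psi_j(y)
    coeffs = Psi' * X(known, j);            % inner products <x_j, psi_l>_Gamma
    X(~known, j) = PsiExt * coeffs;         % F(x): impute the missing entries of column j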
3 Markov Random Fields
Markov Random Fields inherit their name from the Markov property, meaning that the process
satisfies a memorylessness property within a stochastic process. A Markov Random Field is a graphical
model of a joint probability distribution, described by an undirected graph. In this graph, nodes
represent random variables and edges denote conditional dependence relationships between the
connected nodes. The model is said to have the Markov property because the values of a given
random variable, z_i, are independent of all other random variables, given the values of the neighbors
of z_i in the graph.
Figure 1: Given the gray nodes, the black node z_i is conditionally independent of all other nodes
This method of analysis lends itself well to image analysis and computer vision because each
pixel value usually depends strongly on its neighboring pixels while having only weak correlations
with pixels that are further away. Similarly to a Markov Chain being indexed by time, where a state
Yj depends only on Yj−1, Markov Random Field optimization in image analysis is indexed by spatial
variables, so zj can depend on more than one other random variable, but is limited to neighboring
nodes. MRF determines the state of a pixel based on two potential functions, corresponding to the
neighboring estimated pixel values and the noisy value. Markov Random Fields have proven to be
effective in image reconstruction and restoration, segmentation, and edge detection.
The following figure is an example of an MRF applied to image restoration, where each
black node is an unknown true value for the random variable and the gray nodes denote the original
noisy data. The edges, both solid and dotted, show that a random variable should depend on both
the noisy pixel value and the estimated neighboring values.
Figure 2: Representation of a partial Markov Random Field within image restoration
The optimization portion of Markov Random Field optimization is the method of determining
when to stop updating the current pixel value. Maximizing the joint probability over every node
of the graph is equivalent to minimizing an energy function, which is determined by weights placed
on the original pixel value and the neighboring pixel values. This means that there is a trade off
between smoothing (regularization) and accuracy (matching the original data). This optimization
can be done through numerical methods, such as gradient descent methods, or through techniques
designed specifically for MRF optimization.
More simply, updating a pixel value in a noisy image depends on its neighbors, and on how drastic
the proposed change is. Within MRF methods, each problem has its own set of energy functions,
and different problems often have different standard energy functions. In this project, we used the
standard energy equations for image restoration given by Orchard [3].
In the energy equations, d_k is the observed value at data node k, and x_k is the proposed value
for node k. V_k is the energy between the observed value and the proposed value, with a prior
belief that the image is corrupted with Gaussian noise of variance σ². Since we are attempting to
minimize the total energy, we can see that large differences are discouraged. The second equation,
V_{j,i}, is the energy between neighboring nodes and has constants β and γ.

\[
V_k = \frac{(x_k - d_k)^2}{2\sigma^2}
\qquad
V_{j,i} = \gamma \min\!\left( (x_j - x_i)^2,\; \beta \right), \quad x_j \sim x_i
\]

Here x_j ∼ x_i denotes that we only compare nodes neighboring x_j. V_{j,i} penalizes large differences
between neighboring nodes, up to a value β. This penalty is biased towards totally smooth images;
since we want to preserve edges, and to account for the trade-off stated above, we introduce γ to
scale the penalty down, which allows for an appropriate scaling with respect to V_k.
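As a sketch of how these energies drive an update, the following MatLab fragment performs one ICM-style pass in which each interior pixel is set to the greyscale value minimizing V_k plus its neighbor energies; the loop structure is our illustration, not Orchard's actual mrf() implementation.

    sigma2 = 100; beta = 200; gamma = 0.009;   % settings as in Figure 3
    d = double(noisy);  x = d;                 % observed values and current estimate
    [n, m] = size(x);
    vals = 0:255;                              % candidate greyscale values
    for i = 2:n-1
        for j = 2:m-1
            Vk = (vals - d(i,j)).^2 / (2*sigma2);   % data term
            Vn = zeros(size(vals));                 % neighbor term
            for nb = [x(i-1,j) x(i+1,j) x(i,j-1) x(i,j+1)]
                Vn = Vn + gamma * min((vals - nb).^2, beta);   % truncated quadratic
            end
            [~, best] = min(Vk + Vn);
            x(i,j) = vals(best);
        end
    end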
Figure 3: Markov Random Field reconstruction example. Settings: covariance 100, β = 200, γ = 0.009, 10 iterations
4 Wavelet Decomposition
A wavelet family is a family of oscillatory and compactly supported functions that form an
orthonormal basis of L²(X). Families of wavelets, and decomposition through projection onto their
members, work similarly to Fourier decomposition. In comparison to a Fourier series, wavelets descend
from a ‘mother wavelet’, and consequent ‘daughter wavelets’ are formed in tiers. This construction
can be done recursively and allows for a quicker method of decomposition with nice properties.
Wavelets in image analysis fall under the domain of multiresolution analysis of the Lebesgue
space L²(ℝ) and consist of a sequence of nested subspaces

\[
\{0\} \subset \cdots \subset V_1 \subset V_0 \subset V_{-1} \subset \cdots \subset V_{-n} \subset V_{-n-1} \subset \cdots \subset L^2(\mathbb{R})
\]

that satisfy certain self-similarity relations in time/space and scale/frequency, in addition to
completeness and regularity relations.
Here, self-similarity in time is the condition that each subspace V_k is invariant under shifts by
integer multiples of 2^k; that is, for each f ∈ V_k and m ∈ ℤ, the function g defined by g(x) = f(x − m2^k)
is also contained in V_k.
To satisfy self-similarity in scale, we have that for V_k ⊂ V_l with k > l, for each f ∈ V_k there
exists g ∈ V_l with g(x) = f(2^{k−l}x) for all x ∈ ℝ. This means that for each wavelet in V_k, there is
a similar wavelet in V_l scaled down by a factor of 2^{k−l}.
From these properties, we have that in the sequence of subspaces, for k > l the resolution 2^l of the
l-th subspace is higher than the resolution 2^k of the k-th subspace.
The regularity properties are what allow us to generate the functions in our basis from the
scaling functions, or mother wavelets, as previously noted. The final condition, completeness,
calls for the union of the nested subspaces to be dense in L²(ℝ) and for the subspaces not to be
redundant, i.e., their intersection should contain only the zero element.
For example, the first wavelet was the Haar wavelet, in which each member of the family is a
sequence of step functions that fluctuate between 1 and −1 on the interval [0, 1] and are 0 everywhere
else.
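Concretely, the Haar mother wavelet and its daughters can be written (in one common indexing convention on [0, 1], matching the tier/position indexing below) as

\[
\psi(x) =
\begin{cases}
1, & 0 \le x < \tfrac{1}{2},\\
-1, & \tfrac{1}{2} \le x < 1,\\
0, & \text{otherwise},
\end{cases}
\qquad
\psi_{j,k}(x) = 2^{j/2}\, \psi(2^{j}x - k), \quad k = 0, \ldots, 2^{j} - 1.
\]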
Figure 4: First 3 levels of scaling functions
A member of a wavelet family is identified by 2 integers, a level and a location. As seen in Figure
4, the daughter wavelets in the second column would be denoted V_{1,1} and V_{1,2}, denoting which tier
they are in (tier 1) and their position within that tier.
A resulting property of wavelets and this system of indexing is that the coefficients a_{j,k} of a wavelet
decomposition Σ_{j,k} a_{j,k} f_{j,k} intrinsically carry more meaning than the coefficients of a Fourier
series. In the case of images, larger coefficients imply a jump within a region of the image, which
can pertain to noise or an edge. A consequence of this property is that we can project onto only
certain basis functions within a family and cut off our projection before it is accurate enough to
encode the noise as well as the image.
Figure 5: Example of MatLab’s denoising via wavelets, with added noise capture
Our goal with wavelets is to isolate the noise, so we used wavelet reconstruction to fix the
noisy image. With the noisy image and the corrected image, we can isolate where the
wavelet method removed noise by subtracting values, similarly to the technique used for MRF noise
isolation. The larger the difference after the subtraction, the more the pixel value was changed. We
can use these values to choose a threshold above which the wavelet method is deemed to have found
noise. To exemplify wavelet reconstruction and noise capturing, we used MatLab's built-in noise
function imnoise() to damage our image.
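A sketch of this repair-and-isolate step in MatLab, assuming the Wavelet Toolbox calls behave as documented (the exact signatures and tuned values used in the project's code may differ; see Section 5 for the tuned settings):

    noisy = imnoise(src, 'gaussian', 0, 0.01);          % damage a uint8 source image
    level = 5;  wname = 'bior3.5';                      % a biorthogonal spline wavelet
    [C, S] = wavedec2(double(noisy), level, wname);     % wavelet decomposition
    thr = wthrmngr('dw2ddenoLVL', 'penalhi', C, S, 3);  % per-level denoising thresholds
    fixed = wdencmp('lvd', C, S, wname, level, thr, 's');   % soft-threshold and reconstruct
    capturedNoise = abs(double(noisy) - fixed);         % large values mark suspected noise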
Figure 6: Adding noise to an image, Wavelet reconstruction, and noise isolation within MatLab
5 Tests
Gaussian Noise
The first test examines how each method works on a single image. We used Lena with Orchard's noise
code, run with a covariance input of 100; the average distance between the noisy image and the
source image was ≈ 7.07. How well Wavelet Decomposition, Markov Random Field methods, and
IGH corrected the image was measured by which corrected image was closer to the source image.
Since wavelets and MRF are used to detect noise, they were optimized first via distance from the
original image.
For wavelets, we fixed the wavelet family to be the biorthogonal spline wavelet and modified the
number of levels and the tuning parameter. Using Orchard's noise code, the distance between the
wavelet fix and the original image was minimized at level 10 with a tuning parameter of 1. To
optimize parameters, fix each value except for one, then run tests to see whether adjustments lower
the distance from the original image. The MRF method was optimized in a similar manner, and
returned its best results with settings: prior covariance = 100, maximum difference = 256, weight
difference = .002, and iterations = 10. After optimizing wavelet reconstruction and MRF, the average
distances from the source for the wavelet-fixed and MRF-corrected images were 6.06 and 5.55,
respectively.
We first optimize the threshold for when to replace a value with NaN. On average, 0.8 of the
absolute maximum of the captured noise was the optimal threshold.
To optimize IGH, fix all values except one, and iterate through multiple candidate inputs to find
an optimal value. As seen in the following graphs, none of the changes to the settings of IGH
resulted in a reduction of noise compared to the original noisy image, and at times the result was
worse. For these tests, the difference between the noisy image and the source was 7.08, and
corrections to the image kept the distance at 7.01 or higher.
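The one-at-a-time sweep can be sketched in MatLab as follows; punchHoles, igh, and imageDistance are illustrative stand-ins for the NaN-replacement step, Dr. Pearse's IGH code, and the metric (1), so their names and signatures are assumptions.

    thresholds = 0.1:0.1:1.0;              % the single parameter being varied
    dist = zeros(size(thresholds));
    for k = 1:numel(thresholds)
        holed = punchHoles(noisy, captured, thresholds(k)); % NaN replacement at this threshold
        fixed = igh(holed, 10, 'gaussian', 1000, 1e-6);     % remaining parameters held fixed
        dist(k) = imageDistance(src, fixed);                % metric (1) against the source
    end
    plot(thresholds, dist), xlabel('threshold'), ylabel('distance to source')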
Figure 7: Changing parameters in IGH image correction, with the altered variable on the x-axis
Visual results show that the wavelet and Markov Random Field methods properly corrected the
image and properly replaced values with NaN. Iterated Geometric Harmonics corrects the image
slightly, but not as well as the wavelet or MRF methods; the test returned distances from the
source image for the IGH-wavelet correction, IGH-MRF correction, wavelet reconstruction, and MRF
reconstruction, respectively, of 7.0934, 6.9785, 6.0333, and 5.4173.
Figure 8: Comparison between the source image, the noisy image, and the 4 methods of image reconstruction
Salt & Pepper noise
Figure 9: Salt and pepper noise, with density = 0.05;
difference between noisy image and source image is 21.0959
Salt & pepper noise occurs as spikes of black or white pixels, so correcting it amounts to smoothing
out the image to remove these abrupt changes in value. For MRF, the optimal parameters were found
to be: covariance = 500, maximum difference = 256, weight on difference = 0.1, and iterations = 10.
With these parameters the difference between the MRF-corrected image and the source image is ≈ 8.8
on average. For wavelet reconstruction, the optimal settings were level = 10 with a tuning parameter
of 10, resulting in an average difference of 12.08.
For optimizing IGH, we again looked at plots of how the distance between the corrected image
and the source changed as a single parameter was varied. Using this method, we ended up with
settings: threshold = .3, iterations = 10, kernel parameter = 1000, and delta = 0.000001, which
averaged a distance of 4.7 when using wavelets to detect noise. When using Markov Random
Fields to detect noise, only 6 iterations of IGH were needed, but the average was higher than when
using wavelets: 5.2 versus 4.7. Optimization techniques were similar to those in the previous section
on reconstructing an image with Gaussian noise.
Figure 10: Finding optimal parameters using MRF detection
For the following example, the distances for the images ighwvt, ighmrf, waveletfix, and MRFfix
were 4.3, 5.2, 12.0, and 8.6, respectively.
6 Results and Discussion
When comparing the results of all four methods of image reconstruction, wavelet and MRF recon-
struction both corrected Gaussian noise better than IGH. This is most likely because Gaussian noise
creates slight perturbations across the whole source image, so when the images are corrected with
wavelets and MRF, the noise is hard to localize: the whole image was corrected, and noise was
detected somewhat uniformly. This is supported by the results of the four methods correcting Salt &
Pepper noise, where IGH did much better than the MRF and wavelet methods. In the Salt & Pepper
case, the noise added to the source image is drastic, which allows for easier detection of noise from
the differences between the noisy image and the wavelet or MRF corrections.
In the end, MRF methods on average performed better than wavelets for correcting noise.
IGH results varied depending on the type of noise added to the source image. When correcting
Gaussian noise, IGH didn't correct the image as much as the other two methods, but it fared far
better than them when reconstructing images damaged with Salt & Pepper noise. Surprisingly,
within the Salt & Pepper correction, while MRF methods produced a more accurate reconstruction
than wavelet methods, IGH corrected the image more accurately using the noise detected from the
wavelet reconstruction.
Results from this experiment depend heavily on outside code, so different results could follow
from more sophisticated methods for MRF, wavelets or IGH, as well as more accurate methods of
optimizing parameters.
7 Code and Usage
Comparison between the methods is done through the function MethodComparison().
Syntax
MethodComparison(src image, noise type, delta, saveTitle);
This function takes in an undamaged source image and artificially adds noise to it using values
hard-coded into the function. The noise options are as follows:
noise type   helper function   Description
0            imnoise()         Uses MatLab's built-in function to add Gaussian noise to the image, with mean and variance parameters preset within the function.
1            blurimage()       Uses a function provided by Orchard [3] to add Gaussian noise to the image, with an additional variance parameter preset within the function.
2            addNoise()        Uses a function provided by Dr. Pearse to add Gaussian noise to an inputted image, with probability and extremity parameters preset within the function.
3            imnoise()         Uses MatLab's built-in noise function to add Salt & Pepper noise to the image, with a set density within the function.
Description
This code relies upon Orchard's MatLab code for MRF and upon Dr. Pearse's IGH code. The
function call takes in a source image, which is assumed to be undamaged. Noise type is an integer
value of 0, 1, 2, or 3, determining the type of noise, with specifics given in the table above. The
‘delta’ parameter is the percentage of the maximum difference between the noisy image and the
repaired image used as the threshold for deciding when a pixel value counts as changed. The final
parameter is a string determining the filename under which to save the final comparison image.
The function is broken into 7 sections, with additional small helper functions after the main
body of code. The sections are broken down below. Due to the number of parameters needed by
the larger helper functions (such as MatLab's imnoise()), the parameters are hard-coded into the
function. To optimize the settings of each image reconstruction method, values were changed within
the main body of code.
Section descriptions
In section 1 of the code, we initialize values. The variables prob, noise, matlabgauss mean, matlab-
gauss var, orch covar, and sp noiseDensity are used in the helper functions that add noise to the
source image. level and sorh are used in wavelet reconstruction and are 2 of the variables to be
tuned for it; these values can be changed as preferred within the function. In section 2, using the
values from section 1, we add noise to the source image; the code then displays the comparison
between the noisy image and the source image.
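As an illustration of this initialization block (using underscored forms of the variable names above; the values are placeholders except where Section 5 states them, namely orch_covar = 100, level = 10, and the salt & pepper density of 0.05):

    prob = 0.05;  noise = 50;                        % addNoise() probability and extremity (placeholders)
    matlabgauss_mean = 0;  matlabgauss_var = 0.01;   % imnoise() Gaussian settings (placeholders)
    orch_covar = 100;                                % covariance for Orchard's noise code
    sp_noiseDensity = 0.05;                          % salt & pepper density
    level = 10;  sorh = 's';                         % wavelet level; 's' = soft thresholding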
In sections 3 and 4, we reconstruct the image using wavelets and MRF, respectively. For wavelets,
the next major parameter available to change is the tuning parameter, which is the final parameter
of the wthrmngr() function. Depending on this value, the third parameter should be changed to
‘penalhi’, ‘penalme’, or ‘penallo’. In section 4, mrf() has parameters: ‘noise covariance’, ‘maximum
allowed difference’, ‘weight difference’, and ‘number of iterations’. In both sections, we additionally
create a new image that records the differences between matching pixels.
The next 2 sections break down the process of using IGH to correct the images. Section 5 creates
copies of the noisy images with values above the threshold replaced with NaN values. The last 2
lines of section 5 provide an option to display how many pixels were changed by the set threshold,
to help with debugging or choosing parameters. Section 6 then runs Dr. Pearse's IGH code,
which has input parameters: damaged image, number of iterations, kernel type, kernel parameter,
and epsilon. Here the kernel type is fixed to be Gaussian, which fixes the kernel parameter to
represent the variance. Epsilon determines the lower cutoff for which eigenvalues are used in IGH.
Section 7 displays the numerical differences between the 4 reconstructions and the source image,
displays an image so that the user can visually compare the 4 reconstructions to the source image
and the original noisy image, and saves the visual comparison as saveTitle.
The subsequent helper functions are, in order: Orchard’s noise function, Dr. Pearse’s noise
function, and the function that compares images using the metric defined in (1).
Class Support
This function has been tested on source images of types uint8 and double. When using double-
valued source images, MatLab's imnoise() has issues adding noise without destroying the image, so
it is advised to use uint8 source images.
Example
Running the command MethodComparison(lena, 3, .6, 'example.png'); returns the vector [11.4589 12.2287 1
and saves the following image into the current MatLab directory under the label 'example.png'.
Figure 11: example.png
8 References
[1] Erin Pearse. Iterated Geometric Harmonics for Data Imputation and Reconstruction of Missing Data.
[2] Stephane S. Lafon. Diffusion Maps and Geometric Harmonics.
[3] Peter Orchard. Markov Random Field Optimization.

More Related Content

What's hot

Comparison of image segmentation
Comparison of image segmentationComparison of image segmentation
Comparison of image segmentationHaitham Ahmed
 
Image segmentation
Image segmentationImage segmentation
Image segmentationRania H
 
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...National Cheng Kung University
 
Efficiency and capability of fractal image compression with adaptive quardtre...
Efficiency and capability of fractal image compression with adaptive quardtre...Efficiency and capability of fractal image compression with adaptive quardtre...
Efficiency and capability of fractal image compression with adaptive quardtre...ijma
 
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIllustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIJMER
 
Lec11: Active Contour and Level Set for Medical Image Segmentation
Lec11: Active Contour and Level Set for Medical Image SegmentationLec11: Active Contour and Level Set for Medical Image Segmentation
Lec11: Active Contour and Level Set for Medical Image SegmentationUlaş Bağcı
 
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...Koteswar Rao Jerripothula
 
Histogram Processing
Histogram ProcessingHistogram Processing
Histogram ProcessingAmnaakhaan
 
Image segmentation
Image segmentation Image segmentation
Image segmentation Amnaakhaan
 
20140530.journal club
20140530.journal club20140530.journal club
20140530.journal clubHayaru SHOUNO
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODeditorijcres
 
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESGREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESijcseit
 
20150703.journal club
20150703.journal club20150703.journal club
20150703.journal clubHayaru SHOUNO
 
Study of Various Histogram Equalization Techniques
Study of Various Histogram Equalization TechniquesStudy of Various Histogram Equalization Techniques
Study of Various Histogram Equalization TechniquesIOSR Journals
 
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLI
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLIMETHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLI
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLIijcsit
 
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION sipij
 

What's hot (19)

Comparison of image segmentation
Comparison of image segmentationComparison of image segmentation
Comparison of image segmentation
 
Image segmentation
Image segmentationImage segmentation
Image segmentation
 
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
 
Efficiency and capability of fractal image compression with adaptive quardtre...
Efficiency and capability of fractal image compression with adaptive quardtre...Efficiency and capability of fractal image compression with adaptive quardtre...
Efficiency and capability of fractal image compression with adaptive quardtre...
 
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIllustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
 
W33123127
W33123127W33123127
W33123127
 
Lec11: Active Contour and Level Set for Medical Image Segmentation
Lec11: Active Contour and Level Set for Medical Image SegmentationLec11: Active Contour and Level Set for Medical Image Segmentation
Lec11: Active Contour and Level Set for Medical Image Segmentation
 
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...
Image segmentation using advanced fuzzy c-mean algorithm [FYP @ IITR, obtaine...
 
Histogram Processing
Histogram ProcessingHistogram Processing
Histogram Processing
 
Image segmentation
Image segmentation Image segmentation
Image segmentation
 
20140530.journal club
20140530.journal club20140530.journal club
20140530.journal club
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
 
linkd
linkdlinkd
linkd
 
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURESGREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES
 
20150703.journal club
20150703.journal club20150703.journal club
20150703.journal club
 
Study of Various Histogram Equalization Techniques
Study of Various Histogram Equalization TechniquesStudy of Various Histogram Equalization Techniques
Study of Various Histogram Equalization Techniques
 
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLI
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLIMETHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLI
METHOD FOR A SIMPLE ENCRYPTION OF IMAGES BASED ON THE CHAOTIC MAP OF BERNOULLI
 
regions
regionsregions
regions
 
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION
 

Viewers also liked

1. towards open pedagogical practices
1. towards open pedagogical practices1. towards open pedagogical practices
1. towards open pedagogical practicesAngelica Risquez
 
Eliza Boon Resume copy
Eliza Boon Resume copyEliza Boon Resume copy
Eliza Boon Resume copyEliza Boon
 
Reseña de película interstellar
Reseña de película interstellarReseña de película interstellar
Reseña de película interstellarDino Arceuxis
 
Maravillas de la técnica
Maravillas de la técnicaMaravillas de la técnica
Maravillas de la técnicaDino Arceuxis
 
Bailando nico cris
Bailando nico crisBailando nico cris
Bailando nico crisnicolacsus9
 
Firehost Webinar: Validating your Cardholder Data Envirnment
Firehost Webinar: Validating your Cardholder Data EnvirnmentFirehost Webinar: Validating your Cardholder Data Envirnment
Firehost Webinar: Validating your Cardholder Data EnvirnmentArmor
 
dfgrsNuevo presentación de microsoft office power point
dfgrsNuevo presentación de microsoft office power pointdfgrsNuevo presentación de microsoft office power point
dfgrsNuevo presentación de microsoft office power pointlusembo
 
Imagine Dragons
Imagine DragonsImagine Dragons
Imagine DragonsEnriquedpa
 
Presentación 11
Presentación 11Presentación 11
Presentación 11arecerv
 
Programa 3
Programa 3 Programa 3
Programa 3 arecerv
 
SkidWeigh ED2 Series Version 1200
SkidWeigh ED2 Series Version 1200SkidWeigh ED2 Series Version 1200
SkidWeigh ED2 Series Version 1200Ted Jurca
 

Viewers also liked (20)

1. towards open pedagogical practices
1. towards open pedagogical practices1. towards open pedagogical practices
1. towards open pedagogical practices
 
Eliza Boon Resume copy
Eliza Boon Resume copyEliza Boon Resume copy
Eliza Boon Resume copy
 
Reseña de película interstellar
Reseña de película interstellarReseña de película interstellar
Reseña de película interstellar
 
Maravillas de la técnica
Maravillas de la técnicaMaravillas de la técnica
Maravillas de la técnica
 
Foamy en quemaduras
Foamy en quemadurasFoamy en quemaduras
Foamy en quemaduras
 
Bailando nico cris
Bailando nico crisBailando nico cris
Bailando nico cris
 
Nuestras tradiciones
Nuestras tradicionesNuestras tradiciones
Nuestras tradiciones
 
Caldeira industrial
Caldeira industrialCaldeira industrial
Caldeira industrial
 
Firehost Webinar: Validating your Cardholder Data Envirnment
Firehost Webinar: Validating your Cardholder Data EnvirnmentFirehost Webinar: Validating your Cardholder Data Envirnment
Firehost Webinar: Validating your Cardholder Data Envirnment
 
Virus del chikungunya
Virus del chikungunyaVirus del chikungunya
Virus del chikungunya
 
Al-nadeem Portfolio
Al-nadeem PortfolioAl-nadeem Portfolio
Al-nadeem Portfolio
 
resume mumbai
resume mumbairesume mumbai
resume mumbai
 
dfgrsNuevo presentación de microsoft office power point
dfgrsNuevo presentación de microsoft office power pointdfgrsNuevo presentación de microsoft office power point
dfgrsNuevo presentación de microsoft office power point
 
Imagine Dragons
Imagine DragonsImagine Dragons
Imagine Dragons
 
DIAPOSITIVAS
DIAPOSITIVASDIAPOSITIVAS
DIAPOSITIVAS
 
Presentación 11
Presentación 11Presentación 11
Presentación 11
 
Premis ànim 1r i 2n la salle manlleu
Premis ànim 1r i 2n la salle manlleuPremis ànim 1r i 2n la salle manlleu
Premis ànim 1r i 2n la salle manlleu
 
Programa 3
Programa 3 Programa 3
Programa 3
 
SkidWeigh ED2 Series Version 1200
SkidWeigh ED2 Series Version 1200SkidWeigh ED2 Series Version 1200
SkidWeigh ED2 Series Version 1200
 
Asier registros
Asier registrosAsier registros
Asier registros
 

Similar to Image Processing

PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
 
Joint3DShapeMatching - a fast approach to 3D model matching using MatchALS 3...
Joint3DShapeMatching  - a fast approach to 3D model matching using MatchALS 3...Joint3DShapeMatching  - a fast approach to 3D model matching using MatchALS 3...
Joint3DShapeMatching - a fast approach to 3D model matching using MatchALS 3...Mamoon Ismail Khalid
 
A Novel Feature Extraction Scheme for Medical X-Ray Images
A Novel Feature Extraction Scheme for Medical X-Ray ImagesA Novel Feature Extraction Scheme for Medical X-Ray Images
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
 
Blind Image Seperation Using Forward Difference Method (FDM)
Blind Image Seperation Using Forward Difference Method (FDM)Blind Image Seperation Using Forward Difference Method (FDM)
Blind Image Seperation Using Forward Difference Method (FDM)sipij
 
Rigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy SystemRigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy Systeminventy
 
Image Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation ModelImage Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation ModelIJERA Editor
 
search engine for images
search engine for imagessearch engine for images
search engine for imagesAnjani
 
Paper id 24201464
Paper id 24201464Paper id 24201464
Paper id 24201464IJRAT
 
Depth estimation from stereo image pairs using block-matching
Depth estimation from stereo image pairs using block-matchingDepth estimation from stereo image pairs using block-matching
Depth estimation from stereo image pairs using block-matchingAbhranil Das
 
The International Journal of Computational Science, Information Technology an...
The International Journal of Computational Science, Information Technology an...The International Journal of Computational Science, Information Technology an...
The International Journal of Computational Science, Information Technology an...rinzindorjej
 
Satellite image compression technique
Satellite image compression techniqueSatellite image compression technique
Satellite image compression techniqueacijjournal
 
videoMotionTrackingPCA
videoMotionTrackingPCAvideoMotionTrackingPCA
videoMotionTrackingPCAKellen Betts
 
imageCorrectionLinearDiffusion
imageCorrectionLinearDiffusionimageCorrectionLinearDiffusion
imageCorrectionLinearDiffusionKellen Betts
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 

Similar to Image Processing (20)

PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
 
Joint3DShapeMatching
Joint3DShapeMatchingJoint3DShapeMatching
Joint3DShapeMatching
 
Joint3DShapeMatching - a fast approach to 3D model matching using MatchALS 3...
Joint3DShapeMatching  - a fast approach to 3D model matching using MatchALS 3...Joint3DShapeMatching  - a fast approach to 3D model matching using MatchALS 3...
Joint3DShapeMatching - a fast approach to 3D model matching using MatchALS 3...
 
A Novel Feature Extraction Scheme for Medical X-Ray Images
A Novel Feature Extraction Scheme for Medical X-Ray ImagesA Novel Feature Extraction Scheme for Medical X-Ray Images
A Novel Feature Extraction Scheme for Medical X-Ray Images
 
Blind Image Seperation Using Forward Difference Method (FDM)
Blind Image Seperation Using Forward Difference Method (FDM)Blind Image Seperation Using Forward Difference Method (FDM)
Blind Image Seperation Using Forward Difference Method (FDM)
 
Rigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy SystemRigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy System
 
Dycops2019
Dycops2019 Dycops2019
Dycops2019
 
3rd unit.pptx
3rd unit.pptx3rd unit.pptx
3rd unit.pptx
 
Image Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation ModelImage Restitution Using Non-Locally Centralized Sparse Representation Model
Image Restitution Using Non-Locally Centralized Sparse Representation Model
 
search engine for images
search engine for imagessearch engine for images
search engine for images
 
Paper id 24201464
Paper id 24201464Paper id 24201464
Paper id 24201464
 
Depth estimation from stereo image pairs using block-matching
Depth estimation from stereo image pairs using block-matchingDepth estimation from stereo image pairs using block-matching
Depth estimation from stereo image pairs using block-matching
 
2009 asilomar
2009 asilomar2009 asilomar
2009 asilomar
 
Linear algebra havard university
Linear algebra havard universityLinear algebra havard university
Linear algebra havard university
 
The International Journal of Computational Science, Information Technology an...
The International Journal of Computational Science, Information Technology an...The International Journal of Computational Science, Information Technology an...
The International Journal of Computational Science, Information Technology an...
 
Satellite image compression technique
Satellite image compression techniqueSatellite image compression technique
Satellite image compression technique
 
07 Tensor Visualization
07 Tensor Visualization07 Tensor Visualization
07 Tensor Visualization
 
videoMotionTrackingPCA
videoMotionTrackingPCAvideoMotionTrackingPCA
videoMotionTrackingPCA
 
imageCorrectionLinearDiffusion
imageCorrectionLinearDiffusionimageCorrectionLinearDiffusion
imageCorrectionLinearDiffusion
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 

Image Processing

  • 1. Comparison of Image Restoration Techniques for Removing Noise A Senior Project by Tuyen Pham Supervised by Dr. Erin Pearse California Polytechnic State University, San Luis Obispo Advisor Signature: Chair Signature:
  • 2. Comparison of Image Restoration Techniques for Removing Noise Tuyen Pham Dr. Erin Pearse January 9, 2017 Abstract This paper presents a comparison between using Wavelets, Markov Random Fields, and Iterated Geometric Harmonics in detecting noise within images and correcting it. After a brief overview of each of the topics and how they are used within image reconstruction. Afterward, this paper will explain the process we used to test the methods and compare results.
  • 3. Contents 1 Introduction 1 2 Iterated Geographic Harmonics 2 3 Markov Random Fields 3 4 Wavelet Decomposition 5 5 Tests 8 6 Results and Discussion 12 7 Code and Usage 13 8 References 15
  • 4. 1 Introduction Problems in computer vision and image analysis often involve noisy data, which most likely makes exact solutions impossible. There are several strategies for removing noise from images. In this project we will be comparing Iterated Geometric Harmonics (IGH), Wavelet methods, and Markov Random Field methods. Within images, noise is seen as incorrect values rather than missing values. This creates issues when attempting to correct data with Iterated Geometric Harmonics, as described in the subsequent section. To remedy this, we compare how well wavelet methods and MRF methods can be used to detect noise within images. We detect noise by correcting the image with each respective method, then comparing the original noisy image to the reconstructed image to isolate differences between the damaged and repaired pixels. Larger differences give us the location of the noise within the noisy file. For compatibility with IGH, we can then remove those pixels, done in MatLab by replacing their original values with NaN values. Once holes have been punched out of our image, we can run the new image with legitimately missing values through IGH to compare how well each of the three methods can correct noise. Generally, data is treated as a collection of vectors. Data is represented as an n × p matrix, in which each of the n rows is a vector of length p. We call rows records, and columns parameters or coordinates. For images, we take values from the original 2 dimensional image, with size n × m, and rearrange the information to be in one long vector of length n · m. For example, an image of size 112×92 pixels would be organized into a vector v ⊆ R10,304 , and a collection of 36 n×m images would be stored as a 36 × 10, 304 matrix. To compare two images image1, and image2, with the width n and height m, we will define a distance between two images as: image1 − image2 = n·m j=1 (aj − bj)2 √ n2 + m2 (1) Where aj is the jth entry of image1, and bj is similarly defined for image2. This will be our method for measuring how well each method reconstructed an image by comparing it to an original without noise. The strategy for comparing how well Markov Random Fields and Wavelet Decomposition meth- ods repair and locate noise within an image is tested by artificially placing noise within an im- age. Specifically, we will inject noise into an image via one of three methods: MatLab’s function imNoise(), a function from Dr. Erin Pearse, and another from Orchard[3]. In the examples below, the Gaussian (normal) Distribution is used for generating the noise. After artificially adding noise, we separately use Wavelet Decomposition and Markov Random Fields to repair two copies of the noisy image, and then compare the repaired images to the noisy image to isolate where each method made corrections. Once we have 2 separate sets of noise, one located from wavelet methods and another from MRF methods, we will replace values where noise was reported in copies of noisy image and run the copies with NaN values through our Iterated Geometric Harmonics function to reconstruct our images. Then we have 2 images that we can compare to the original using our metric, (1). 1
  • 5. 2 Iterated Geographic Harmonics Since conventional statistical software does not fare well with missing data, incomplete data points were either discarded or to be artificially fill in the missing data. Discarding data is unfavorable as it could bias the remaining data, or leave too little to analyze. Iterated Geometric Harmonics (IGH) is a method used for imputing missing data for scientific/statistical analysis so we don’t have to discard any data. Geometric harmonics uses the geometry of the dataset to create an extension of the data. The geometry of the dataset depends on a measure defined by the analyst to discern similarities within data points. To analyze the structure of the dataset, which is typically nonlinear, we treat it as a graph network with vertices corresponding to points in the dataset. Each vertex is connected to every other vertex by an edge with a weight defined in terms of a symmetric and nonnegative kernel function. This allows us to represent the graph as a n×n matrix K, where each entry K(x, y) gives the weight of the edge connecting the vertices x and y. The nonlinear structure of the dataset is encoded by the kernel function as a “similarity” between two vertices, where K(x, y) 0 implies that x and y are very similar, while K(x, y) ≈ 0 indicates a large difference between x and y. In this project, we use the positive semidefinite Gaussian kernel: K(x, y) = 1 σ √ 2π exp(− x − y 2 2 σ2 ) where the 2 -norm is given by the usual norm upon vectors x − y 2 = p k=1 | xk − yk |2 As per the introduction, we consider an n × p matrix representing n data where a data point is a vector x ⊆ Rp . Geometric harmonics imputes in one column at a time, so the iterated portion of ’IGH’ comes from the fact that we update the matrix after each column is imputed and iterate over all columns in the matrix. In other words, let xj be the jth column of the data matrix. We can consider values within xj to be results of a function depending on the other columns. xj(i) = xi,j = f(xi,1, xi,2, · · · , xi,j−1, xj,j+1, · · · , xi,p) where i ∈ Γ of the subset of rows where the jth entry is not missing. Specifically, we have f, a function defined on Γ ⊂ X, that we will extend to some function, F which is defined on all of X Geometric harmonics use a form of the Nystrom method for approximating integral equations by subsampling, i.e., discretizing the integral. Geometric harmonics applies the Nystrom method to collect eigendata from the solution of the eigenvalue integral equation defined on the subset of our data where our values are known. We denote this subset Γ. The solutions of the eigen equation kψ = λψ satisfy the entrywise equations (ψ)m j=1, (λ)m j=1 with λjψj(x) = y∈Γ k(x, y)ψj(y), for j = 1, . . . , m From here, using the eigendata, geometric harmonics are constructed Ψj(x) = 1 λj y∈Γ k(x, y)ψj(y) 2
  • 6. Here, Ψj(x) = ψj(x) when x ∈ Γ, but Ψj is also defined for x ∈ (X − Γ). Next we can construct an extension of f : Γ → Y F(x) = m l=1 xi, ψl ΓΨl(x) where xi, ψl Γ = x∈Γ xi(x)ψl(x) This function works to fill in missing values within a dataset, but since noise within images is not considered missing data, but rather incorrect data, IGH is not completely suited towards the problem of image reconstruction. To remedy this, we will use Wavelet Decomposition and Markov Random Fields to locate and remove damaged data to fulfill proper conditions for IGH to run. 3 Markov Random Fields Markov Random Fields inherits its name from the Markov Property, which means that this process satisfies a memoryless property within a stochastic process. A Markov Random Field is a graphical model of a joint probability distribution, described an undirected graph. In this graph, nodes represent random variables and edges denote conditional dependence relationships the connected nodes. This model is said to have the Markov property because the values of a given Random Variable, zi, are independent of all other random variables, given the values of the neighbors of zi in the graph. Figure 1: Given the gray nodes, the black node, zi is conditionally independent of all other nodes This method of analysis lends itself well to image analysis and computer vision because each pixel value usually depends strongly on the neighboring pixel while only having weak correlations with pixels that are further away. Similarly to a Markov Chain being indexed by time, where a state Yj depends only on Yj−1, Markov Random Field optimization in image analysis is indexed by spatial variables, so zj can depend on more than one other random variable, but is limited to neighboring nodes. MRF determines the state of a pixel based on two potential functions, corresponding to the neighboring estimated pixel values and the noisy value. Markov Random Fields have proven to be effective in image reconstruction and restoration, segmentation, and edge detection. The following figure is an example of a MRF when applied to image restoration, where each black node is an unknown true value for the random variable and the gray nodes denote the original noisy data. The edges, both solid and dotted, show that a random variable should depend on both the noisy pixel value and the estimated neighboring values. 3
  • 7. Figure 2: Representation of a partial Markov Random Field within image restoration The optimization portion of Markov Random Field optimization is the method of determining when to stop updating the current pixel value. Maximizing the joint probability over every node of the graph is equivalent to minimizing an energy function, which is determined by weights placed on the original pixel value and the neighboring pixel values. This means that there is a trade off between smoothing (regularization) and accuracy (matching the original data). This optimization can be done through numerical methods, such as gradient descent methods, or through techniques designed specifically for MRF optimization. More simply, updating the pixel value throughout an image with noise will depend on its neigh- bors, and on how drastic the proposed change is. Within MRF methods, each problem has different sets of energy functions, often with different standard energy functions between different problems. In this project, we used the standard energy equations for image restoration given by Orchard [3]. In the energy equations, dn is the current observed value at data node n, xn is the proposed value for the data node n. Vk is the energy between the observed value and the proposed value, with a prior belief that the image is corrupted with Gaussian noise with variance σ2 . Since we are attempting to minimize the total energy, we can see that large differences are discouraged. The second equation, Vi,j is the energy between neighboring nodes and has β and γ as constant values. Vk = (xk − dk)2 2σ2 Vj,i = γ min xj∼xi ((xj − xi)2 , β) Here xj ∼ xi denotes that we only check on nodes neighboring xj. Vi,j punishes large differences between nodes up to a value β. This punishment has a bias towards totally smooth images and since we want to preserve edges and accounts for the trade off stated above, we introduce γ to scale the punishment down, this allows for an appropriate scaling with respect to Vn. 4
  • 8. Figure 3: Markov Random Field reconstruction Example Settings Covariance 100, β = 200, γ = 0.009, 10 iterations 4 Wavelet Decomposition A wavelet family is a family of oscillatory and compactly supported functions that form an or- thonormal basis of L2 (X). Families of wavelets and decomposition through projection onto the members works similarly to Fourier decomposition. In comparison to a Fourier series, wavelets hail from a ‘mother wavelet’, and consequent ’daughter wavelets’ are formed in tiers. This construction can be done recursively and allows for a quicker method for decomposition with nice properties. Wavelets in image analysis fall under the domain of multiresolution analysis of the Lebesgue space L2 (R) and consist of a sequence of nested subspaces {0} ⊂ · · · ⊂ V1 ⊂ V0 ⊂ V−1 ⊂ · · · ⊂ V−n ⊂ V−n−1 ⊂ · · · ⊂ L2 (R) That satisfy certain self-similarity relations in time/space, and scale/frequency, in addition to com- pleteness and regularity relations. Here, self-similarity in time is the condition that each subspace Vk is invariant under shifts by integer multiples of 2k , that is, for each f ∈ Vk, m ∈ Z the function g defined as g(x) = f(x − m2k ) is also contained in vk To satisfy self-similarity in space, we have that for Vk ⊂ Vl, k > l, for each f ∈ Vk, ∃g ∈ Vl with ∀x ∈ R, g(x) = f(2k−l x). This means that for each wavelet in Vk, there is another similar wavelet scaled down by a factor of 2k−l in Vl. 5
  • 9. From these properties, we have that in the sequence of subspaces, for k > l the space resolution 2l of the lth subspace is higher than the resolution 2k of the kth subspace. The regularity properties are what allows us to generate functions within our basis from the scaling functions, or Mother Wavelets, as previously noted. The final condition of completeness calls for the union of the nested subspace to be dense in L2 (R) and that they are not redundant, i.e. their intersection should only be the zero element. For example, the first wavelet was the Haar Wavelet, in which each member of the family is a sequence of step functions that fluctuate between 1 and -1 on the interval [0,1], and are 0 everywhere else. Figure 4: First 3 levels of scaling functions A member of a wavelet family is identified by 2 integers, a level and a location. As seen in figure 4, the daughter wavelets in the second column would be denoted V1,1 and V1,2, denoting which tier they are in (tier 1), and which position they are within their tier. A resultant property of wavelets and this system of indexing is that coefficients aj,k of a wavelet decomposition j,k aj,kfi,j intrinsically carry more meaning compared to the coefficients of a Fourier Series. In the case of images, larger coefficients imply a jump within a region of the image, which can pertain to noise or an edge. A consequence of this property allows us to only project onto certain basis functions within a family and cut off our projection before it is accurate enough to encode the noise as well as the image. 6
Figure 5: Example of MatLab's denoising via wavelets, with added noise capture

Our goal with wavelets is to isolate the noise, so we used wavelet reconstruction to fix the noisy image. Given the noisy image and the corrected image, we can isolate where the wavelet method removed noise by subtracting pixel values, similarly to the technique used for MRF noise isolation. The larger the difference after subtraction, the more the pixel value was changed. We can use these differences to choose a threshold at which the wavelet method is considered to have detected noise. To illustrate wavelet reconstruction and noise capture, we used MatLab's built-in noise function imnoise() to damage our image.

Figure 6: Adding noise to an image, wavelet reconstruction, and noise isolation within MatLab
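The pipeline in Figures 5 and 6 can be sketched in MATLAB roughly as follows. This is a minimal sketch assuming the Wavelet Toolbox; the file name 'lena.png', the noise variance, the decomposition level, and the 'penalhi' tuning value are placeholder choices, not the exact settings of the project script:

    % Minimal sketch: add Gaussian noise, denoise with wavelets, isolate noise.
    src   = im2double(imread('lena.png'));      % placeholder grayscale image
    noisy = imnoise(src, 'gaussian', 0, 0.01);  % MatLab's built-in noise

    level  = 5;                                 % decomposition depth (tunable)
    [C, S] = wavedec2(noisy, level, 'bior3.5'); % biorthogonal spline wavelet
    thr    = wthrmngr('dw2ddenoLVL', 'penalhi', C, S, 3);       % tuning parameter 3
    fixed  = wdencmp('lvd', C, S, 'bior3.5', level, thr, 'h');  % hard threshold

    captured = abs(noisy - fixed);              % large values = suspected noise

Subtracting the corrected image from the noisy one, as in the last line, is exactly the noise-capture step shown in Figure 6.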
5 Tests

Gaussian Noise

The first test examines how each method performs on a single image. We used Lena with Orchard's noise code run with a covariance input of 100; the resulting noisy image had an average distance of ≈ 7.07 from the source image. How well Wavelet Decomposition, Markov Random Field methods, and IGH corrected the image was measured by how close each corrected image was to the source image. Since wavelets and MRF are used to detect noise, they were optimized first, using distance from the original image. For wavelets, we fixed the wavelet family to be the biorthogonal spline wavelet and varied the number of levels and the tuning parameter. With Orchard's noise code, the distance between the wavelet fix and the original image was minimized at level 10 with a tuning parameter of 1. To optimize the parameters, we fix every value except one and run tests to see whether adjustments lower the distance from the original image. The MRF method was optimized in the same manner and returned its best results with settings: prior covariance = 100, maximum difference = 256, weight difference = 0.002, and iterations = 10. After optimization, the average distance from the source image was 6.06 for the wavelet-fixed image and 5.55 for the MRF-corrected image.

We next optimize the threshold for when to replace a value with NaN. On average, 0.8 of the absolute maximum of the captured noise was the optimal threshold. To optimize IGH, we again fix all values except one and iterate through multiple candidate inputs to find an optimal value. As seen in the following graphs, none of the changes to the settings of IGH reduced the noise relative to the original noisy image, and at times the result was worse. For these tests, the distance between the noisy image and the source was 7.08, and corrections to the image kept the distance at 7.01 or higher.
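The NaN-replacement step described above can be written as a few lines of MATLAB. This is a minimal sketch of the stated rule (a threshold at a fraction delta of the maximum captured difference), not the project's exact code:

    % Punch NaN holes where the detector flagged noise, for input to IGH.
    % noisy: damaged image; fixed: wavelet- or MRF-corrected image (doubles).
    captured = abs(noisy - fixed);                   % per-pixel change made by the fix
    delta    = 0.8;                                  % fraction of the max difference
    mask     = captured > delta * max(captured(:));  % pixels considered noisy
    ighInput = noisy;
    ighInput(mask) = NaN;                            % IGH treats NaN as missing data
    fprintf('%d pixels replaced with NaN\n', nnz(mask));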
Figure 7: Changing parameters on IGH image correction, with the altered variable on the x axis

Visual results show that the wavelet and Markov Random Field methods properly corrected the image and properly replaced values with NaN values. Iterated Geometric Harmonics corrects the image slightly, but not as well as the wavelet or MRF methods; the test returned distances from the source image for the IGH_wavelet correction, IGH_mrf correction, wavelet reconstruction, and MRF reconstruction of 7.0934, 6.9785, 6.0333, and 5.4173, respectively.
Figure 8: Comparison between the source image, the noisy image, and the four methods of image reconstruction

Salt & Pepper Noise

Figure 9: Salt and pepper noise with density = 0.05; the distance between the noisy image and the source image is 21.0959
  • 14. Salt & pepper noise occurs as spikes of black or white pixels, which leads to smoothing out an image to remove these abrupt changes in value. For MRF, the optimal parameters were found to be: covariance, maximum difference, weight on difference, iterations = 500, 256, 0.1, 10 . With these parameters the difference between the MRF corrected image and the source image is ≈ 8.8 on average. For wavelet reconstruction, optimal settings were found to be level = 10, with a tuning parameter of 10, resulting in an average difference of 12.08. For optimizing IGH, we again looked at plots on how the difference between the corrected image and the source changed upon changing a singular parameter. Using this method to optimize IGH, we ended up with settings: threshold, iterations, kernel parameter, delta = .3, 10, 1000, 0.000001, which averaged a distance of 4.7 when using wavelets to detect noise. When using Markov Random Fields to detect noise, only 6 iterations of IGH were needed, but had a higher average than when using wavelets, 5.2 average versus 4.7. Optimization techniques was similar to the previous section when attempting to reconstruct an image with Gaussian noise. Figure 10: finding optimizing parameters using MRF detection For the following example, the distances for the images ighwvt, ighmrf , waveletfix, and MRFfix were 4.3, 5.2, 12.0, 8.6, respectively. 11
6 Results and Discussion

When comparing the results of all four methods of image reconstruction, wavelet and MRF reconstruction both did a better job of correcting Gaussian noise. This is most likely because Gaussian noise creates slight perturbations across the source image, so when the images are corrected with wavelets and MRF, the noise is difficult to localize: the whole image is corrected, and noise is detected somewhat uniformly. This is supported by the results of the four methods correcting Salt & Pepper noise, where IGH did much better than the MRF and wavelet methods. In the Salt & Pepper case, the noise added to the source image is drastic, which makes it easier for both the wavelet correction and the MRF correction to detect. In the end, MRF methods performed better on average than wavelets at correcting noise. IGH results varied depending on the type of noise added to the source image. When correcting Gaussian noise, IGH didn't correct the image as much as the other two methods, but it fared far better than both when reconstructing images damaged with Salt & Pepper noise. Surprisingly, within the Salt & Pepper correction, while MRF methods produced a more accurate reconstruction than wavelet reconstruction, IGH corrected the image more accurately using the noise detected by the wavelet reconstruction. Results from this experiment depend heavily on outside code, so different results could follow from more sophisticated methods for MRF, wavelets, or IGH, as well as from more accurate methods of optimizing parameters.
7 Code and Usage

Comparisons between the methods are done through the function MethodComparison().

Syntax

MethodComparison(src_image, noise_type, delta, saveTitle);

This function takes an undamaged source image and artificially adds noise to it using values hard-coded into the function. The noise options are as follows:

noise_type 0, imnoise(): uses MatLab's built-in function to add Gaussian noise to the image, with mean and variance preset within the function.
noise_type 1, blurimage(): uses a function provided by Orchard [3] to add Gaussian noise to the image, with an additional variance parameter preset within the function.
noise_type 2, addNoise(): uses a function provided by Dr. Pearse to add Gaussian noise to an inputted image, with probability and extremity parameters preset within the function.
noise_type 3, imnoise(): uses MatLab's built-in noise function to add Salt & Pepper noise to the image, with a set density within the function.

Description

This code relies upon Orchard's MatLab code for MRF and upon Dr. Pearse's IGH code. The function call takes a source image, which is assumed to be undamaged. noise_type is an integer value of 0, 1, 2, or 3 determining the type of noise, with specifics given in the list above. The delta parameter is the fraction of the maximum difference between the noisy image and the repaired image used as the threshold for when to consider a pixel value changed. The final parameter is a string determining the file name under which the final comparison image is saved.

The function is broken into 7 sections, with additional small helper functions after the main body of code; the sections are described below. Because of the number of parameters needed by the larger helper functions (such as MatLab's imnoise()), these parameters were hard-coded. To optimize the settings of each method of image reconstruction, values were changed within the main body of code.

Section descriptions

In section 1 of the code, we initialize values. The variables prob, noise, matlabgauss_mean, matlabgauss_var, orch_covar, and sp_noiseDensity are used in the helper functions that inject noise into the source image. The variables level and sorh are used in wavelet reconstruction and are two of the variables to be tuned for it. These values can be changed as preferred within the function. In section 2, using the values initialized above, we add noise to the source image; the code then displays a comparison between the damaged image and the source image. In sections 3 and 4, we reconstruct the image using wavelets and MRF, respectively. For wavelets, the next major parameter available to change is the tuning parameter, which is the final argument of the wthrmngr() function. Depending on this value, the third argument should be changed to
'penalhi', 'penalme', or 'penallo'. In section 4, mrf() has parameters: noise covariance, maximum allowed difference, weight difference, and number of iterations. In both sections, we additionally create a new image that records the differences between matching pixels.

The next two sections break down the process of using IGH to correct the images. Section 5 creates copies of the noisy images with values above the threshold replaced with NaN values. The last two lines of section 5 provide an option to display how many pixels were changed by the set threshold, to help with debugging or with choosing parameters. Section 6 then runs Dr. Pearse's IGH code, which has input parameters: damaged image, number of iterations, kernel type, kernel parameter, and epsilon. Here the kernel type is fixed to be Gaussian, so the kernel parameter represents the variance. Epsilon determines the lower cutoff for which eigenvalues we can use in IGH. Section 7 displays the numerical distances between the four reconstructions and the source image, displays an image so that the user can visually compare the four reconstructions to the source image and the original noisy image, and saves the visual comparison as saveTitle. The subsequent helper functions are, in order: Orchard's noise function, Dr. Pearse's noise function, and the function that compares images using the metric defined in (1).

Class Support

This function has been tested for source images of types uint8 and double. When using double-valued source images, MatLab's imnoise() has issues adding noise without destroying the image, so it is advised to use uint8 source images.

Example

Running the command MethodComparison(lena, 3, .6, 'example.png'); returns the vector [11.4589 12.2287 1 and saves the following image into the current MatLab directory under the name 'example.png'.

Figure 11: example.png
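For reference, the comparison helper at the end of the file could look like the following minimal sketch. The name imageDistance is our hypothetical stand-in for the project's helper; the body implements the metric of equation (1), written so that it reproduces the scale of the distances reported in Section 5:

    function d = imageDistance(img1, img2)
        % Hypothetical stand-in for the final helper function: the Euclidean
        % distance between the vectorized images, normalized by the image
        % diagonal, i.e. the metric defined in equation (1).
        [n, m] = size(img1);
        dvec = double(img1(:)) - double(img2(:));
        d = sqrt(sum(dvec.^2)) / sqrt(n^2 + m^2);
    end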
8 References

[1] Erin Pearse. Iterated Geometric Harmonics for Data Imputation and Reconstruction of Missing Data.
[2] Stephane S. Lafon. Diffusion Maps and Geometric Harmonics.
[3] Peter Orchard. Markov Random Field Optimization.