1. Submitted By:
Harmeet Kaur
Dept. of Computer Science and Applications (DCSA)
Panjab University, Chandigarh – 160014
PUPIN: 18205000930

Supervisor:
Dr. Satish Kumar
Associate Professor
Department of Computer Science and Applications
Panjab University SSG Regional Centre
Hoshiarpur, Punjab

Co-Guide:
Dr. Behgal KS, Director, Behgal Cancer Hospital
Dr. Yagiyadeep Sharma, R.S.O., Behgal Hospital
2. 1. Introduction
i. RT Planning
ii. Role of Fusion in RT Planning
iii. Image Decomposition
iv. Image Fusion Rules
v. Image Reconstruction
vi. Image Quality Assessment
vii. International status
viii. National status
ix. Research gaps
2. Review of Literature
i. Image Decomposition and Reconstruction Techniques
ii. Image Fusion
iii. Evaluation metrics
3. Problem Statement
i. Problem Definition
ii. Objectives/Aims of the proposed research
iii. Scope of the proposed research
3. 4. Research Methodology
i. Design Methodology
ii. Implementation
a) Hardware
b) Software
iii. Database
5. Validation and Testing
6. References
4. Positron Emission Tomography (PET) images show good functional information.
Computed Tomography (CT) provides information about the anatomical structure of the organs; CT scanners are used to image dense structures such as bone.
Magnetic Resonance Imaging (MRI) gives better soft-tissue contrast.
There is a need to fuse the above modalities to assist the expert in making treatment-planning and diagnostic decisions.
6. Radiotherapy Treatment (RT) Planning is an imperative phase after the confirmation of disease and before treatment delivery.
Target volume delineation is done, i.e. GTV, CTV, PTV, etc., with different colored markers so that the most affected part is precisely delineated.
Based on the target volume delineation, the dose planning step is carried out.
8. To provide a better and more complete view of the image content.
To contour the tumor bed in a better way.
To help in the decision-making process.
10. • Decomposition, also known as the analysis phase, deals with the extraction of features on the basis of frequency, wavelets or edges.
• An image is decomposed so that the variety of information present in each image is extracted into sub-bands.
• The sub-images thus obtained from decomposition contain more useful information and can be given more weightage than the sub-bands containing undesirable information.
• Once the sub-bands are obtained, the respective sub-bands of each image are fused using a fusion technique.
• The main motive behind fusion is to carry these features into the final fused image.
• After the sub-images are successfully fused, a single fused image is reconstructed from the fused sub-images.
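The decompose-fuse-reconstruct cycle described above can be sketched in a few lines of NumPy. The single-level Haar split below is a minimal stand-in for the multi-resolution transforms discussed later, and the average/maximum-magnitude fusion rules are illustrative choices, not the method proposed in this work:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar split into approximation and three detail sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4          # approximation (coarse content)
    LH = (a - b + c - d) / 4          # horizontal details
    HL = (a + b - c - d) / 4          # vertical details
    HH = (a - b - c + d) / 4          # diagonal details
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    """Exact inverse of haar_decompose."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse(img1, img2):
    """Fuse corresponding sub-bands: average the approximations and keep the
    larger-magnitude detail coefficient (illustrative rules only)."""
    s1, s2 = haar_decompose(img1), haar_decompose(img2)
    LL = (s1[0] + s2[0]) / 2
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(s1[1:], s2[1:])]
    return haar_reconstruct(LL, *details)
```

Decomposing, fusing the respective sub-bands, and reconstructing is exactly the three-phase pipeline the bullet points describe.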
14. Soft computing based techniques for medical image fusion
• Soft computing techniques include neural networks, fuzzy logic, genetic algorithms and ANFIS. Neural networks work on the principle of learning and adaptation, imprecise and unclear situations are dealt with by fuzzy logic, genetic algorithms are used for searching and optimization, and ANFIS combines the features of fuzzy logic with neural networks.
• The encouragement for using soft computing methods comes from the fact that these techniques closely resemble human-like decision making and can perform well by learning from human experts, followed by rigorous training and testing.
• The soft computing approach imitates biological and physical processes, which is why such methods are also known as nature-inspired strategies.
16. Results of existing fusion methods: MAX, PCA, MEAN (figure).
17. The Institute of Cancer Research (ICR), UK: The Radiotherapy Treatment Planning team works on novel methods for targeting tumours with external beam radiation. The necessary imaging technology, calculation of dose distributions and optimization of individualized treatment plans are developed. Techniques such as intensity-modulated radiation therapy, volumetric modulated arc therapy and image-guided radiotherapy are continually being improved.
Olivia Newton-John Cancer Wellness & Research Centre, Australia: The focus is on targeting and molecular imaging of tumours and on exploring the receptor-based signaling pathways responsible for cancer cell growth, through the development of innovative strategies for molecular imaging of cancer.
National Cancer Institute, US: The National Cancer Institute's Cancer Imaging Program (CIP) supports research on the use of imaging techniques to noninvasively diagnose cancer and to identify disease subsets in patients, among other research areas. Other opportunities in imaging include the development of better tools for imaging tumors and for reading and interpreting scans.
18. Dana-Farber Cancer Institute, US: The Center for Biomedical Imaging in Oncology at this institute focuses on using state-of-the-art preclinical and clinical imaging to accelerate translational research and develop new diagnostic and therapeutic strategies for patients with cancer. The Center has two primary components: the Lurie Family Imaging Center and a clinical research program. The Lurie Family Imaging Center is a preclinical imaging facility equipped with a 7T MRI, micro PET/CT, ultrasound, bioluminescence, and fluorescence imaging instruments, along with radiochemistry and radiotherapy capabilities. The Imaging Design, Evaluation, and Analysis (IDEA) Lab is a multidisciplinary functional imaging laboratory that provides study design, imaging protocol development, PET/CT scanner evaluation and qualification, quality control and archival of imaging data, diagnostic review of images, quantitative image analysis, and scientific interpretation of final imaging results for numerous institutional, national, and global multicenter cancer therapeutic trials.
19. Postgraduate Institute of Medical Education & Research, Chandigarh: ONCENTRA
Rajiv Gandhi Cancer Institute & Research Centre, Delhi: ECLIPSE
Delhi State Cancer Institute, Delhi: COBALT
AIIMS Delhi: MONACCO
Behgal Cancer Institute (IT & Radiation Technology), Mohali: ONCENTRA
20. The research gaps, as per the literature survey, are as follows:
• Appropriate decomposition levels are required to capture the coarse details of the image.
• A fusion method is required that is capable of differentiating between edge and non-edge regions. Unlike many existing fusion methods, the proposed method will also consider the neighboring pixels.
• Contouring: treatment planning involves contouring, whose quality depends on the fusion process. If the fusion is carried out properly, the tumor area will be maximally covered.
22. Sr. No. | Paper Title | Author | Method | Modalities Analysed | Source | Evaluation Metrics Used
1 | Image fusion using hierarchical PCA | Patil et al. [9] | Multi-resolution analysis | MRI, CT | www.fusion.org | Quantitative and subjective quality analysis
2 | Union Laplacian pyramid with multiple features for medical image fusion | J. Du et al. [13] | Multi-resolution analysis | MRI-CT, MRI-PET, PET-SPECT | Whole Brain Atlas, Harvard Medical School | Quantitative and subjective quality analysis, histogram analysis
3 | Medical Image Fusion with Laplacian Pyramids | A. Sahu [14] | Multi-resolution analysis | MRI-T2, MR-PD, CT | - | Quantitative and qualitative analysis
4 | Fusion of Medical Sensors Using Adaptive Cloud Model in Local Laplacian Pyramid Domain | W. Li et al. [17] | Multi-resolution analysis | MRI, PET, SPECT | Real-time database | Quantitative and subjective analysis
5 | Multi-Modality Medical Image Fusion using Discrete Wavelet Transform | Bhavana V. et al. [19] | Multi-scale geometric analysis / wavelet transform | MRI, PET | Whole Brain Atlas, Harvard Medical School | Quantitative analysis
23. Sr. No. | Paper Title | Author | Method | Modalities Analysed | Source | Evaluation Metrics Used
6 | Medical image fusion by wavelet transform modulus maxima | G. Qu et al. [20] | Multi-scale geometric analysis / wavelet transform | CT, MRI | - | Mutual information (MI)
7 | Pixel based medical image fusion techniques using discrete wavelet transform and stationary wavelet transform | K. P. Indira et al. [22] | Multi-scale geometric analysis / wavelet transform | CT, PET | Real-time database | Objective analysis
8 | Fusion of multimodal medical images using Daubechies complex wavelet transform - A multiresolution approach | R. Singh et al. [31] | Multi-scale geometric analysis / wavelet transform | CT, MRI, MR-T1, MRA | www.imagefusion.org | Quantitative and subjective analysis
9 | Edge Preserving Image Fusion Based on Contourlet Transform | A. Khare et al. [41] | Multi-scale geometric analysis / wavelet transform | Multifocus and medical images | Standard database | Objective analysis
24. Sr. No. | Paper Title | Author | Method | Modalities Analysed | Source | Evaluation Metrics Used
10 | PET and MRI brain image fusion using wavelet transform with structural information adjustment & spectral information patching | Huang et al. [44] | Color-based method | PET, MRI | www.med.harvard.edu | Objective analysis
11 | MRI and PET image fusion by combining IHS and retina-inspired models | Daneshvar et al. [42] | Color-based method | PET, MRI | Harvard website | Visual analysis, statistical assessment
12 | Filter for biomedical imaging and image processing | [45] | Filter-based method | MRI, PET | Real-time database | Quantitative and subjective analysis
13 | Medical Image Fusion Based on Rolling Guidance Filter and Spiking Cortical Model | L. Shuaiqi et al. [46] | Filter-based method | CT, MRI, ultrasound, SPECT | Real-time database | Quantitative and subjective analysis
14 | Image Fusion based on Pixel Significance using Cross Bilateral Filter | Kumar et al. [47] | Filter-based method | IR-visible, multifocus, medical | www.imagefusion.org | Quantitative and subjective analysis
26. S. No. | Paper | Authors | Level of fusion | Technique | Verification methods
1 | Pixel-level image fusion with simultaneous orthogonal matching pursuit | B. Huang et al. [57] | Pixel level | Signal sparse representation theory | Objective metrics
2 | Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT | S. Mazaheri et al. [58] | Pixel level | Hybrid PCA and DWT | Quantitative and subjective analysis
3 | MRI and PET images fusion based on human retina model | D. Sabalan et al. [59] | Feature level | Retina-based model | Objective metrics
4 | Simultaneous image fusion and super-resolution using sparse representation | H. Yin et al. [60] | Pixel level | Sparse representation | Objective metrics and subjective analysis
27. S. No. | Paper | Authors | Level of fusion | Technique | Verification methods
5 | Pixel-level image fusion scheme based on steerable pyramid wavelet transform using absolute maximum selection fusion rule | O. Prakash et al. [62] | Pixel level | Multi-resolution steerable pyramid wavelet transform | Quantitative and qualitative metrics
6 | Medical images fusion by using weighted least squares filter and sparse representation | W. Jiang et al. [67] | Pixel level | Multi-scale edge preserving decomposition and sparse representation | Quantitative and subjective analysis
28. Ref | Type | Modality | Metrics | Metric values obtained
5 | Fatal stroke | MRI-CT | MI, PSNR, QAB/F | 1.7048, 20.2037, 0.5849
5 | Alzheimer | MRI-PET | MI, PSNR, QAB/F | 1.1666, 25.2012, 0.6259
6 | Alzheimer | MRI-PET | QMI, QS, QAB/F | 1.5017, 0.7972, 0.6722
6 | Sub-acute stroke | MRI-SPECT | QMI, QS, QAB/F | 1.3740, 0.8907, 0.6278
6 | Brain tumor | MRI-SPECT | QMI, QS, QAB/F | 1.9809, 0.8248, 0.6875
23 | Normal axial | MRI-PET | MSE, PSNR, AG, SD (w=0.5/0.7) | 0.02819/0.1911, 63.6424/55.3184, 5.6237/6.8573, 8.116/2.2966
23 | Normal coronal | MRI-PET | MSE, PSNR, AG, SD (w=0.5/0.7) | 0.11529/0.18589, 57.5131/55.4383, 5.4715/7.9881, 4.9116/2.63
23 | Alzheimer | MRI-PET | MSE, PSNR, AG, SD (w=0.5/0.7) | 0.10509/0.19144, 58.0621/55.3104, 6.7541/10.5855, 2.3371/0.3808
28 | Mild Alzheimer | MRI-PET | PSNR, entropy, STD | 61.8509, 3.0617, 3.4886
28 | Mild Alzheimer | MRI-PET | PSNR, entropy, STD | 59.5109, 2.9238, 2.3311
28 | Mild Alzheimer | MRI-PET | PSNR, entropy, STD | 62.2149, 2.5149, 1.9743
29 | Normal axial | MRI-PET | SD, AG (w=0.5/0.7) | 6.7169/7.0019, 5.4759/5.5285
29 | Normal coronal | MRI-PET | SD, AG (w=0.5/0.7) | 7.5140/7.8330, 6.3542/6.4355
29 | Alzheimer | MRI-PET | SD, AG (w=0.5/0.7) | 4.9210/4.9731, 5.1964/5.2169
45 | Normal axial | MRI-PET | AVG, O.P, MI | 5.3603, 2.3457, 0.6541
45 | Normal coronal | MRI-PET | AVG, O.P, MI | 6.2927, 1.6104, 0.6551
45 | - | MRI-PET | AVG, O.P, MI | 5.0353, 0.9765, 0.6230
Metrics assessed on the Harvard database.
29. • In the proposed research, radiotherapy treatment planning is improved by fusing multi-modality images such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), etc.
• A novel algorithm for image fusion is proposed which helps the radiation oncologist in contouring on a single fused image.
• Images are combined such that the fused image contains both functional and anatomical information in a single image.
• Fusion will directly affect the treatment execution.
30. Comparing the features of different modalities of images.
Analysing the various image decomposition and reconstruction techniques already available in the literature (multi-scale geometric analysis, color-based).
Studying the various image fusion techniques available in the literature (PCA, averaging method, weighted average, fuzzy logic, ANN).
Proposing a fusion method which will aid contouring.
Image fusion quality assessment: subjective as well as objective comparison methods are used for assessing the quality of the output image.
31. Registered MRI, CT, PET, etc. images are taken as input.
In the decomposition phase, only multi-resolution analysis, filter-based or color-based techniques are considered.
Minor need-based changes, depending on the outcome of the experimental/theoretical study, are made.
Fusion techniques from the following are explored:
a) Principal component analysis (PCA)
b) Averaging method
c) Weighted average
d) Wavelet transform
e) Fuzzy logic
f) ANN
g) ANFIS
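For illustration, the three simplest of the candidate rules above can be sketched in NumPy. The fixed weight `w` and the covariance-eigenvector weighting for PCA are common textbook formulations, offered here as assumptions rather than the settings fixed by this proposal:

```python
import numpy as np

def average_fusion(a, b):
    """Averaging method: plain pixel-wise mean of the two source images."""
    return (a + b) / 2.0

def weighted_average_fusion(a, b, w=0.7):
    """Weighted average: fixed-weight blend; w is an illustrative tuning
    parameter, not a value prescribed in the text."""
    return w * a + (1.0 - w) * b

def pca_fusion(a, b):
    """PCA rule: weights taken from the leading eigenvector of the 2x2
    covariance matrix of the flattened source images."""
    data = np.vstack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # principal component
    w1, w2 = v / v.sum()
    return w1 * a + w2 * b
```

The wavelet, fuzzy-logic, ANN and ANFIS options build on the same idea but fuse transform coefficients rather than raw pixels.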
33. Hardware
• For implementation, an operating system with standard peripherals is required.
• Operating system: Windows 7 (32-bit), x86-based PC
• Processor: Intel(R) Core(TM) i3-3110M CPU @ 2.40 GHz, 2 cores, 4 logical processors
• RAM: 4.00 GB physical memory installed
34. • Software
• MATLAB 7.9.0 (R2009b), 64-bit, or a higher version is used for pre-processing, image decomposition and reconstruction, and fusion, i.e. the codes are written in MATLAB.
• Various tools available in the MATLAB software are used.
• Functions from the libraries of other software tools are used.
• For the Graphical User Interface (GUI), GUIDE will be used.
35. • The experiments are performed on imaging data taken from the Whole Brain Atlas database, available online.
• The Whole Brain Atlas, established by Keith A. Johnson and J. Alex Becker at Harvard Medical School, is a benchmark database for evaluating the performance of multi-modal medical image fusion methods. Importantly, all the images in the database are co-aligned.
36. • For image fusion quality assessment, both subjective and objective comparison methods are needed.
• In the subjective method, visual evaluation is done.
• For objective evaluation there are many performance measures, such as entropy (EN), Average Gradient (AG), Standard Deviation (SD), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), QAB/F, LAB/F, NAB/F, etc.
• The results of the research will be tested and validated on the available images.
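Several of these objective measures are short NumPy computations. The sketch below follows common definitions; exact formula variants (e.g. the AG normalisation and the histogram range) differ between papers and are assumptions here:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """AG: mean magnitude of local intensity gradients (edge sharpness)."""
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def rmse(ref, fused):
    """Root mean square error against a chosen reference image."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; undefined for identical images."""
    return 20.0 * np.log10(peak / rmse(ref, fused))
```

QAB/F, LAB/F and NAB/F additionally require gradient-transfer bookkeeping between each source and the fused image, so they are omitted from this sketch.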
38. • Fuzzy logic uses fuzzy sets to deal with vague values logically.
• The comparison is made with wavelet-based fusion, in which three stages are followed to apply fusion.
• The first stage is decomposition, in which the acquired images are decomposed into sub-bands called approximations and details.
• These sub-bands from each source image are then fused using fuzzy logic.
• The last step is reconstruction, in which the inverse of the decomposition is taken to finally obtain a single image from the sub-bands.
39. • The proposed method is implemented on CT and MRI modalities and is based on a two-input, one-output fuzzy inference system with defined fuzzy rules.
• The decomposition method is the DWT; the min rule is applied to the approximate sub-bands and the max rule to the detail sub-bands.
• Finally, reconstruction is done by taking the inverse DWT.
• A T-S (Takagi-Sugeno) type fuzzy system is implemented. The fuzzy rules are defined based on the pixel intensity of each source image.
40. Algorithm for image fusion using fuzzy logic
// IMG_CT = CT image.
// IMG_MRI = MRI image.
// SB_approx = approximate sub-bands.
// SB_detail = detail sub-bands.
// FUZZY_out = fuzzy inference output (fused sub-bands).
// IMG_recon = reconstructed (fused) image.
STEP 1: Input the two images (IMG_CT, IMG_MRI).
STEP 2: Decompose the images to extract approximate and detail sub-bands.
        DWT(IMG_CT, IMG_MRI)
STEP 3: Fuse the sub-bands obtained after decomposition.
        FUZZY_out = Fuzzy_Logic(SB_approx, SB_detail)
        // min rule on the approximate sub-bands, max rule on the detail sub-bands.
        // 9 fuzzy rules are defined to obtain the result from the T-S fuzzy inference system.
STEP 4: Reconstruct the image.
        IMG_recon = Inverse(FUZZY_out)
        // inverse of the decomposition method (or a varied reconstruction method).
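A minimal runnable sketch of a T-S (Sugeno) per-pixel fusion step of the kind invoked in STEP 3 is shown below. The three triangular membership functions and the 3 x 3 = 9 rule consequents (favouring the brighter input when the other is dark) are illustrative assumptions, not the exact rule base of this algorithm:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function supported on [a, c], peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def ts_fuzzy_fuse(x, y):
    """Zero-order Takagi-Sugeno fusion of two normalised pixel arrays in [0, 1].

    Three memberships (low/med/high) per input yield 9 rules, mirroring the
    9-rule FIS above; each rule's consequent is a fixed blend of the inputs."""
    mf = lambda v: (tri(v, -0.5, 0.0, 0.5),   # low
                    tri(v, 0.0, 0.5, 1.0),    # medium
                    tri(v, 0.5, 1.0, 1.5))    # high
    mx, my = mf(x), mf(y)
    # consequent weight placed on x for each (x-level, y-level) rule pair
    wx = np.array([[0.5, 0.2, 0.1],
                   [0.8, 0.5, 0.2],
                   [0.9, 0.8, 0.5]])
    num = np.zeros_like(np.asarray(x, dtype=float))
    den = np.zeros_like(num)
    for i in range(3):
        for j in range(3):
            fire = mx[i] * my[j]              # product t-norm firing strength
            num += fire * (wx[i, j] * x + (1 - wx[i, j]) * y)
            den += fire
    return num / np.maximum(den, 1e-12)       # weighted-average defuzzification
```

In the full pipeline this function would be applied to corresponding DWT sub-band coefficients rather than raw pixels.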
42. • The experiments were carried out on images from the Harvard database. From the various modalities available, the CT and MRI images were the candidate images fed to the fuzzy logic for fusion.
• After successful implementation of the FIS, the results are compared with the fusion results obtained from wavelets.
• The evaluation is done in two ways: metric calculation and visual inspection.
• The evaluation uses the Peak Signal to Noise Ratio (PSNR), Signal to Noise Ratio (SNR) and Mean Square Error (MSE) metrics, first taking the MRI as the reference image and then the CT.
• The table shows a PSNR of 8.5366 with the MRI as reference and 10.8427 with the CT as reference.
• Similarly, the SNR is 4.2605 with reference image MRI and 6.4822 with reference image CT. The MSE is 9.1079e+04 with reference to the MRI and 1.2185e+04 with reference to the CT.
46. • Scale- as well as orientation-based decomposition is performed before the fusion process begins. A low-pass filter is applied to fetch the low-frequency component and high-pass filters are applied to fetch the high-frequency sub-bands.
• Hence a complete representation of the image is obtained in the decomposed parts, where smoothing is a process of convolving the image with a uniform/Gaussian kernel.
• The Cross Bilateral Filter has an edge-preservation ability which makes it a likely candidate for extracting features in the decomposed sub-bands. Medical images require strict attention at the boundaries as well as the volume within, so the need arises for an algorithm that serves this purpose.
• In the CBF, two factors, the radiometric and geometric sigmas, are used to fine-tune the CBF components. The cross bilateral filter accomplishes edge-preserved smoothing by modifying the kernel based on the local content, which is impossible to achieve with a plain Gaussian kernel. Using cross-bilateral-filter-based decomposition, the detail coefficients are obtained.
47. • The Cross Bilateral Filter (CBF) is widely used by authors for fusion. Here, the CBF is used for decomposing the images, which is a pre-fusion requirement.
• On applying the CBF, the image is decomposed into two components: the CBF component and the detail component. Subtracting the CBF component from the original image gives the detail component.
• This detail component is used for further processing. The detail component of each modality is given as input to the ANFIS for fusion.
• The CBF is used in order to enhance the multi-modality medical image fusion results by providing edge preservation.
• The proposed work is compared with the techniques available in the MATLAB toolbox.
• The purpose is to improve the fusion so as to make it easier for the oncologist to make decisions on the resultant image.
48. • The CBF component is calculated for each input image (A_CBF, B_CBF) while tuning the radiometric sigma and geometric sigma. A Euclidean distance calculation is done so that the neighboring pixels are also considered. When these CBF components are subtracted from their respective original images, the detail components are obtained:
A_DETAIL = A - A_CBF
B_DETAIL = B - B_CBF
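A straightforward NumPy sketch of this step follows: the spatial (geometric) and intensity (radiometric) kernel weights are deduced from one image and applied to smooth the other, then subtraction yields the detail component. The window radius and the two sigma values are illustrative tuning choices, not those used in this work:

```python
import numpy as np

def cross_bilateral_filter(guide, target, radius=2, sigma_s=1.8, sigma_r=25.0):
    """Cross bilateral filter: kernel weights come from `guide` but smooth
    `target`, so the guide's edges are preserved in the output.
    sigma_s is the geometric sigma, sigma_r the radiometric sigma."""
    guide = guide.astype(float); target = target.astype(float)
    pad_g = np.pad(guide, radius, mode='reflect')
    pad_t = np.pad(target, radius, mode='reflect')
    h, w = guide.shape
    num = np.zeros((h, w)); den = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sg = pad_g[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            st = pad_t[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))       # geometric
            w_r = np.exp(-((sg - guide) ** 2) / (2 * sigma_r ** 2))       # radiometric
            num += w_s * w_r * st
            den += w_s * w_r
    return num / den

def detail_component(img, guide, **kw):
    """Detail component: the original image minus its CBF-smoothed version."""
    return img - cross_bilateral_filter(guide, img, **kw)
```

Applying `detail_component` to each modality (using the other modality as the guide) produces the two detail images that are then fed to the fusion stage.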
51. • Input: A (MR-T1), B (MR-T2).
• Decomposition: kernel weights are deduced from one image and applied to the second image, producing A_CBF and B_CBF.
• Fetching the detail image: the output obtained from the above step is subtracted from the original image to get the details A_DETAIL, B_DETAIL.
• Wavelet selection: from the family of wavelets, the biorthogonal wavelet (bior2.2) transform is applied to the source images A and B.
• Fusion strategy: a fuzzy inference system and the average rule are applied to the decomposed parts. Fuzzy logic is applied to the detail components.
• To deal with the approximate components, the average rule is followed to fuse the low-low, high-low and low-high sub-bands. The details obtained from the CBF are fed to a Mamdani-type fuzzy inference system for fusion, with two input variables and one output variable, and a Gaussian membership function defined for each input and the output.
• 25 fuzzy rules are defined to fuse the pixels, with min as the AND method and max as the OR method. For implication, min is used; the max rule is used for aggregation and the centroid for defuzzification.
• Reconstruction: this is the last step, in which the inverse wavelet transform is performed to reconstruct the final fused image. The fused sub-components are combined into a single image which is expected to be more informative for radiotherapy treatment planning.
53. Evaluation of Results: conventional evaluation metrics.
The proposed method is compared with five published fusion schemes: (1) image fusion based on pixel significance using cross bilateral filter; (2) an efficient adaptive fusion scheme for multifocus images in the wavelet domain using statistical properties of the neighborhood; (3) a modified statistical approach for image fusion using the wavelet transform; (4) multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition; (5) a novel multifocus image fusion scheme based on pixel significance using the wavelet transform.

Metric | Proposed method | Method 1 | Method 2 | Method 3 | Method 4 | Method 5
API | 48.2381 | 54.7351 | 46.3165 | 36.4330 | 40.1711 | 44.1301
SD | 63.6862 | 57.6902 | 52.3071 | 51.3242 | 46.8869 | 51.3010
FS | 1.9995 | 1.6142 | 1.6899 | 1.7651 | 1.7126 | 1.6880
CC | 0.7182 | 0.6565 | 0.6374 | 0.5563 | 0.6185 | 0.6011
54. Evaluation of Results: objective evaluation metrics. The comparison methods are the same five fusion schemes as for the conventional metrics.

Metric | Proposed method | Method 1 | Method 2 | Method 3 | Method 4 | Method 5
QAB/F | 0.8940 | 0.8932 | 0.8065 | 0.6900 | 0.7760 | 0.7300
LAB/F | 0.0929 | 0.0961 | 0.1856 | 0.2776 | 0.2137 | 0.2531
NAB/F | 0.0131 | 0.0950 | 0.0735 | 0.2172 | 0.0924 | 0.1310
SUM | 1 | 1 | 1 | 1 | 1 | 1
55. • The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a class of adaptive networks that is functionally equivalent to fuzzy inference systems (FIS).
• The T-S (Takagi-Sugeno) system is fine-tuned using a hybrid learning method: a combination of least-squares estimation and back-propagation gradient descent is used to model the training data set.
• The basic structure of the ANFIS used here has two inputs with five membership functions each, a set of 25 rules and a single output, i.e. the fused image. ANFIS contains adaptive networks with fuzzy rules.
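The forward pass of such a two-input, 25-rule Sugeno ANFIS can be sketched as below. The membership centres and widths and the per-rule linear consequents stand in for parameters that the hybrid least-squares/backpropagation procedure would normally fit from training data:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x, y, centres, sigma, conseq):
    """Forward pass of a first-order Sugeno ANFIS: two scalar inputs, five
    Gaussian MFs per input, 5 x 5 = 25 rules.  `conseq` has shape (25, 3),
    holding [p, q, r] of each rule's output f = p*x + q*y + r."""
    mx = gauss(x, centres, sigma)                 # layer 1: memberships of x
    my = gauss(y, centres, sigma)                 # layer 1: memberships of y
    w = np.outer(mx, my).ravel()                  # layer 2: 25 firing strengths
    wn = w / w.sum()                              # layer 3: normalisation
    f = conseq[:, 0] * x + conseq[:, 1] * y + conseq[:, 2]  # layer 4: rule outputs
    return np.sum(wn * f)                         # layer 5: weighted sum
```

Training adjusts `centres`, `sigma` and `conseq` so that this output matches the desired fused pixel values; the forward structure itself is what makes ANFIS functionally equivalent to a Sugeno FIS.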
73. Comparison of fusion algorithms based on conventional metrics for Case 1 to Case 5 (figure).
74. Comparison of fusion algorithms based on objective metrics for Case 1 to Case 5 (figure).
75. Graph 1: Average Pixel Intensity (API) calculation. Graph 2: Average Gradient (AG) calculation (figures).
81. [1] IAEA PACT, "Global Radiotherapy Coverage," 2018. [Online]. Available: https://www.iaea.org/sites/default/files/18/03/pact-radiotherapy-coverage-010114.pdf.
[2] S. Jelercic and M. Rajer, "The role of PET-CT in radiotherapy planning of solid tumours," 2015.
[3] MacManus et al., "Use of PET and PET/CT for Radiation Therapy Planning: IAEA expert report 2006-2007," Radiother. Oncol., vol. 91, no. 1, pp. 85–94, 2009.
[4] K. Kagawa, W. R. Lee, T. E. Schultheiss, M. A. Hunt, A. H. Shaer, and G. E. Hanks, "Initial clinical assessment of CT-MRI image fusion software in localization of the prostate for 3D conformal radiation therapy," Int. J. Radiat. Oncol., vol. 38, no. 2, pp. 319–325, May 1997.
[5] J. Du, W. Li, K. Lu, and B. Xiao, "An overview of multi-modal medical image fusion," Neurocomputing, vol. 215, pp. 3–20, 2016.
[6] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Trans. Multimed., vol. 15, no. 5, pp. 1014–1024, 2013.
[7] International Atomic Energy Agency (IAEA), "Definition of target volumes and organs at risk," 2005.
[8] G. C. Pereira, M. Traughber, and R. F. Muzic, "The role of imaging in radiation therapy planning: Past, present, and future," Biomed Res. Int., vol. 2014, no. 2, 2014.
[9] U. Patil and U. Mudengudi, "Image fusion using hierarchical PCA," 2011 Int. Conf. Image Inf. Process., pp. 1–6, 2011.
[10] A. Wang, H. Sun, and Y. Guan, "The Application of Wavelet Transform to Multi-modality Medical Image Fusion," 2006 IEEE Int. Conf. Networking, Sens. Control, no. 1, pp. 270–274, 2006.
[11] C. Lin, "Medical Image Fusion Method based on Wavelet Multi-resolution and Entropy," pp. 2329–2333, 2008.
[12] J. Saeedi and K. Faez, "Fisher classifier and fuzzy logic based multi-focus image fusion," Proc. 2009 IEEE Int. Conf. Intell. Comput. Intell. Syst. ICIS 2009, vol. 4, pp. 420–425, 2009.
[13] J. Du, W. Li, B. Xiao, and Q. Nawaz, "Union Laplacian pyramid with multiple features for medical image fusion," Neurocomputing, vol. 194, pp. 326–339, 2016.
[14] A. Sahu, "Medical Image Fusion with Laplacian Pyramids," pp. 448–453, 2014.
[15] J. Fu, W. Li, J. Du, and B. Xiao, "Multimodal medical image fusion via laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy," Comput. Biol. Med., vol. 126, p. 104048, 2020.
[16] Z. Wang, Z. Cui, and Y. Zhu, "Multi-modal medical image fusion by Laplacian pyramid and adaptive sparse representation," Comput. Biol. Med., vol. 123, p. 103823, 2020.
[17] W. Li, J. Du, Z. Zhao, and J. Long, "Fusion of Medical Sensors Using Adaptive Cloud Model in Local Laplacian Pyramid Domain," IEEE Trans. Biomed. Eng., vol. 66, no. 4, pp. 1172–1183, 2019.
[18] V. S. Petrović and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Trans. Image Process., vol. 13, no. 2, pp. 228–237, 2004.
[19] Bhavana V. and Krishnappa H. K., "Multi-Modality Medical Image Fusion using Discrete Wavelet Transform," Procedia Comput. Sci., vol. 70, pp. 625–631, 2015.
[20] G. Qu, D. Zhang, and P. Yan, "Medical image fusion by wavelet transform modulus maxima," vol. 9, no. 4, pp. 713–718, 2001.
82. [21] T. Acharya and C. Chakrabarti, “A survey on lifting-based discrete wavelet transform architectures,” J. VLSI Signal Process.
Syst. Signal Image. Video Technol., vol. 42, no. 3, pp. 321–339, 2006.
[22] K. P. Indira, R. Rani Hemamalini, and R. Indhumathi, “Pixel based medical image fusion techniques using discrete wavelet
transform and Stationary wavelet transform,” Indian J. Sci. Technol., vol. 8, no. 26, pp. 1–7, 2015.
[23] R. P. Desale and S. V. Verma, "Study and Analysis of PCA, DCT & DWT based Image Fusion Techniques," pp. 1–4, 2013.
[24] M. Haribabu, C. H. H. Bindu, and K. S. Prasad, “Multimodal Medical Image Fusion of MRI - PET Using Wavelet Transform,”
pp. 1–4, 2012.
[25] Y. Yang, D. S. Park, S. Huang, Z. Fang, and Z. Wang, “Wavelet based Approach for Fusing Computed Tomography and
Magnetic Resonance Images,” Control Decis. Conf. 2009. CCDC ’09. Chinese, pp. 5770–5774, 2009.
[26] Q. Guihong, Z. Dali, and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima.,” Opt. Express, vol. 9, no.
4, pp. 184–190, 2001.
[27] R. Singh, M. Vatsa, and A. Noore, “Multimodal Medical Image Fusion Using Redundant Discrete Wavelet Transform,” Adv.
Pattern Recognition, 2009. ICAPR ’09. Seventh Int. Conf., pp. 232–235, 2009.
[28] C. Prakash, Medical Image Fusion Based on Redundancy DWT and Mamdani Type Min-sum Mean-of-max Techniques with
Quantitative Analysis. 2012, pp. 54–59.
[29] M. Ciampi, “Medical Image Fusion for Color Visualization via 3D RDWT,” 2010.
[30] R. Singh, R. Srivastava, O. Prakash, and A. Khare, “Multimodal Medical Image Fusion in Dual Tree Complex Wavelet
Transform Domain Using Maximum and Average Fusion Rules,” vol. 2, no. 2, 2012.
[31] R. Singh and A. Khare, “Fusion of multimodal medical images using Daubechies complex wavelet transform - A multiresolution
approach,” Inf. Fusion, vol. 19, no. 1, pp. 49–60, 2014.
[32] E. Thomas, P. B. Nair, S. N. John, and M. Dominic, “Image fusion using daubechies complex wavelet transform and lifting
wavelet transform: A multiresolution approach,” 2014 Annu. Int. Conf. Emerg. Res. Areas Magn. Mach. Drives, AICERA/iCMMD 2014 - Proc.,
2014.
[33] L. Li and H. Ma, “Saliency-Guided Nonsubsampled Shearlet Transform for Multisource Remote Sensing Image Fusion,” 2021.
[34] A. A. Pure, N. Gupta, and M. Shrivastava, “Wavelet and fast discrete curvelet transform for medical application,” 2013 4th Int.
Conf. Comput. Commun. Netw. Technol. ICCCNT 2013, 2013.
[35] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, “A novel algorithm of image fusion using shearlets,” Opt. Commun., vol. 284, no. 6,
pp. 1540–1547, 2011.
[36] Lei Wang, Bin Li, and L. T., "Multi-Modal Medical Volumetric Data Fusion Using 3D Discrete Shearlet Transform and," vol. 61, no. c, pp. 197–206, 2013.
83. [80] M. N. Do and M. Vetterli, “The Contourlet Transform : An Efficient Directional Multiresolution Image Representation,” vol.
14, no. 12, pp. 2091–2106, 2005.
[81] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, “A new contrast based multimodal medical image fusion framework,” Neurocomputing,
vol. 157, pp. 143–152, 2015.
[82] N. A. Al-azzawi, “Color Medical Imaging Fusion Based on Principle Component Analysis and F-Transform,” Pattern Recognit.
Image Anal., vol. 28, no. 3, pp. 393–399, 2018.
[83] F. A. Al-Wassai, N. V Kalyankar, and A. A. Al-Zuky, “The IHS Transformations Based Image Fusion,” Image Rochester NY,
vol. Volume 2, no. No. 5, pp. 70–77, 2011.
[84] S. Lahmiri and M. Boukadoum, “Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features
Extraction from Biomedical Images,” J. Med. Eng., vol. 2013, p. 13, 2013.
[85] H. KAUR, K. S. BEHGAL, and S. KUMAR, “Multi-Modality Medical Image Fusion Using Cross Bilateral Filter with Fuzzy
Logic,” INFOCOMP J. Comput. Sci., vol. 19, no. 2, pp. 141–150, 2020.
[86] D. C. Lepcha, B. Goyal, and A. Dogra, “Image Fusion based on Cross Bilateral and Rolling Guidance Filter through Weight
Normalization,” Open Neuroimag. J., vol. 13, no. 1, pp. 51–61, 2021.
[87] K. Joshi, N. K. Joshi, and M. Diwakar, “Image fusion using cross bilateral filter and wavelet transform domain,” Int. J. Eng.
Adv. Technol., vol. 8, no. 4C, pp. 110–115, 2019.
[88] X. Li, F. Zhou, H. Tan, W. Zhang, and C. Zhao, “Multimodal medical image fusion based on joint bilateral filter and local
gradient energy,” Inf. Sci. (Ny)., vol. 569, pp. 302–325, Aug. 2021.
[89] W. Tan, P. Tiwari, H. M. Pandey, C. Moreira, and A. K. Jaiswal, “Multimodal medical image fusion algorithm in the era of big
data,” Neural Comput. Appl., vol. 2, 2020.
[90] C. M. S. Rani et al., “An Efficient Block Based Feature Level Image Fusion Technique Using Wavelet Transform and Neural
Network,” vol. 52, no. 12, pp. 980–986, 2012.
[91] Y. Wang, J. Dang, Q. Li, and S. H. A. Li, “Multimodal medical image fusion using fuzzy radial basis function neural
networks,” in Proc. 2007 IEEE International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China,
2007, pp. 2–6.
[92] M. Arif and G. Wang, “Fast curvelet transform through genetic algorithm for multimodal medical image fusion,” Soft Comput.,
vol. 24, no. 3, pp. 1815–1836, 2020.
[93] J. Dou, Q. Qin, and Z. Tu, “Image fusion based on wavelet transform with genetic algorithms and human visual system,”
Multimed. Tools Appl., vol. 78, no. 9, pp. 12491–12517, 2019.
[94] M. Thamarai and K. Mohanbabu, “An improved image fusion and segmentation using FLICM with GA for medical
diagnosis,” Indian J. Sci. Technol., vol. 9, no. 12, 2016.
84. [95] S. Das and M. K. Kundu, “A neuro-fuzzy approach for medical image fusion,” IEEE Trans. Biomed. Eng., vol. 60, no. 12,
pp. 3347–3353, 2013.
[96] C. T. Kavitha and C. Chellamuthu, “Multimodal medical image fusion based on integer wavelet transform and neuro-fuzzy,”
in Proc. 2010 Int. Conf. Signal Image Process. (ICSIP), 2010, pp. 296–300.
[97] U. Javed, M. M. Riaz, A. Ghafoor, S. S. Ali, and T. A. Cheema, “MRI and PET Image Fusion Using Fuzzy Logic and Image
Local Features,” vol. 2014, 2014.
[98] J. Teng, S. Wang, J. Zhang, and X. Wang, “Fusion Algorithm of Medical Images Based on Fuzzy Logic,” in Proc. 2010 3rd Int.
Congr. Image Signal Process., vol. 4, 2010, pp. 1552–1556.
[99] Z. Chao, D. Kim, and H. J. Kim, “Multi-modality image fusion based on enhanced fuzzy radial basis function neural
networks,” Phys. Med., vol. 48, pp. 11–20, 2018.
[100] S. Li, H. Yin, and L. Fang, “Group-sparse representation with dictionary learning for medical image denoising and fusion,”
IEEE Trans. Biomed. Eng., vol. 59, no. 12, pp. 3450–3459, 2012.
[101] F. P. M. Oliveira and J. M. R. S. Tavares, “Medical image registration: A review,” Comput. Methods Biomech. Biomed.
Engin., vol. 17, no. 2, pp. 73–93, 2014.
[102] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, “IFCNN: A general image fusion framework based on
convolutional neural network,” Inf. Fusion, vol. 54, pp. 99–118, 2020.
[103] N. Walia, H. Singh, and A. Sharma, “ANFIS: Adaptive Neuro-Fuzzy Inference System- A Survey,” Int. J. Comput. Appl.,
vol. 123, no. 13, pp. 32–38, 2015.
[104] N. B. Bahadure, A. K. Ray, and H. P. Thethi, “Image Analysis for MRI Based Brain Tumor Detection and Feature
Extraction Using Biologically Inspired BWT and SVM,” vol. 2017, 2017.
[105] S. Rajkumar, P. Bardhan, S. K. Akkireddl, and C. Munshi, “CT and MRI Image Fusion based on Wavelet Transform and
Neuro-Fuzzy concepts with quantitative analysis.”
[106] C. T. Kavitha and C. Chellamuthu, “Fusion of PET and MRI images using adaptive neuro-fuzzy inference system,” J. Sci.
Ind. Res., vol. 71, no. 10, pp. 651–656, 2012.
[107] C. Shangli, “Medical image of PET/CT weighted fusion based on wavelet transform,” in ICBBE, 2008, pp. 2523–2525.
[108] P. Shah and S. N. Merchant, “An Efficient Adaptive Fusion Scheme for Multifocus Images in Wavelet Domain Using
Statistical Properties of Neighborhood,” 2011.
[109] B. K. Nobariyan, S. Daneshvar, and A. Foroughi, “A new MRI and PET image fusion algorithm based on pulse coupled
neural network,” in Proc. 2014 22nd Iranian Conference on Electrical Engineering (ICEE), 2014, pp. 1950–1955.
[110] C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4,
pp. 308–309, 2000.
85. 1. Harmeet Kaur and Satish Kumar, “Comparative Analysis of the
Decomposition/Reconstruction Methods for Fusion of Medical Images,” 12th
Chandigarh Science Congress (CHASCON 2018), February 12–14, 2018.
2. Harmeet Kaur and Satish Kumar, “A Review of Methods Used for Fusion
of Images,” International Conference on Science and Technology: Trends and
Challenges (ICSTTC 2018), April 16–17, 2018, in collaboration with Punjab
Academy of Sciences, Patiala.
3. H. Kaur and S. Kumar, “Fusion of Multi-Modality Medical Images: A Fuzzy
Approach,” in Proc. 2018 IEEE 3rd International Conference on
Computing, Communication and Security (ICCCS), 2018.
4. H. Kaur, K. S. Behgal, and S. Kumar, “Multi-Modality Medical Image
Fusion Using Cross Bilateral Filter with Fuzzy Logic,” INFOCOMP J. Comput.
Sci., vol. 19, no. 2, pp. 141–150, 2020.
86. 5. Harmeet Kaur and Satish Kumar, “A Review on
Decomposition/Reconstruction Methods for Fusion of Medical Images,”
International Research Journal on Advanced Science Hub, vol. 2, no. 8, pp. 34–40, 2020.
6. Harmeet Kaur, Satish Kumar, Kuljinder Singh Behgal, and Yagiyadeep Sharma,
“Multi-modality medical image fusion using cross-bilateral filter and neuro-fuzzy
approach,” Journal of Medical Physics, vol. 46, pp. 263–277, 2021.
7. Harmeet Kaur and Satish Kumar, “Role of AI Techniques in Enhancing Multi-
Modality Medical Image Fusion Results,” in Predictive Modelling in Biomedical
Data Mining and Analysis, Elsevier.
91.
• The research work involved modeling a system that fuses the
imaging modalities, making the result suitable for analysis and
treatment planning.
• Verification included mathematical calculation and
visual inspection by an oncologist and a radiation safety
officer.
• The experts validated the performance of the proposed
method in terms of information content, noise
removal, and edge preservation.
• In the future, the performance of the proposed method can be
improved by tuning the ANFIS parameters.
• New metrics can be developed to measure the
performance of fusion methods.
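The point about fusion performance metrics can be illustrated with a minimal sketch of one classical no-reference measure, the Shannon entropy of the fused image (higher entropy loosely indicates higher information content). This is an illustrative sketch only; the 4×4 gray-level image below is hypothetical data, not from the thesis experiments:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of a gray-level image given as a flat
    list of pixel values - a common no-reference fusion quality metric."""
    total = len(pixels)
    counts = Counter(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical 4x4 fused image, flattened to 16 gray levels
fused = [0, 0, 64, 64, 128, 128, 192, 192,
         0, 64, 128, 192, 255, 255, 255, 255]
print(round(image_entropy(fused), 3))  # → 2.311
```

In practice such intensity-based measures are combined with edge-based ones such as the Q^AB/F measure of Xydeas and Petrović [110], which also accounts for how much edge information from each source survives in the fused result.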