The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is a challenge once the number of bands exceeds three, since direct display through the standard red, green, blue (RGB) or hue, saturation, lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three dimensions and then assign each dimension to a color channel. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images, based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, large hyperspectral images are visualized in less time and with the same quality as with the classical visualization method.
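The PCA-to-RGB mapping described above can be sketched on a single machine as follows. This NumPy version is only an illustrative stand-in, not the paper's Apache Spark implementation; the helper name `pca_to_rgb` and the synthetic cube are assumptions for the example.

```python
import numpy as np

def pca_to_rgb(cube):
    """Project a hyperspectral cube (H, W, B) onto its first three
    principal components and stretch each to [0, 255] for RGB display."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)
    pixels -= pixels.mean(axis=0)                 # center each band
    cov = np.cov(pixels, rowvar=False)            # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    top3 = eigvecs[:, np.argsort(eigvals)[::-1][:3]]
    scores = pixels @ top3                        # (H*W, 3) component scores
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    rgb = (scores - lo) / (hi - lo + 1e-12) * 255.0
    return rgb.reshape(h, w, 3).astype(np.uint8)

# Tiny synthetic 8x8 cube with 20 bands
rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 20))
rgb = pca_to_rgb(cube)
print(rgb.shape)  # (8, 8, 3)
```

In a Spark setting, the covariance computation and the projection would be distributed across pixel partitions; the per-pixel logic is the same.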
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. Its main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
www.irjes.com
High Performance Computing for Satellite Image Processing and Analyzing – A ...Editor IJCATR
High Performance Computing (HPC) is a recently developed technology in the field of computer science, which evolved to meet increasing demands for processing speed and for analysing huge data sets. HPC brings together several technologies such as computer architecture, algorithms, programs and system software under one canopy to solve advanced, complex problems quickly and effectively. It is a crucial element today for gathering and processing the large amounts of satellite (remote sensing) data that are the need of the hour. In this paper, we review recent developments in HPC technology (parallel, distributed and cluster computing) for satellite data processing and analysis. We attempt to discuss the fundamentals of HPC for satellite data processing and analysis in a way that is easy to understand without much prior background. We sketch the various HPC approaches, such as parallel, distributed and cluster computing, and the subsequent satellite data processing and analysis methods, such as geo-referencing, image mosaicking, image classification, image fusion and morphological/neural approaches for hyperspectral satellite data. Collectively, these works deliver a snapshot, tables and algorithms of the recent developments in these sectors and offer a thoughtful perspective on the potential and the promising challenges of satellite data processing and analysis using HPC paradigms.
Image fusion is a technique for combining two or more images of the same scene into a single fused image that retains the essential information of the sources. Image fusion is also used for removing noise from images. Noise is an undesirable artifact that degrades the quality of an image and affects its clarity. Noise can be of various kinds, for example Gaussian noise, impulse noise, uniform noise and so forth. Images sometimes degrade during acquisition or transmission, or because of faulty memory locations in the hardware. Image fusion can be performed at three levels: pixel-level fusion, feature-level fusion and decision-level fusion. There are essentially two kinds of image fusion techniques: spatial-domain techniques and transform-domain techniques. PCA fusion, the averaging method and high-pass filtering are spatial-domain methods, while methods that incorporate a transform, such as the Discrete Cosine Transform or the Discrete Wavelet Transform, are transform-domain fusion methods. The various image fusion methods each have numerous advantages and disadvantages; many techniques suffer from the problem of color artifacts appearing in the fused image. Relatedly, one of the most astonishing properties of human stereo vision is the fusion of the left and right views of a scene into a single cyclopean one. Under typical viewing conditions, the world appears as observed from a virtual eye placed halfway between the left and right eye positions. This perceived picture of the world is never recorded directly by any sensory array, but is constructed by our neural machinery. The term cyclopean refers to a class of visual stimuli that is defined by binocular disparity alone. It was suspected that stereopsis might reveal hidden objects, which could be helpful for finding camouflaged items. The critical result of the experiments using random-dot stereograms was that disparity is sufficient for stereopsis, whereas earlier work had only demonstrated that binocular disparity was necessary for stereopsis.
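The PCA fusion mentioned above (a spatial-domain, pixel-level method) can be sketched as follows. This is a minimal illustration, not any particular paper's implementation; the function name `pca_fuse` and the synthetic images are assumptions. The fusion weights come from the leading eigenvector of the 2x2 covariance of the two source images.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Pixel-level PCA fusion of two grayscale images of the same scene.
    Weights are taken from the leading eigenvector of the 2x2 covariance
    of the stacked pixel intensities, normalized to sum to 1."""
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                        # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]        # leading eigenvector
    w = v / v.sum()                           # weights summing to 1
    return w[0] * img_a + w[1] * img_b

# Two noisy observations of the same synthetic scene
rng = np.random.default_rng(1)
scene = rng.random((16, 16))
noisy_a = scene + rng.normal(0, 0.1, scene.shape)
noisy_b = scene + rng.normal(0, 0.1, scene.shape)
fused = pca_fuse(noisy_a, noisy_b)
print(fused.shape)  # (16, 16)
```

The image carrying more variance receives the larger weight, which is the usual rationale for PCA fusion over simple averaging.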
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications require high-resolution images in order to derive and analyze data accurately and easily, and image super-resolution plays an effective role in those applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to the quality of their results and their processing time. This comparative study compares four single-image super-resolution algorithms. For a fair comparison, the algorithms are tested on the same dataset and the same platform, to show the major advantages of each over the others.
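Comparisons like the one above are usually scored with a full-reference metric such as PSNR against a known ground truth. A minimal sketch, using nearest-neighbour upscaling as the simplest baseline (the helper names and the synthetic image are assumptions, not from the study):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB, a standard quality metric
    for comparing a super-resolved image against the ground truth."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def upscale_nearest(img, factor):
    """Nearest-neighbour upscaling: each pixel is replicated factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Synthetic experiment: decimate a smooth ground-truth image, then restore it
x = np.linspace(0, 1, 32)
truth = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) * 0.5 + 0.5
low = truth[::2, ::2]                         # 2x decimation
restored = upscale_nearest(low, 2)
print(round(psnr(truth, restored), 2), "dB")
```

Each algorithm in such a study would replace `upscale_nearest`, with PSNR (or SSIM) and wall-clock time recorded on the same dataset.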
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkCSCJournals
Image denoising is an interesting inverse problem: by denoising we mean finding a clean image, given a noisy one. In this paper, we propose a novel image denoising technique based on the generalized k-density model, as an extension of the probabilistic framework for solving the image denoising problem. The approach is based on using an overcomplete basis dictionary for sparsely representing the image of interest. To learn the overcomplete basis, we use ICA based on the generalized k-density model. The learned dictionary is then used for denoising speech signals and other images. Experimental results confirm the effectiveness of the proposed method for image denoising. A comparison with other denoising methods is also made, and it is shown that the proposed method produces the best denoising effect.
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...ijma
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of the compression scheme. In this paper, we extend the commonly used image compression algorithms and compare their performance. As an image compression technique, we link different wavelet approaches, using traditional mother wavelets and lifting-based Cohen-Daubechies-Feauveau wavelets with low-pass filters of length 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index highlighting the shape of the histogram of the target image is introduced to assess image compression quality. The index is used in place of the existing traditional Universal Image Quality Index (UIQI) "in one go": it offers extra information about the distortion between an original image and a compressed image, in comparison with UIQI. The proposed index is designed by modelling image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. The index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance, based on the proposed image quality indexes. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open-source "BrainWeb: Simulated Brain Database (SBD)".
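The baseline UIQI that the abstract extends combines three of the four factors it lists (loss of correlation, luminance distortion and contrast distortion; the shape factor is the paper's addition and is not sketched here). A minimal NumPy version of the standard Wang-Bovik index, computed over whole images rather than sliding windows:

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index: the product of a correlation term,
    a luminance term and a contrast term. Equals 1.0 for identical images,
    and drops toward 0 as distortion grows."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(2)
original = rng.random((8, 8)) + 0.5
compressed = original + rng.normal(0, 0.05, original.shape)
print(round(uiqi(original, original), 4))   # identical images give 1.0
print(uiqi(original, compressed) < 1.0)
```

The proposed index in the paper would multiply in a fourth, histogram-shape term; its exact form is not given in this abstract.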
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training...CSCJournals
The Internet paved the way for information sharing all over the world decades ago, and its popularity for the distribution of data has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text all across the globe. Despite unprecedented progress in the fields of data storage, computing speed and data transmission speed, the demands of available data and its size (due to increases in both quality and quantity) continue to overpower the supply of resources. One of the reasons for this may be how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena, or Lenna), in terms of accuracy and speed. Based on our results, we conclude that both algorithms are comparable in terms of speed and accuracy. However, the Levenberg-Marquardt algorithm shows slightly better performance in terms of accuracy (as found in the average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fares better in terms of speed (as found in the average training iterations) on a simple MLP structure (2 hidden layers).
A Neural Network Approach to Identify Hyperspectral Image Content IJECEIAES
Hyperspectral imaging is a technique that produces data of very large dimensionality, with hundreds of channels. Because hyperspectral images (HSIs) deliver such complete imaging information, applying a classification algorithm to them is an important tool for practical use. HSIs always contain a large number of correlated and redundant features, which decreases classification accuracy; moreover, feature redundancy adds an extra computational burden without contributing any beneficial information to classification accuracy. In this study, an unsupervised Band Selection Algorithm (BSA) is considered together with Linear Projection (LP), which depends on metric band similarities. Afterwards, the Monogenic Binary Feature (MBF) is used to perform texture analysis of the HSI, where three operational components represent the monogenic signal: phase, amplitude and orientation. In the post-processing classification stage, the feature-mapping function can provide important information, which helps to adapt the Kernel-based Neural Network (KNN) to optimize generalization ability. An alternative multiclass approach can also be adopted through the KNN if we consider multiple output nodes instead of a single output node.
Human Re-identification with Global and Local Siamese Convolution Neural NetworkTELKOMNIKA JOURNAL
Human re-identification is an important task in surveillance systems: determining whether the same human re-appears in multiple cameras with disjoint views. Mostly, appearance-based approaches are used to perform the human re-identification task, because they are less constrained than biometric-based approaches. Most research works apply hand-crafted feature extractors followed by simple matching methods. However, designing a robust and stable feature requires expert knowledge and takes time to tune. In this paper, we propose a global and local structure of Siamese Convolutional Neural Network that automatically extracts features from input images to perform the human re-identification task. Besides, most current single-shot human re-identification approaches do not consider the occlusion issue, due to the lack of tracking information. Therefore, we apply a decision fusion technique to combine global and local features for occlusion cases in single-shot approaches.
Multimode system condition monitoring using sparsity reconstruction for quali...IJECEIAES
In this paper, we introduce an improved multivariate statistical monitoring method based on the stacked sparse autoencoder (SSAE). Our contribution focuses on the choice of an SSAE model based on neural networks to solve diagnostic problems in complex systems. In order to monitor process performance, the squared prediction error (SPE) chart is combined with nonparametric adaptive confidence bounds, which arise from kernel density estimation, to minimize erroneous alerts. Faults are then localized using two methods: contribution plots and the sensor validity index (SVI). The results are obtained from experiments and real data from a drinkable-water processing plant, demonstrating the performance of the applied technique. The simulation results of the SSAE model show a better ability to detect and identify sensor failures.
Noise-robust classification with hypergraph neural networknooriasukmaningtyas
This paper presents a novel version of the hypergraph neural network method, which is used to solve the noisy-label learning problem. First, we apply the PCA dimensionality reduction technique to the feature matrices of the image datasets, in order to reduce the "noise" and the redundant features in those matrices and to reduce the runtime of constructing the hypergraph for the hypergraph neural network method. Then the classic graph-based semi-supervised learning method, the classic hypergraph-based semi-supervised learning method, the graph neural network, the hypergraph neural network, and our proposed hypergraph neural network are employed to solve the noisy-label learning problem. The accuracies of these five methods are evaluated and compared. Experimental results show that the hypergraph neural network methods achieve the best performance as the noise level increases. Moreover, the hypergraph neural network methods are at least as good as the graph neural network.
Data mining techniques application for prediction in OLAP cubeIJECEIAES
Data warehouses represent collections of data organized to support a decision-support process, and provide an appropriate solution for managing large volumes of data. Online analytical processing (OLAP) is a technology that complements data warehouses to make data usable and understandable by users, by providing tools for the visualization, exploration, and navigation of data cubes. Data mining, on the other hand, allows the extraction of knowledge from data with different methods of description, classification, explanation and prediction. As part of this work, we propose new ways to improve existing approaches in the decision-support process. Continuing the line of work on coupling online analysis with data mining to integrate prediction into OLAP, an approach based on machine learning with clustering is proposed, in order to partition an initial data cube into dense sub-cubes that can serve as a learning set to build a prediction model. The data mining technique of regression trees is then applied to each sub-cube to predict the value of a cell.
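The partition-then-predict idea above can be illustrated with a deliberately tiny stand-in: a thresholding step in place of the clustering, and a per-partition mean in place of the regression tree. The cube values and the cell to predict are invented for the example; the abstract does not specify them.

```python
import numpy as np

# Toy cube: axes = (product, region), measure = sales.
# One cell (marked NaN) is predicted from a model trained on its sub-cube.
cube = np.array([[10.0, 12.0, 11.0],
                 [40.0, 42.0, np.nan],
                 [41.0, 43.0, 44.0]])

# Step 1 (clustering stand-in): split rows into two "dense sub-cubes"
# by their mean measure value.
row_means = np.nanmean(cube, axis=1)
labels = (row_means > np.nanmean(row_means)).astype(int)

# Step 2 (regression-tree stand-in): predict the empty cell from the
# mean of its own sub-cube, the simplest per-partition model.
r, c = 1, 2
subcube = cube[labels == labels[r]]
prediction = np.nanmean(subcube)
print(round(prediction, 2))  # 42.0
```

Restricting the learning set to the cell's own dense sub-cube is the point of the approach: a single global model would be pulled toward the low-valued rows.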
A Survey of Spiking Neural Networks and Support Vector Machine Performance By...ijsc
In this paper we study the performance of Spiking Neural Networks (SNNs) and Support Vector Machines (SVMs) using a GPU, a GeForce 6400M. SNNs may be used for clustering, database classification, and odor, speech and image recognition; SVMs are typically applied to clustering, regression and prediction. According to the particular characteristics of these methodologies, they can be parallelized to varying degrees; however, the level of parallelism is limited by the hardware architecture, so better results could very likely be obtained with hardware offering more computational resources. The different approaches are evaluated by training speed and performance. Some authors have coded SVM-light algorithms, but nobody has programmed QP SVM on a GPU. Algorithms were coded by their authors on hardware such as Nvidia cards, FPGAs or sequential circuits, depending on the methodology used, in order to compare learning time between GPU and CPU. The survey also includes a brief description of the types of ANN and their execution techniques, as related to the research results.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art DeepLabv3+ architecture with a ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, with emphasis on addressing false positives and resource efficiency.
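The mean IoU metric reported above is computed per class and then averaged. A minimal sketch (the segmentation maps here are invented toy data, not from the study):

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Mean intersection-over-union across classes for a segmentation map.
    For each class: |pred ∩ truth| / |pred ∪ truth|, skipping absent classes."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy binary segmentation: class 1 = tumor, class 0 = background
truth = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
pred  = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 1]])
print(round(mean_iou(pred, truth, 2), 3))  # 0.792
```

Global accuracy, by contrast, counts all pixels equally, which is why it can sit near 99% while mean IoU is far lower when the tumor class is small.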
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
More related content
Similar to Visualization of hyperspectral images on parallel and distributed platform: Apache Spark
A Survey of Spiking Neural Networks and Support Vector Machine Performance By...ijsc
In this paper we study the performance of Spiking Neural Networks (SNN)and Support Vector Machine (SVM) by using a GPU, model GeForce 6400M. Respect to applications of SNN, the methodology may be used for clustering, classification of databases, odor, speech and image recognition..In case of methodology SVM, is typically applied for clustering, regression and progression. According to particular characteristics of these methodologies,theycan be parallelizedin several grades. However, level of parallelism is limited to architecture of hardware. So, is very sure to get better results using other hardware with more computational resources. The different approaches are evaluated by the training speed and performance. On the other hand, some authors have coded algorithms SVM light, but nobody has programming QP SVM in a GPU. Algorithms were coded by authors in the hardware, like Nvidia card, FPGA or sequential circuits that depends on methodology used, to compare learning timewith between GPU and CPU. Also, in the survey we introduce a brief description of the types of ANN and its techniques of execution to be related with results of researching.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with classification methods such as classifiers. The features like SST, SSH, and different classifiers used to classify the data, have been figured out in this review study. This study underscores the importance of examining potential fishing zones using advanced analytical techniques. It thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for classification of potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models like support vector machines, naïve Bayes, and artificial neural networks (ANN). In the previous result, the results of support vector machine (SVM) were 97.6% more accurate than naive Bayes's 94.2% to classify test data for fisheries classification. By considering the recent works in this area, several recommendations for future works are presented to further improve the performance of the potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity. The goal is to optimize device density. Scientists are reducing connection delays to improve circuit performance. This helped them understand three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and lower interconnects. Electrical involvement is a big worry with 3D integrates circuits. Researchers have developed and tested through silicon via (TSV) and substrates to decrease electrical wave involvement. This study illustrates a novel noise coupling reduction method using several electrical involvement models. A 22% drop in electrical involvement from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding and draughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing the climate change. The analysis's
findings discussed the relevant to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency and recommendations on how
to increase role of the women in addressing the climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Visualization of hyperspectral images on parallel and distributed platform: Apache Spark
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 6, December 2023, pp. 7115~7124
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i6.pp7115-7124
Journal homepage: http://ijece.iaescore.com
Abdelali Zbakh¹, Mohamed Taj Bennani², Adnan Souri³, Outman El Hichami⁴

¹National School of Business and Management Tangier (ENCGT), ER-MSI, Abdelmalek Essaâdi University, Tetouan, Morocco
²LPAIS Laboratory, Computing Science Department, Faculty of Sciences Dhar El Mahraz Fez, Sidi Mohamed Ben Abdellah University, Fez, Morocco
³New Technology Trends for Innovation, Faculty of Sciences Tetouan, Abdelmalek Essaâdi University, Tetouan, Morocco
⁴Applied Mathematics and Computer Sciences Team, Higher Normal School, Abdelmalek Essaâdi University, Tetouan, Morocco
Article history:
Received Jan 2, 2023
Revised May 3, 2023
Accepted Jun 4, 2023

ABSTRACT
The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is a challenge once the number of bands exceeds three, since direct visualization using the standard red, green and blue (RGB) or hue, saturation and lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three dimensions and then assign each dimension to a color. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images, based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, big hyperspectral images are visualized in less time and with the same quality as with the classical visualization method.
Keywords: dimension reduction, hyperspectral, MapReduce, principal component analysis, Spark platform, visualization
This is an open access article under the CC BY-SA license.
Corresponding Author:
Abdelali Zbakh
National School of Business and Management Tangier (ENCGT), ER-MSI, Abdelmalek Essaâdi University
Tetouan, Morocco
Email: a.zbakh@uae.ac.ma
1. INTRODUCTION
Currently, digital display devices produce a color image for the human eye using a combination of three primary colors. A classic red, green and blue (RGB) color image is thus a combination of three layers (bands): R, G and B; likewise, a hue, saturation and lightness (HSL) color image is a combination of three layers: H, S and L [1]. In contrast, hyperspectral images are composed of hundreds of layers (bands). A hyperspectral image can be described as a three-dimensional data cube consisting of two spatial dimensions and one spectral dimension. In this representation, each pixel contains a spectrum of wavelengths within the visible-near infrared range, spanning from 400 to 1,400 nanometers.
Hyperspectral imaging is frequently used in the field of remote sensing, environment monitoring [2],
[3], polarimetric imaging, land cover classification [4], [5] and multimodal medical imaging. In astronomy, for
example, hyperspectral imagery is used to archive soil and space observations. In medical imaging,
hyperspectral imaging is used for the detection of diseases such as cancer [6].
Hyperspectral imaging produces high-dimensional data where each pixel in the image is represented by a spectrum of measurements across many different wavelengths. However, this high dimensionality can make the data challenging to analyze and interpret. The question, then, is how to visualize a hyperspectral cube and give the user, who is not usually a specialist, a synthetic view of the data contained in the image with the minimum possible loss, so as to facilitate interpretation of the image. Among the first solutions proposed was to visualize the whole cube as a video sequence, with each layer of the cube represented by one image. However, when working in the plane with many hyperspectral images of large spectral dimension, this solution remains difficult to put into practice.
So, to visualize a hyperspectral image in color and in the plane when the number of spectral bands exceeds three, it is often necessary to reduce the dimensionality of the image and obtain, from the original image, a composite image consisting of three bands. Dimension reduction techniques such as principal component analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE) can be employed to transform the high-dimensional hyperspectral data into a lower-dimensional space while preserving as much of the information as possible. By reducing the number of dimensions, the data become easier to visualize and interpret: researchers and analysts can gain insights into the underlying patterns and relationships in the data, which helps with tasks such as identifying and classifying different materials or objects in the scene. This, in turn, has applications in fields such as remote sensing, agriculture, and environmental monitoring. Several methods of visualizing hyperspectral images exist: methods based on spectral band selection [7]–[9], methods based on weighting [5], methods based on optimization [6], and projection-based methods [10].
To bypass the computational problem posed by the processing of large hyperspectral cubes [11], [12], we use an open-source framework named Apache Spark [13], which distributes data storage in random-access memory (RAM) and processes the data in parallel. This choice gives us a considerable gain in the time needed to visualize a hyperspectral image.
The rest of this paper is structured as follows: in section 2, we present related work on hyperspectral image visualization methods. In section 3, we describe our parallel distributed visualization approach based on the PCA projection method. In section 4, we evaluate our approach on several free hyperspectral images. We finish the paper with a conclusion and perspectives.
2. RELATED WORKS OF HYPERSPECTRAL IMAGE VISUALIZATION METHODS
In the field of hyperspectral imaging, several methods are used to visualize a hyperspectral image. The literature review revealed that the four main methods used for this purpose are based on band selection, weighting, optimization, and transformation. The first method is based on band selection [7]–[9]: to visualize a hyperspectral image in an RGB representation system, three spectral bands are selected from the original hyperspectral image composed of hundreds of bands, and each selected band is assigned to a color: red, green or blue. This type of visualization method is used in the AVIRIS browser [14]. Visualization with this method is fast, but only the data in the three selected bands are kept and the data from the other bands are ignored, so a large amount of the information in the image is lost. The second method is based on weighting [15]. It produces an image resulting from a linear combination of the input image bands, and comes in two variants: methods based on stretched color matching functions (CMFs) and methods based on bilateral filtering. The advantage of this method is that it uses all the bands of the image, but the difficulty lies in choosing the weights, since the same weight is attributed to all pixels of the image, ignoring their variety. The third method is based on optimization [16]. Here, functions are applied to the image according to an optimized criterion; examples include methods based on Markov random fields and methods based on multi-objective maximization. The big challenge for this method is finding the right function to apply to the image. The last method is based on transformation: the hyperspectral image is visualized by projecting the original image onto a smaller dimension (three, for example). Over the past few years, several dimensionality reduction techniques have emerged to map hyperspectral data to a lower-dimensional space; important examples include Hessian eigenmap embedding, locally linear embedding (LLE), isometric feature mapping (ISOMAP), Laplacian eigenmap embedding, diffusion maps, conformal maps, independent component analysis (ICA) [17], and PCA [18], [19].
In this paper, we use the PCA algorithm of this last method to perform the visualization. PCA is among the dimension reduction algorithms that can be implemented efficiently, and it is used successfully in commercial remote sensing applications [20]. Since we are visualizing a large hyperspectral image, PCA requires a lot of computation time [11], [12], so to solve this problem we use distributed and parallel computing.
At present, two widely used libraries offer a parallel distributed implementation of the PCA algorithm: MLlib on Spark [21], [22] and Mahout, based on MapReduce [23]. Elgamal [24] demonstrated that these two libraries do not allow a flawless analysis of a large mass of data and introduced a novel implementation of PCA called sPCA. This proposed algorithm exhibits superior scalability and
accuracy compared to its competitors. Wu et al. [25] proposed a new distributed parallel implementation for
the PCA algorithm. The implementation is done using the Spark platform and the results obtained are
compared with a serial implementation on MATLAB and a parallel implementation on Hadoop. The
comparison demonstrates the effectiveness of the proposed implementation in terms of both precision and
computation time.
3. THE PROPOSED VISUALIZATION APPROACH
To comprehend the information concealed within the hyperspectral image cube or extract a relevant
portion of the image, visualization is often employed. However, due to the limitations of human perception,
we can only visualize a limited number of hyperspectral bands (typically up to 3). Before embarking on the
visualization of our hyperspectral image, it is necessary to reduce the number of spectral bands to 3 without
compromising the quality of information. In the subsequent steps, we will employ PCA, a widely used
technique in various domains such as dimensionality reduction, image processing, data visualization, and
discovering underlying patterns within the data.
3.1. Classic PCA algorithm
PCA [26] is a dimensionality reduction technique employed to reduce the dimensions of a matrix
containing quantitative data. This approach enables the extraction of the dominant profiles from the matrix. To
utilize the PCA algorithm (refer to Algorithm 1) on the hyperspectral image, we consider the hyperspectral
image M as a matrix of size (𝑚 = 𝐿 × 𝐶, 𝑁), where C represents the number of columns, L represents the
number of rows, and N represents the number of bands in the image. It is important to note that 𝑚 >> 𝑁,
indicating a significantly higher number of pixels than the number of spectral bands. Every row of the matrix
M corresponds to a pixel vector: for example, the first pixel is represented by the vector [M_11, M_12, …, M_1N], where M_1j is the value of pixel 1 in band j. Each column of the matrix M represents the values of all pixels in the image captured by a specific band: for example, the first column [M_11, M_21, …, M_m1] represents the data of the image taken in band 1. In formula (1), M̄_j denotes the average of column j and σ_j the standard deviation of column j. In formula (2), MRC^T · MRC denotes the matrix product between the transpose of the matrix MRC and the matrix MRC itself.
Algorithm 1. Classical PCA algorithm
Algorithm Classical_PCA(M)
Input: matrix M of dimension (m, N)
Output: matrix U of dimension (m, 3)
# Calculate the reduced centered matrix of M, denoted MRC
- for each i = 1...m and for each j = 1...N:
      MRC_ij = (M_ij − M̄_j) / σ_j                                        (1)
  with M̄_j = (1/m) Σ_{i=1}^{m} M_ij and σ_j² = (1/m) Σ_{i=1}^{m} (M_ij − M̄_j)²
# Calculate the correlation matrix of size (N, N), denoted MC
      MC = (1/m) (MRC^T · MRC)                                           (2)
- Calculate the eigenvalues and eigenvectors of the MC matrix, denoted [λ, V]
- Sort the eigenvectors in descending order of the eigenvalues and take the first k columns of V (k < N)
- Project the matrix M onto the vectors V: U = M · V
- Use the new matrix U of size (m, k) for the visualization of the hyperspectral image
- return U
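Algorithm 1 can be sketched in a few lines of NumPy. This is a minimal illustration of the classical (serial) variant, not the paper's code; the function name `classical_pca` is ours.

```python
import numpy as np

def classical_pca(M, k=3):
    """Classical PCA (Algorithm 1): M is an (m, N) matrix, m pixels x N bands."""
    # (1) reduced centered matrix: subtract column means, divide by column stds
    mean = M.mean(axis=0)
    std = M.std(axis=0)              # population std (1/m), as in the paper
    MRC = (M - mean) / std
    # (2) correlation matrix of size (N, N)
    MC = (MRC.T @ MRC) / M.shape[0]
    # eigendecomposition; eigh returns eigenvalues in ascending order
    lam, V = np.linalg.eigh(MC)
    order = np.argsort(lam)[::-1]    # sort eigenvectors by descending eigenvalue
    V = V[:, order[:k]]              # keep the first k columns
    # project M onto V and return the (m, k) matrix used for visualization
    return M @ V

# Usage for display (hypothetical names): flatten the (rows, cols, bands) cube,
# project, then reshape the 3 components back into an RGB-like image:
# rgb = classical_pca(cube.reshape(-1, bands)).reshape(rows, cols, 3)
```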
3.2. Proposed distributed and parallel PCA algorithm
Due to the large size of hyperspectral images, the traditional PCA algorithm necessitates
computationally intensive processing. In this section, we will introduce a parallel distributed implementation
method of the algorithm utilizing the Spark platform. Given that a hyperspectral image captures the same
scene across multiple spectral bands, we can decompose the hyperspectral image into individual images,
each representing a specific spectrum (as depicted in Figure 1). First, we transform the hyperspectral cube into a one-dimensional vector V of size N; each element of V contains an image of size L×C corresponding to a certain band. Each image Vt is then stored in RAM as a resilient distributed dataset (RDD). To build a parallel distributed implementation of PCA, we leverage Spark's MapReduce paradigm. The proposed algorithm operates in the following manner:
Figure 1. Descriptive diagram of the proposed PCA algorithm
Step 1: Calculate the reduced centered matrix of V:
As previously mentioned, the vector V comprises multiple images, with each image Vt represented by a matrix (a resilient distributed dataset in Spark notation) of size (L, C), where L is the number of rows in the image and C the number of columns. Hence, in order to compute the reduced centered matrix of V, denoted MRC, a parallel distributed computation is performed on each image Vt.
- Calculate the centered matrix of V, denoted MC (Algorithm 2):

      MC_tij = V_tij − V̄_t   for each i = 1 to L and for each j = 1 to C   (3)

  with V̄_t = (1/(L×C)) Σ_{i=1}^{L} Σ_{j=1}^{C} V_tij
Algorithm 2. Calculating the centered matrix with Spark
Algorithm Center_Images(V)
Input: vector of images V
Output: vector of centered images MC
For each image Vt do
    Map1:
        for each line i of image Vt do
            calculate X[i] = sum(Vt_i)
        return X
    Reduce1:
        calculate the average of all pixels: avg = sum(X)/(L×C)
    Map2:
        for each line i of image Vt do
            for each value Vt_ij do
                calculate MC_tij = Vt_ij − avg
        return MC
In formula (3), V̄_t denotes the average of image Vt.
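The Map/Reduce stages of Algorithm 2 can be mirrored in plain Python. This is a sketch under our own naming, using ordinary lists; in the actual Spark implementation each image Vt is an RDD and these loops become `map`/`reduce` calls that run in parallel across the cluster.

```python
from functools import reduce

def center_image(Vt):
    """Center one band image Vt (a list of rows), mirroring Algorithm 2."""
    L, C = len(Vt), len(Vt[0])
    # Map1: one partial sum per row (on Spark these run in parallel)
    X = [sum(row) for row in Vt]
    # Reduce1: combine the partial sums into the mean of all pixels
    avg = reduce(lambda a, b: a + b, X) / (L * C)
    # Map2: subtract the mean from every pixel
    return [[v - avg for v in row] for row in Vt]

# Applied band by band: MC = [center_image(Vt) for Vt in V]
```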
- Calculate the reduced centered matrix, denoted MRC (Algorithm 3):

      MRC_tij = MC_tij / σ_t   for each i = 1 to L and for each j = 1 to C   (4)

  with σ_t² = (1/(L×C)) Σ_{i=1}^{L} Σ_{j=1}^{C} (V_tij − V̄_t)²
  or, equivalently, σ_t² = (1/(L×C)) Σ_{i=1}^{L} Σ_{j=1}^{C} MC_tij²

In formula (4), σ_t denotes the standard deviation of image Vt.
Algorithm 3. Calculating the reduced centered matrix with Spark
Algorithm Reducing_Images(MC)
Input: vector of centered images MC
Output: the reduced centered matrix MRC
For each centered image MCt do
    Map1:
        # Compute the scaled squares used for the standard deviation σt
        for each line i of image MCt do
            for each value MC_tij do
                calculate SQ_tij = (MC_tij × MC_tij)/(L×C)
        return SQ
    Reduce1:
        calculate σt = sqrt(sum(SQ))
    Map2:
        for each line i of image MCt do
            for each value MC_tij do
                calculate MRC_tij = MC_tij/σt
        return MRC
Step 2: Calculate the correlation matrix of MRC, of size (N, N), denoted MCorr:
As stated in step 1, MRC is an image vector of size N; every image MRCt corresponds to a reduced centered matrix. We now use Spark's distributed parallel computation framework, MapReduce, to compute the correlation matrix of size (N, N) by performing a matrix product between MRC^T and MRC:

      MCorr = (1/(L×C)) (MRC^T · MRC)                                        (5)

      MCorr_tk = (1/(L×C)) (MRCt · MRCk)   for each t = 1 to N and for each k = 1 to N   (6)

  with MRCt · MRCk = Σ_{i=1}^{L} Σ_{j=1}^{C} MRC_tij × MRC_kij

To determine the value of each MCorr_tk in formula (6), the image MRCt is multiplied by the image MRCk pixel by pixel; we then sum the result and divide by L×C (Algorithm 4):
Algorithm 4. Calculation of the correlation matrix using Spark
Algorithm Correlation_Images(MRC)
Input: reduced centered matrix MRC
Output: correlation matrix MCorr of size N×N
For each 1 ≤ t ≤ N do
    For each 1 ≤ k ≤ N do
        calculate S = Product(MRCt, MRCk)
        MCorr_tk = S/(L×C)
return MCorr

Algorithm Product(MRCt, MRCk)
Input: two reduced centered matrices MRCt, MRCk
Output: sum of the product of the two images pixel by pixel
Map1:
    for each line i of image MRCt do
        calculate X_i = (MRCt_i, MRCk_i)   # pair the matching rows
    return X
Map2:
    for each line i of X do
        calculate X_i = scalar product between X_i[0] and X_i[1]
    return X
Reduce1:
    calculate S = Sum(X)
return S
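Algorithm 4 can likewise be sketched in plain Python. This is an illustrative stand-in with our own function names; on Spark, `product` would be expressed as a `zip`/`map`/`reduce` over two RDDs rather than over lists.

```python
from functools import reduce

def product(MRCt, MRCk):
    """Algorithm 4's Product: sum of the pixel-by-pixel product of two images."""
    # Map1 + Map2: pair matching rows and take their scalar products
    X = [sum(a * b for a, b in zip(rt, rk)) for rt, rk in zip(MRCt, MRCk)]
    # Reduce1: combine the per-row partial sums
    return reduce(lambda a, b: a + b, X)

def correlation(MRC, L, C):
    """Correlation matrix MCorr of size (N, N), following formula (6)."""
    N = len(MRC)
    return [[product(MRC[t], MRC[k]) / (L * C) for k in range(N)]
            for t in range(N)]
```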
Step 3: Calculate the eigenvectors and eigenvalues of the MCorr matrix: [λ, V]
Step 4: Arrange the eigenvectors in descending order of their corresponding eigenvalues and select the first three columns of V (3 < N)
Step 5: Project the matrix M onto the vectors V: U = M · V
Step 6: Use the newly obtained matrix U, of size (m, 3), to visualize the hyperspectral image.
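Since MCorr is only N×N, steps 3 to 6 can run on a single node with an ordinary eigendecomposition; a NumPy sketch under that assumption (the function name is ours):

```python
import numpy as np

def project_for_display(M, MCorr):
    """Steps 3-6: eigendecompose the small (N, N) correlation matrix,
    keep the top-3 eigenvectors, and project the (m, N) pixel matrix."""
    lam, V = np.linalg.eigh(np.asarray(MCorr))   # eigenvalues in ascending order
    top3 = np.argsort(lam)[::-1][:3]             # indices of the 3 largest
    U = np.asarray(M) @ V[:, top3]               # resulting (m, 3) matrix
    return U                                     # one column per display channel
```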
4. EXPERIMENTS AND COMPUTATIONS
To test the proposed algorithm, we used the free Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS) Moffett Field image, acquired on August 20, 1992, with 224 spectral bands spanning 400 to 2,500 nanometers [14]. From this hyperspectral image we took samples of different sizes (Table 1), and on each sample we ran both the proposed distributed parallel algorithm and a serial implementation of classical PCA from the Python library scikit-learn. We collected the three largest eigenvalues (Table 2) and the execution time of each algorithm (Figure 2).
Table 1. Datasets
Dataset     Name           Spatial dimensions   Hyperspectral bands
Dataset 1 Moffett Field 500×500 3
Dataset 2 Moffett Field 500×500 10
Dataset 3 Moffett Field 1924×753 3
Dataset 4 Moffett Field 1924×753 10
Dataset 5 Moffett Field 1924×753 15
Dataset 6 Moffett Field 1924×753 20
Dataset 7 Moffett Field 1924×753 25
Dataset 8 Moffett Field 1924×753 50
Dataset 9 Moffett Field 1924×753 75
Dataset 10 Moffett Field 1924×753 100
Dataset 11 Moffett Field 1924×753 150
Dataset 12 Moffett Field 1924×753 224
Table 2. Example of the top three eigenvalues obtained from PCA
Dataset      Sklearn PCA                                     Proposed PCA
Dataset 1    1.9371026343, 0.913755533084, 0.149141832615    1.93710263, 0.9137555, 0.14914183
Dataset 5    10.06478855, 4.01771578, 0.5480712              10.06478855, 4.01771578, 0.5480712
Dataset 12   160.63876264, 28.00313352, 14.56390331          160.63876264, 28.00313352, 14.56390331
Figure 2. The runtime comparison between sklearn and the proposed PCA
The classical PCA of the scikit-learn library was tested on one computer equipped with an Intel® Core™ i7-2820QM CPU @ 2.30 GHz × 8, 8 GB of RAM, and Ubuntu 16.04 LTS. The proposed distributed parallel algorithm was tested on the Databricks cloud [26] with the configuration shown in Table 3. Both algorithms are programmed in Python. The runtime comparison shows that sklearn PCA is faster for small images, but once the image has more than about 10 spectral bands, our proposed PCA is faster (Figure 2). Figure 3 illustrates the visualization of a hyperspectral image dataset: in Figure 3(a) the image is displayed without any PCA applied, presenting the original representation, while Figure 3(b) shows the image after applying either classical PCA or the newly proposed distributed PCA method.
Table 3. Configuration parameters of the Spark cluster in the Databricks cloud
Driver node: 36 GB memory, 8 cores
Worker nodes: 6, each with 8.0 GB memory, 2 cores
Figure 3. Visualization of hyperspectral image (dataset 12): (a) the image displayed without PCA and (b) the image after applying either classical PCA or the proposed distributed PCA
5. CONCLUSION
In this work, a method for visualizing a hyperspectral image has been proposed, based on reducing the dimensionality of the image in a parallel distributed environment. The algorithm was developed in Python 3 and evaluated on hyperspectral images using the Spark platform. The results obtained align with those of traditional PCA, and the visualization of the images after applying our reduction algorithm confirms its validity. By comparing the execution times of sklearn PCA and the proposed PCA, we found that the proposed algorithm is faster when processing large images, which suggests that it handles larger data sets more efficiently than the classical algorithm.
REFERENCES
[1] J. S. Tyo, A. Konsolakis, D. I. Diersen, and R. C. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 41, no. 3, pp. 708–718, Mar. 2003, doi: 10.1109/TGRS.2003.808879.
[2] J. Pontius, M. Martin, L. Plourde, and R. Hallett, “Ash decline assessment in emerald ash borer-infested regions: A test of tree-
level, hyperspectral technologies,” Remote Sensing of Environment, vol. 112, no. 5, pp. 2665–2676, May 2008, doi:
10.1016/j.rse.2007.12.011.
[3] Y.-T. Chan, S.-J. Wang, and C.-H. Tsai, “Real-time foreground detection approach based on adaptive ensemble learning with
arbitrary algorithms for changing environments,” Information Fusion, vol. 39, pp. 154–167, Jan. 2018, doi:
10.1016/j.inffus.2017.05.001.
[4] G. Cheng, J. Han, and X. Lu, “Remote sensing image scene classification: Benchmark and state of the art,” Proceedings of the
IEEE, vol. 105, no. 10, pp. 1865–1883, Oct. 2017, doi: 10.1109/JPROC.2017.2675998.
[5] P. Duan, X. Kang, S. Li, and P. Ghamisi, “Noise-robust hyperspectral image classification via multi-scale total variation,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 6, pp. 1948–1962, Jun. 2019, doi:
10.1109/JSTARS.2019.2915272.
[6] Y. Khouj, J. Dawson, J. Coad, and L. Vona-Davis, “Hyperspectral imaging and K-means classification for histologic evaluation of
ductal carcinoma in situ,” Frontiers in Oncology, vol. 8, Feb. 2018, doi: 10.3389/fonc.2018.00017.
[7] H. Su, Q. Du, and P. Du, “Hyperspectral image visualization using band selection,” IEEE Journal of Selected Topics in Applied
Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 2647–2658, Jun. 2014, doi: 10.1109/JSTARS.2013.2272654.
[8] Y. Yuan, G. Zhu, and Q. Wang, “Hyperspectral band selection by multitask sparsity pursuit,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 53, no. 2, pp. 631–644, Feb. 2015, doi: 10.1109/TGRS.2014.2326655.
[9] G. Zhu, Y. Huang, J. Lei, Z. Bi, and F. Xu, “Unsupervised hyperspectral band selection by dominant set extraction,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 54, no. 1, pp. 227–239, Jan. 2016, doi: 10.1109/TGRS.2015.2453362.
[10] C. Theoharatos, V. Tsagaris, N. Fragoulis, and G. Economou, “Hyperspectral image fusion using 2-D principal component
analysis,” in 2011 2nd International Conference on Space Technology, Sep. 2011, pp. 1–4, doi: 10.1109/ICSpT.2011.6064682.
[11] W. Liu, X. Yang, D. Tao, J. Cheng, and Y. Tang, “Multiview dimension reduction via hessian multiset canonical correlations,”
Information Fusion, vol. 41, pp. 119–128, May 2018, doi: 10.1016/j.inffus.2017.09.001.
[12] R. S. Lynch and P. K. Willett, “Use of Bayesian data reduction for the fusion of legacy classifiers,” Information Fusion, vol. 4, no.
1, pp. 23–34, Mar. 2003, doi: 10.1016/S1566-2535(02)00098-2.
[13] S. Salloum, R. Dautov, X. Chen, P. X. Peng, and J. Z. Huang, “Big data analytics on apache spark,” International Journal of Data
Science and Analytics, vol. 1, no. 3–4, pp. 145–164, Nov. 2016, doi: 10.1007/s41060-016-0027-9.
[14] A. Zbakh, Z. Alaoui, A. Benyoussef, A. El, and M. El, “Spectral classification of a set of hyperspectral images using the
convolutional neural network, in a single training,” International Journal of Advanced Computer Science and Applications, vol. 10,
no. 6, 2019, doi: 10.14569/IJACSA.2019.0100634.
[15] K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 48, no. 5, pp. 2308–2316, May 2010, doi: 10.1109/TGRS.2009.2037950.
[16] M. Mignotte, “A bicriteria-optimization-approach-based dimensionality-reduction model for the color display of hyperspectral
images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 2, pp. 501–513, Feb. 2012, doi:
10.1109/TGRS.2011.2160646.
[17] Q. Du, N. Raksuntorn, S. Cai, and R. J. Moorhead, “Color display for hyperspectral imagery,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 46, no. 6, pp. 1858–1866, Jun. 2008, doi: 10.1109/TGRS.2008.916203.
[18] B. Zhao, X. Dong, Y. Guo, X. Jia, and Y. Huang, “PCA dimensionality reduction method for image classification,” Neural
Processing Letters, vol. 54, no. 1, pp. 347–368, Feb. 2022, doi: 10.1007/s11063-021-10632-5.
[19] X. Jia and J. A. Richards, “Segmented principal components transformation for efficient hyperspectral remote-sensing image
display and classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 538–542, 1999, doi:
10.1109/36.739109.
[20] H. Zhang, D. W. Messinger, and E. D. Montag, “Perceptual display strategies of hyperspectral imagery based on PCA and ICA,”
in Proc. SPIE 6233, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, May 2006, doi:
10.1117/12.665696.
[21] X. Meng et al., “MLlib: Machine learning in apache spark,” Journal of Machine Learning Research, vol. 17, pp. 1–7, 2016.
[22] O. Azeroual and A. Nikiforova, “Apache spark and MLlib-based intrusion detection system or how the big data technologies can
secure the data,” Information, vol. 13, no. 2, Jan. 2022, doi: 10.3390/info13020058.
[23] N. S. Sagheer and S. A. Yousif, “A parallel clustering analysis based on hadoop multi-node and apache mahout,” Iraqi Journal of
Science, pp. 2431–2444, Jul. 2021, doi: 10.24996/ijs.2021.62.7.32.
[24] T. Elgamal, M. Yabandeh, A. Aboulnaga, W. Mustafa, and M. Hefeeda, “SPCA: Scalable principal component analysis for big data
on distributed platforms,” in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, May 2015,
pp. 79–91, doi: 10.1145/2723372.2751520.
[25] Z. Wu, Y. Li, A. Plaza, J. Li, F. Xiao, and Z. Wei, “Parallel and distributed dimensionality reduction of hyperspectral data on cloud
computing architectures,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 6,
pp. 2270–2278, Jun. 2016, doi: 10.1109/JSTARS.2016.2542193.
[26] Y. O. Sayad, H. Mousannif, and H. Al Moatassime, “Predictive modeling of wildfires: A new dataset and machine learning
approach,” Fire Safety Journal, vol. 104, pp. 130–146, Mar. 2019, doi: 10.1016/j.firesaf.2019.01.006.
BIOGRAPHIES OF AUTHORS
Abdelali Zbakh is an assistant professor in Computer Science at National School
of Business and Management-Tangier, Abdelmalek Essaâdi University Morocco. He is a former
Teacher of computing at preparatory classes for engineering schools – Tangier. He received his
Ph.D. degree in computer science from the faculty of sciences Rabat - Mohammed V university
of Rabat. His current research interests include information systems, machine learning and deep
learning. He can be contacted at email a.zbakh@uae.ac.ma.
Mohamed Taj Bennani received his master’s degree in Computer Science and
Networking from Science Faculty of Tangier, Tangier in 2011. He received his Ph.D. in 2019
(Computing Science and Networking) from Science Faculty of Tangier. At present, he is
working as a prof. at Faculty of Science of Dhar el Mahraz Fez since 2019. He can be contacted
at email Bennani.taj@gmail.com.
Adnan Souri is an assistant professor in Computer Science at Abdelmalek Essaâdi
University, Faculty of Sciences Tetouan, Morocco. He is a former Teacher of computing at
preparatory classes for engineering schools – Tangier. He received his Ph.D. degree in computer
science from the Faculty of Sciences Tetouan, Abdelmalek Essaâdi University Morocco. His
current research interests include artificial intelligence, artificial neural networks and algorithms.
His current project is “Arabic language processing”. He can be contacted at email
a.souri@uae.ac.ma.
Outman El Hichami received his Ph.D. in 2017 from the Faculty of Sciences in
Tetouan, Morocco. In 2018, he joined the Higher Normal School in Tetouan as an assistant
professor in Computer Science, where he was promoted to associate professor in 2022. His
research interests are in formal methods and machine learning. He can be contacted at email
oelhichami@uae.ac.ma.