This document summarizes tests of an image retrieval technique that uses the row means of transformed image columns as a feature vector, evaluated on a database of 1000 images across 11 categories. Seven transforms were tested: DCT, DST, Haar, Walsh, Kekre, Slant, and Hartley. For each transform, precision and recall were computed with and without the DC component in the feature vector, on both gray-scale and color versions of the database. In general, the technique produced higher precision and recall than using the full transform or simple row means, especially when the DC component was included. For gray images, the best performing transforms were DST, Haar, Hartley, DCT, ...
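As a rough illustration of the row-mean idea, here is a minimal sketch assuming an orthonormal DCT-II as the column transform (the study compared several transforms; the function names are mine, not the paper's):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix; one of several transforms compared.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def row_mean_feature(img, keep_dc=True):
    # Transform every column, then average across columns: one value per row.
    transformed = dct_matrix(img.shape[0]) @ img
    feature = transformed.mean(axis=1)
    return feature if keep_dc else feature[1:]
```

The feature vector length equals the image height (or one less without the DC term), which is far smaller than the full transformed image.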
Comprehensive Performance Comparison of Cosine, Walsh, Haar, Kekre, Sine, Sla... (CSCJournals)
This document presents a comparison of various image transforms for content-based image retrieval (CBIR) using fractional coefficients of transformed images. It proposes CBIR techniques that extract features from images transformed using discrete cosine, Walsh, Haar, Kekre, discrete sine, slant, and discrete Hartley transforms. Features are extracted from the gray-scale and individual color planes of images. Fractional coefficients, representing percentages of the full transformed image, are used to reduce feature vector size and speed up retrieval times compared to using the full transform. The techniques are tested on a database of 1000 images from 11 categories, and results show the Kekre transform achieves the best precision and recall, outperforming other transforms when using 6
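The fractional-coefficient idea can be sketched as keeping only the low-frequency corner of a transformed image (a hedged sketch; the mapping from percentage to block size is my assumption, not the paper's exact scheme):

```python
import numpy as np

def fractional_coefficients(coeffs, fraction):
    # Keep the top-left square holding roughly `fraction` of the full
    # coefficient matrix (the low-frequency corner for DCT-like transforms).
    n = min(coeffs.shape)
    k = max(1, int(round(n * np.sqrt(fraction))))
    return coeffs[:k, :k].ravel()
```

A 25% fraction of a 16x16 coefficient matrix keeps an 8x8 corner, shrinking the feature vector from 256 to 64 values.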
Analysis of combined approaches of CBIR systems by clustering at varying prec... (IJECEIAES)
The image retrieval system is used to retrieve images from an image database. Two types of image retrieval techniques are commonly used: content-based and text-based. One well-known technique that retrieves images in an unsupervised way is the cluster-based image retrieval technique, in which all visual features of an image are combined to improve the retrieval rate and precision. The objectives of the study were to develop a new model combining three traits of an image: color, shape, and texture. The color-shape and color-texture models were compared against a threshold value at various precision levels. A union of the newly developed model with the color-shape and color-texture models was formed to find the retrieval rate, in terms of precision, of the image retrieval system. The experiments were run on the COREL standard database, and the union of the three models was found to give better results than retrieval with the individual models. The newly developed model and the union of the given models also give better results than the existing system named cluster-based retrieval of images by unsupervised learning (CLUE).
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES (cscpconf)
Image retrieval is an active area, and we propose a new approach to retrieve images from a large image database. In this context, we propose an algorithm that represents images using divisive and partition-based clustering approaches. The HSV color components and the Haar wavelet transform are used to extract image features, which are then used to segment an image into objects. For segmentation, we use a modified k-means clustering algorithm to group similar pixels into K groups with cluster centers. To modify k-means, we propose a divisive clustering algorithm that determines the number of clusters and feeds it back to k-means to obtain significant object groups. In addition, we discuss a similarity distance measure using a threshold value and object uniqueness to quantify the results.
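The pixel-grouping step can be sketched with plain Lloyd's k-means over per-pixel feature vectors (e.g. HSV triples); this is a minimal sketch of the standard algorithm, not the paper's modified variant, which additionally chooses k via divisive clustering:

```python
import numpy as np

def kmeans_pixels(pixels, k, iters=20, seed=0):
    # Plain Lloyd's k-means: assign each pixel to its nearest center,
    # then move each center to the mean of its assigned pixels.
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

Pixels sharing a label form one candidate object group in the segmented image.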
Dataset Pre-Processing and Artificial Augmentation, Network Architecture and ... (INFOGAIN PUBLICATION)
Training a Convolutional Neural Network (CNN) based classifier depends on a large number of factors. These involve tasks such as aggregating a suitable dataset, arriving at a suitable CNN architecture, processing the dataset, and selecting the training parameters to reach the desired classification results. This review covers the pre-processing and dataset augmentation techniques used in various CNN-based classification studies. In many classification problems, the quality of the dataset determines how well the CNN trains, and this quality is judged by the variation in the data for every class. Such a ready-made dataset is rarely available, for many practical reasons. A large dataset is also recommended, and again one is not usually available directly. In some cases the noise present in the dataset is not useful for training, while in others researchers prefer to add noise to certain images to make the network less vulnerable to unwanted variations. Hence, researchers use artificial digital imaging techniques to derive variations in the dataset and to remove or add noise. This paper therefore collects state-of-the-art works that applied pre-processing and artificial dataset augmentation before training. The step after data augmentation is training, which includes proper selection of several parameters and a suitable CNN architecture. The paper also covers the network characteristics, dataset characteristics, and training methodologies used in biomedical imaging, vision modules of autonomous driverless cars, and a few general vision-based applications.
Contourlet Transform Based Method For Medical Image Denoising (CSCJournals)
Noise is an important factor in medical image quality: a noisy medical image does not provide the useful information needed for diagnosis, since diagnosis relies on the normal or abnormal information the image conveys. In this paper, we propose a denoising algorithm for medical images based on the Contourlet transform, an extension of the wavelet transform to two dimensions using multiscale and directional filter banks. The Contourlet transform retains the multiscale and time-frequency localization properties of wavelets while also providing a high degree of directionality. To verify its denoising performance, two kinds of noise are added to our samples: Gaussian noise and speckle noise. A soft-thresholding value for the Contourlet coefficients of the noisy image is computed. Finally, the experimental results of the proposed algorithm are compared with those of the wavelet transform, and we find that the proposed algorithm achieves acceptable results compared with the wavelet transform.
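The soft-thresholding step applied to the transform coefficients is the same operator regardless of the transform; a minimal sketch:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Soft thresholding: shrink every coefficient magnitude by t,
    # clamping small coefficients (mostly noise) to zero.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Applied to Contourlet (or wavelet) coefficients of a noisy image, this suppresses small noise-dominated coefficients while preserving the sign and most of the magnitude of strong edge coefficients.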
An Improved Way of Segmentation and Classification of Remote Sensing Images U... (ijsrd.com)
The significance of images lies in digital image processing, which stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception. The objective of this research work is to define the meaning and scope of image segmentation for remote sensing images, which are subsequently classified with statistical measures. In this paper, a kernel-induced Possibilistic C-means clustering algorithm is implemented for classifying remote sensing image data using image features. The proposed work shows that this algorithm segments and classifies the images with good accuracy according to statistical metrics.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Computational Engineering Research (IJCER) is an international, monthly, English-language online journal. It publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
IRJET- Image Enhancement using Various Discrete Wavelet Transformation Fi... (IRJET Journal)
The document discusses various image enhancement techniques using discrete wavelet transformation (DWT) methods. It analyzes existing image enhancement and super-resolution methods and identifies issues like loss of pixels and difficulty determining the best technique. The research aims to propose a comparative analysis of commonly used super-resolution techniques in the wavelet domain. Techniques like wavelet zero padding, stationary wavelet transform, discrete wavelet transform, and dual tree complex wavelet transform are described and their performance is compared by calculating PSNR values of output images from different techniques processed through MATLAB. Experimental results on various benchmark images show that discrete wavelet transform combined with interpolation methods generates higher PSNR values, meaning better quality enhanced images.
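The PSNR metric used to compare these techniques is straightforward to compute; a minimal sketch:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB: higher means the test image
    # is closer to the reference.
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR; a maximally wrong 8-bit image gives 0 dB, and typical "good" enhancement results land in the 30 to 50 dB range.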
With the development of multimedia technology, content-based image retrieval (CBIR) has become one of the prominent approaches to retrieving images from a large database collection. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at handling all the different kinds of natural images. Thus an intensive analysis of selected color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. Image extraction includes feature description and feature extraction. In this paper, we propose a feature extraction technique based on the Color Layout Descriptor (CLD), the Gray Level Co-occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation, which retrieves matching images based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing results of the proposed technique are calculated and compared with each of the individual features.
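One of the texture descriptors mentioned here, the Gray Level Co-occurrence Matrix, can be sketched directly (the offset and the contrast feature chosen below are illustrative, not the paper's exact configuration):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    # GLCM: counts how often gray level i co-occurs with gray level j
    # at the pixel offset (dy, dx).
    mat = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[img[y, x], img[y + dy, x + dx]] += 1
    return mat

def glcm_contrast(mat):
    # One classic Haralick texture feature derived from the GLCM.
    p = mat / mat.sum()
    i, j = np.indices(mat.shape)
    return float(((i - j) ** 2 * p).sum())
```

Several such scalar features (contrast, energy, homogeneity, correlation) are typically concatenated to form the texture part of the feature vector.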
This document discusses image fusion techniques for enhancing images. It begins with an introduction to image fusion, which combines relevant information from multiple images of the same scene into a single enhanced image. It then discusses discrete wavelet transform (DWT) based image fusion in more detail. Several image fusion rules for combining coefficient data during the DWT process are described, including maximum selection, weighted average, and window-based verification schemes. The importance of image fusion for applications like object identification, classification, and change detection is highlighted. Finally, the document reviews related work on different image fusion methods and algorithms proposed by other researchers.
IRJET - Underwater Image Enhancement using PCNN and NSCT Fusion (IRJET Journal)
This document discusses techniques for enhancing underwater images that have been degraded due to scattering and absorption in the water medium. It proposes a new method for color image fusion using Non-Subsampled Contourlet Transform (NSCT) and Pulse Coupled Neural Network (PCNN). NSCT is used to decompose the image into sub-bands, while PCNN is used to fuse the high frequency sub-band coefficients. The proposed method is shown to outperform other fusion methods in objective quality assessment metrics. Various other underwater image enhancement techniques are also discussed, including wavelength compensation, multi-band fusion, image mode filtering, and approaches using neural networks like convolutional neural networks.
This document presents a study on medial axis transformation (MAT) based skeletonization of image patterns using image processing techniques. It discusses how the MAT of an image can be extracted by first computing the Euclidean distance transform of the binary image. Local maxima in the distance transform image correspond to the MAT. Several performance evaluation metrics for analyzing skeletonized images are also introduced, such as connectivity number, thinness measurement and sensitivity. The technique is demonstrated on sample images and results show it can effectively extract the skeleton with good computational speed.
An improved image compression algorithm based on daubechies wavelets with ar... (Alexander Decker)
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
Improving Graph Based Model for Content Based Image Retrieval (IRJET Journal)
This document summarizes a research paper that proposes improvements to a graph-based model called Manifold Ranking (MR) for content-based image retrieval. Specifically, it introduces a novel scalable graph-based ranking model called Efficient Manifold Ranking (EMR) that addresses shortcomings of MR in scalable graph construction and efficient ranking computation. The proposed EMR model builds an anchor graph on the database instead of a traditional k-nearest neighbor graph, and designs a new form of adjacency matrix to speed up the ranking computation. Experimental results on large image databases demonstrate that EMR is effective for real-world image retrieval applications.
Image Contrast Enhancement Approach using Differential Evolution and Particle... (IRJET Journal)
This document presents a method for enhancing the contrast of gray-scale images using differential evolution optimization. It proposes using a parameterized intensity transformation function to modify pixel gray levels, with the goal of maximizing image contrast. The differential evolution algorithm is used to optimize the parameters of the transformation function. Experimental results applying this method are compared to other contrast enhancement techniques like histogram equalization and particle swarm optimization. The document provides background on image enhancement techniques, a literature review of previous work applying evolutionary algorithms like particle swarm optimization to image enhancement, and details of the proposed differential evolution approach, including the transformation function and fitness function used to evaluate contrast.
IRJET- Design and Implementation of ATM Security System using Vibration Senso... (IRJET Journal)
This document discusses implementing bi-histogram equalization for contrast enhancement on the Android platform. It begins with an introduction to histogram equalization and its drawback of changing image brightness. It then presents bi-histogram equalization as an approach to overcome this by decomposing the image into two sub-images based on the mean and equalizing them independently to preserve the mean brightness. The paper outlines implementing various steps like image acquisition, preprocessing, and bi-histogram equalization on Android. It shows output images with enhanced visibility compared to the originals, avoiding the flattening property of standard histogram equalization.
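The brightness-preserving split-and-equalize idea can be sketched as follows (a BBHE-style simplification under my own assumptions about the output ranges, not the paper's exact Android implementation):

```python
import numpy as np

def bi_histogram_equalize(img):
    # Split the image at its mean gray level and equalize each sub-image
    # into its own output range, roughly preserving mean brightness.
    mean = int(img.mean())
    out = np.empty_like(img)
    for lo, hi, mask in ((0, mean, img <= mean), (mean + 1, 255, img > mean)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=256, range=(0, 256))
        cdf = hist.cumsum() / vals.size
        out[mask] = np.clip(lo + cdf[img[mask]] * (hi - lo), 0, 255).astype(img.dtype)
    return out
```

Because the lower sub-image is mapped into [0, mean] and the upper into [mean+1, 255], pixels never cross the mean, which is what keeps the overall brightness close to the original.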
This document describes an image fusion method using pyramidal decomposition. It proposes extracting fine details from input images using guided filtering and fusing the base layers of images across multiple exposures or focal points using a multiresolution pyramid approach. A weight map is generated considering exposure, contrast, and saturation to guide the fusion of base layers. The fused base layer is then combined with extracted fine details to produce a detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions of the input images. It is argued that this method can effectively fuse images from different exposures or focal points without introducing artifacts.
IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat... (IRJET Journal)
This document compares an optimized block matching algorithm to the four step search algorithm. It first provides background on block matching algorithms and motion estimation techniques used in video compression. It then describes the existing four step search algorithm and its process of checking 17-27 points to find the best motion vector match. The document proposes a new simpler and more efficient four step search algorithm that separates the search area into quadrants. It checks 3 points in the first phase to select a quadrant, then finds the lowest cost point in the second phase to set as the new origin, reducing computational complexity compared to the standard four step search.
ROI Based Image Compression in Baseline JPEG (IJERA Editor)
To improve the efficiency of the standard JPEG compression algorithm, an adaptive quantization technique with support for region-of-interest compression is introduced. Since this is a lossy compression technique, the less important bits are discarded and cannot be restored during decompression. Adaptive quantization is carried out by applying two different quantization levels to the picture as specified by the user. The user can select any part of the image and enter the required compression quality. If the subject is more important to the user than the background, more quality is given to the subject than to the background, and vice versa. Adaptive quantization in baseline sequential JPEG is carried out by applying the Forward Discrete Cosine Transform (FDCT) and the two user-provided quantization levels for compression, thereby achieving region-of-interest compression, and the Inverse Discrete Cosine Transform (IDCT) for decompression. This technique ensures that memory is used efficiently. Moreover, we have specifically designed it for clearly identifying defects in leather samples.
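The core of the scheme, quantizing DCT blocks with different step sizes inside and outside the ROI, can be sketched minimally (the scale factors here are illustrative placeholders, not the paper's values):

```python
import numpy as np

def adaptive_quantize(block, q_table, in_roi, q_roi=1.0, q_bg=4.0):
    # ROI-aware quantization: finer steps inside the user-selected
    # region, coarser steps in the background.
    scale = q_roi if in_roi else q_bg
    return np.round(block / (q_table * scale)).astype(int)
```

Coarser background quantization drives more coefficients toward zero there, which is where the extra compression comes from while the ROI keeps its detail.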
Review of Diverse Techniques Used for Effective Fractal Image Compression (IRJET Journal)
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWT-FIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
Interpolation Technique using Non Linear Partial Differential Equation with E... (CSCJournals)
This document presents a new image zooming algorithm that combines edge directed bicubic interpolation and a non-linear partial differential equation (PDE) method. The algorithm first uses edge directed bicubic interpolation to enlarge the image and fill empty pixels, producing a high resolution image. This noisy image is then input to a fourth-order PDE model for noise removal. Simulation results on test images show the proposed method achieves higher peak signal-to-noise ratios and structural similarity indices than other interpolation methods like bilinear and locally adaptive zooming. The method reduces artifacts and blurring near edges in zoomed images.
This document summarizes a paper presented at the 2nd International Conference on Current Trends in Engineering and Management. The paper proposes using discrete wavelet transform techniques for pixel-based fusion of multi-focus images. It discusses registering the images and then applying pixel-level fusion methods like average, minimum and maximum approaches. It also introduces a wavelet-based fusion method that decomposes images into different frequency bands for fusion. The goal is to produce a single fused image that has the maximum information and focus from the input images.
IMAGE DE-NOISING USING DEEP NEURAL NETWORK (aciijournal)
A deep neural network, as part of a deep learning algorithm, is a state-of-the-art approach for finding higher-level representations of input data, and it has been applied successfully to many practical and challenging learning problems. The primary goal of deep learning is to use large amounts of data to help solve a given machine learning task. We propose a methodology for image de-noising based on this model and train it on a large image database to obtain the experimental output. The results show the robustness and efficiency of our algorithm.
A Review of Image Contrast Enhancement Techniques (IRJET Journal)
This document reviews several techniques for image contrast enhancement. It begins with an introduction to image enhancement and its goals of improving visual appearance and clarity. The paper then surveys key approaches for contrast enhancement including histogram equalization, discrete wavelet transform, and other spatial and frequency domain methods. Finally, the conclusion is that contrast enhancement using digital image processing continues to be an active area of research that can help solve problems across many fields involving image analysis.
K nearest neighbor classification over semantically secure encrypted... (Shakas Technologies)
Data mining has wide applications in many areas such as banking, medicine, scientific research, and government agencies. Classification is one of the commonly used tasks in data mining applications.
User-Centric Evaluation of a K-Furthest Neighbor Collaborative Filtering Reco... (Alan Said)
This document summarizes a study on a new recommendation algorithm called K-Furthest Neighbor (KFN) which recommends items that are disliked by users dissimilar to the target user. The study found that:
1) KFN performed worse than the standard K-Nearest Neighbor algorithm in offline evaluation metrics but was perceived as more useful by users in online evaluations.
2) Users found the recommendations from KFN to be less obvious and recognizable but similarly serendipitous and useful as the standard algorithm.
3) Recommending items disliked by dissimilar users leads to more diverse recommendations while maintaining comparable overall usefulness, even if standard offline metrics say otherwise.
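The neighborhood selection in KFN inverts the usual KNN step; a minimal sketch under the assumption of a precomputed user-user similarity matrix:

```python
import numpy as np

def k_furthest_neighbors(sim, user, k):
    # Pick the k users *least* similar to `user`; KFN then recommends
    # items that this dissimilar neighborhood dislikes.
    scores = sim[user].astype(float).copy()
    scores[user] = np.inf  # never select the target user themself
    return np.argsort(scores)[:k]
```

Everything downstream (rating aggregation, ranking) can stay as in standard KNN collaborative filtering; only the neighborhood and the "disliked items" inversion change.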
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
IRJET- Image Enhancement using Various Discrete Wavelet Transformation Fi...IRJET Journal
The document discusses various image enhancement techniques using discrete wavelet transformation (DWT) methods. It analyzes existing image enhancement and super-resolution methods and identifies issues like loss of pixels and difficulty determining the best technique. The research aims to propose a comparative analysis of commonly used super-resolution techniques in the wavelet domain. Techniques like wavelet zero padding, stationary wavelet transform, discrete wavelet transform, and dual tree complex wavelet transform are described and their performance is compared by calculating PSNR values of output images from different techniques processed through MATLAB. Experimental results on various benchmark images show that discrete wavelet transform combined with interpolation methods generates higher PSNR values, meaning better quality enhanced images.
Content-based image retrieval (CBIR) is one of the prominent areas of multimedia systems research, concerned with retrieving images from a large database. The feature vectors of the query image are compared with the feature vectors of the database images to find matching images. It is widely observed that no single algorithm is effective at extracting features from all kinds of natural images. An intensive analysis of selected color, texture and shape extraction techniques is therefore carried out to identify an efficient CBIR technique suited to a particular type of image. Image analysis here comprises feature description and feature extraction. In this paper, we propose a feature extraction technique combining the Color Layout Descriptor (CLD), the Gray Level Co-Occurrence Matrix (GLCM) and Marker-Controlled Watershed Segmentation, which retrieves matching images based on similarity of color, texture and shape within the database. For performance analysis, the image retrieval times of the proposed technique are calculated and compared with those of each individual feature.
This document discusses image fusion techniques for enhancing images. It begins with an introduction to image fusion, which combines relevant information from multiple images of the same scene into a single enhanced image. It then discusses discrete wavelet transform (DWT) based image fusion in more detail. Several image fusion rules for combining coefficient data during the DWT process are described, including maximum selection, weighted average, and window-based verification schemes. The importance of image fusion for applications like object identification, classification, and change detection is highlighted. Finally, the document reviews related work on different image fusion methods and algorithms proposed by other researchers.
IRJET - Underwater Image Enhancement using PCNN and NSCT FusionIRJET Journal
This document discusses techniques for enhancing underwater images that have been degraded due to scattering and absorption in the water medium. It proposes a new method for color image fusion using Non-Subsampled Contourlet Transform (NSCT) and Pulse Coupled Neural Network (PCNN). NSCT is used to decompose the image into sub-bands, while PCNN is used to fuse the high frequency sub-band coefficients. The proposed method is shown to outperform other fusion methods in objective quality assessment metrics. Various other underwater image enhancement techniques are also discussed, including wavelength compensation, multi-band fusion, image mode filtering, and approaches using neural networks like convolutional neural networks.
This document presents a study on medial axis transformation (MAT) based skeletonization of image patterns using image processing techniques. It discusses how the MAT of an image can be extracted by first computing the Euclidean distance transform of the binary image. Local maxima in the distance transform image correspond to the MAT. Several performance evaluation metrics for analyzing skeletonized images are also introduced, such as connectivity number, thinness measurement and sensitivity. The technique is demonstrated on sample images and results show it can effectively extract the skeleton with good computational speed.
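The pipeline just described (Euclidean distance transform, then ridge extraction) can be sketched in a few lines. The following illustration computes a brute-force distance transform of a small binary pattern; the local maxima of this map correspond to the medial axis. The helper name is mine, not the paper's:

```python
import numpy as np

def euclidean_distance_transform(binary):
    """Brute-force EDT: for each foreground pixel, the distance to the
    nearest background pixel (fine for tiny images, O(n^2) overall)."""
    fg = np.argwhere(binary)       # foreground coordinates
    bg = np.argwhere(~binary)      # background coordinates
    out = np.zeros(binary.shape)
    for y, x in fg:
        d = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1))
        out[y, x] = d.min()
    return out

# A solid rectangle of foreground surrounded by background
img = np.zeros((7, 9), dtype=bool)
img[1:6, 1:8] = True
dist = euclidean_distance_transform(img)
print(dist[3])  # the middle row holds the largest distances (the ridge)
```

In practice one would use an optimized transform such as `scipy.ndimage.distance_transform_edt` instead of the brute-force loop.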
An improved image compression algorithm based on daubechies wavelets with ar...Alexander Decker
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
Improving Graph Based Model for Content Based Image RetrievalIRJET Journal
This document summarizes a research paper that proposes improvements to a graph-based model called Manifold Ranking (MR) for content-based image retrieval. Specifically, it introduces a novel scalable graph-based ranking model called Efficient Manifold Ranking (EMR) that addresses shortcomings of MR in scalable graph construction and efficient ranking computation. The proposed EMR model builds an anchor graph on the database instead of a traditional k-nearest neighbor graph, and designs a new form of adjacency matrix to speed up the ranking computation. Experimental results on large image databases demonstrate that EMR is effective for real-world image retrieval applications.
Image Contrast Enhancement Approach using Differential Evolution and Particle...IRJET Journal
This document presents a method for enhancing the contrast of gray-scale images using differential evolution optimization. It proposes using a parameterized intensity transformation function to modify pixel gray levels, with the goal of maximizing image contrast. The differential evolution algorithm is used to optimize the parameters of the transformation function. Experimental results applying this method are compared to other contrast enhancement techniques like histogram equalization and particle swarm optimization. The document provides background on image enhancement techniques, a literature review of previous work applying evolutionary algorithms like particle swarm optimization to image enhancement, and details of the proposed differential evolution approach, including the transformation function and fitness function used to evaluate contrast.
IRJET- Design and Implementation of ATM Security System using Vibration Senso...IRJET Journal
This document discusses implementing bi-histogram equalization for contrast enhancement on the Android platform. It begins with an introduction to histogram equalization and its drawback of changing image brightness. It then presents bi-histogram equalization as an approach to overcome this by decomposing the image into two sub-images based on the mean and equalizing them independently to preserve the mean brightness. The paper outlines implementing various steps like image acquisition, preprocessing, and bi-histogram equalization on Android. It shows output images with enhanced visibility compared to the originals, avoiding the flattening property of standard histogram equalization.
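The decomposition described above (split at the mean, equalize each sub-image independently) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's Android implementation; the function name is mine:

```python
import numpy as np

def bbhe(img):
    """Bi-histogram equalization sketch: split at the mean gray level and
    equalize each sub-image over its own output range, which keeps the
    overall brightness near the original mean."""
    img = np.asarray(img, dtype=np.uint8)
    m = int(img.mean())
    out = np.empty_like(img)
    lower = img <= m
    for mask, lo, hi in ((lower, 0, m), (~lower, m + 1, 255)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=256)
        cdf = np.cumsum(hist) / vals.size            # CDF of this sub-image only
        out[mask] = np.round(lo + (hi - lo) * cdf[vals]).astype(np.uint8)
    return out

img = np.array([[10, 10, 10, 200],
                [10, 10, 200, 200]], dtype=np.uint8)
enh = bbhe(img)
print(sorted(set(int(v) for v in enh.ravel())))  # the two halves stretch apart
```

Because the dark pixels can never cross the mean boundary, the mean brightness moves far less than under standard histogram equalization.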
This document describes an image fusion method using pyramidal decomposition. It proposes extracting fine details from input images using guided filtering and fusing the base layers of images across multiple exposures or focal points using a multiresolution pyramid approach. A weight map is generated considering exposure, contrast, and saturation to guide the fusion of base layers. The fused base layer is then combined with extracted fine details to produce a detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions of the input images. It is argued that this method can effectively fuse images from different exposures or focal points without introducing artifacts.
IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat...IRJET Journal
This document compares an optimized block matching algorithm to the four step search algorithm. It first provides background on block matching algorithms and motion estimation techniques used in video compression. It then describes the existing four step search algorithm and its process of checking 17-27 points to find the best motion vector match. The document proposes a new simpler and more efficient four step search algorithm that separates the search area into quadrants. It checks 3 points in the first phase to select a quadrant, then finds the lowest cost point in the second phase to set as the new origin, reducing computational complexity compared to the standard four step search.
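For context, block matching minimizes a cost such as the sum of absolute differences (SAD) over candidate displacements. The sketch below shows the SAD cost and the exhaustive-search baseline that schemes like the four-step search approximate with far fewer checked points; the function names and toy frames are mine:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the matching cost used by block search."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_motion_vector(ref, cur, top, left, size=4, radius=2):
    """Exhaustive search for the displacement minimizing SAD within a window."""
    block = cur[top:top + size, left:left + size]
    best, best_cost = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            cost = sad(ref[y:y + size, x:x + size], block)
            if best_cost is None or cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

# A bright block shifted right by one pixel between frames
ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:6, 2:6] = 200
cur = np.zeros((8, 8), dtype=np.uint8)
cur[2:6, 3:7] = 200
mv, cost = best_motion_vector(ref, cur, top=2, left=3)
print(mv, cost)  # displacement (0, -1) recovers the shift with zero cost
```

Fast search algorithms trade a small risk of missing the global minimum of this cost for a large reduction in the number of candidate points evaluated.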
ROI Based Image Compression in Baseline JPEGIJERA Editor
To improve the efficiency of the standard JPEG compression algorithm, an adaptive quantization technique that supports region-of-interest compression is introduced. Since this is a lossy compression technique, the less important bits are discarded and are not restored during decompression. Adaptive quantization is carried out by applying two different quantizations to the picture, as specified by the user. The user can select any part of the image and enter the required quality for compression. If, according to the user, the subject is more important than the background, then more quality is given to the subject than to the background, and vice versa. Adaptive quantization in baseline sequential JPEG is carried out by applying the Forward Discrete Cosine Transform (FDCT) and the two user-provided quantizations for compression, thereby achieving region-of-interest compression, and the Inverse Discrete Cosine Transform (IDCT) for decompression. This technique makes sure that memory is used efficiently. Moreover, we have specifically designed it for clearly identifying defects in leather samples.
Review of Diverse Techniques Used for Effective Fractal Image CompressionIRJET Journal
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
Interpolation Technique using Non Linear Partial Differential Equation with E...CSCJournals
This document presents a new image zooming algorithm that combines edge directed bicubic interpolation and a non-linear partial differential equation (PDE) method. The algorithm first uses edge directed bicubic interpolation to enlarge the image and fill empty pixels, producing a high resolution image. This noisy image is then input to a fourth-order PDE model for noise removal. Simulation results on test images show the proposed method achieves higher peak signal-to-noise ratios and structural similarity indices than other interpolation methods like bilinear and locally adaptive zooming. The method reduces artifacts and blurring near edges in zoomed images.
This document summarizes a paper presented at the 2nd International Conference on Current Trends in Engineering and Management. The paper proposes using discrete wavelet transform techniques for pixel-based fusion of multi-focus images. It discusses registering the images and then applying pixel-level fusion methods like average, minimum and maximum approaches. It also introduces a wavelet-based fusion method that decomposes images into different frequency bands for fusion. The goal is to produce a single fused image that has the maximum information and focus from the input images.
IMAGE DE-NOISING USING DEEP NEURAL NETWORKaciijournal
Deep neural networks, as part of deep learning, are a state-of-the-art approach for finding higher-level representations of input data and have been applied successfully to many practical and challenging learning problems. The primary goal of deep learning is to use large amounts of data to help solve a given machine learning task. We propose a methodology for image de-noising based on this model and train it on a large image database to obtain the experimental output. The results show the robustness and efficiency of our algorithm.
A Review of Image Contrast Enhancement TechniquesIRJET Journal
This document reviews several techniques for image contrast enhancement. It begins with an introduction to image enhancement and its goals of improving visual appearance and clarity. The paper then surveys key approaches for contrast enhancement including histogram equalization, discrete wavelet transform, and other spatial and frequency domain methods. Finally, the conclusion is that contrast enhancement using digital image processing continues to be an active area of research that can help solve problems across many fields involving image analysis.
K nearest neighbor classification over semantically secure encryptedShakas Technologies
Data Mining has wide applications in many areas such as banking, medicine, scientific research and among government agencies. Classification is one of the commonly used tasks in data mining applications.
User-Centric Evaluation of a K-Furthest Neighbor Collaborative Filtering Reco...Alan Said
This document summarizes a study on a new recommendation algorithm called K-Furthest Neighbor (KFN) which recommends items that are disliked by users dissimilar to the target user. The study found that:
1) KFN performed worse than the standard K-Nearest Neighbor algorithm in offline evaluation metrics but was perceived as more useful by users in online evaluations.
2) Users found the recommendations from KFN to be less obvious and recognizable but similarly serendipitous and useful as the standard algorithm.
3) Recommending items disliked by dissimilar users leads to more diverse recommendations while maintaining comparable overall usefulness, even if standard offline metrics say otherwise.
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ...Hadi Santoso
Face detection is a very important biometric application in the field of image
analysis and computer vision. The basic face detection method is AdaBoost
algorithm with a cascading Haar-like feature classifiers based on the
framework proposed by Viola and Jones. Real-time multiple-face detection,
for instance on CCTVs with high resolution, is a computation-intensive
procedure. If the procedure is performed sequentially, an optimal real-time
performance will not be achieved. In this paper we propose an architectural
design for a parallel and multiple-face detection technique based on Viola
and Jones' framework. To do this systematically, we look at the problem
from 4 points of view, namely: data processing taxonomy, parallel memory
architecture, the model of parallel programming, as well as the design of
parallel program. We also build a prototype of the proposed parallel
technique and conduct a series of experiments to measure the speedup gained.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2014-member-meeting-scottkrig
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Scott Krig, author of the book "Computer Vision Metrics: Survey, Taxonomy, and Analysis," delivers the presentation "Introduction to Feature Descriptors in Vision: From Haar to SIFT" at the September 2014 Embedded Vision Alliance Member Meeting.
Data.Mining.C.6(II).classification and predictionMargaret Wang
The document summarizes different machine learning classification techniques including instance-based approaches, ensemble approaches, co-training approaches, and partially supervised approaches. It discusses k-nearest neighbor classification and how it works. It also explains bagging, boosting, and AdaBoost ensemble methods. Co-training uses two independent views to label unlabeled data. Partially supervised approaches can build classifiers using only positive and unlabeled data.
Data Stream Outlier Detection Algorithm Hamza Aslam
This document presents a new data stream outlier detection algorithm called SODRNN, which is based on reverse k nearest neighbors. It uses a sliding window model to detect anomalies in the current window by performing outlier queries. The algorithm consists of a Stream Manager procedure that efficiently updates the window with insertions and deletions by scanning the window only once. It also includes a Query Manager procedure that can detect concept drift. Experimental results on both synthetic and real datasets show that SODRNN is effective and efficient at detecting outliers in data streams.
The document discusses the K-nearest neighbor (K-NN) classifier, a machine learning algorithm where data is classified based on its similarity to its nearest neighbors. K-NN is a lazy learning algorithm that assigns data points to the most common class among its K nearest neighbors. The value of K impacts the classification, with larger K values reducing noise but possibly oversmoothing boundaries. K-NN is simple, intuitive, and can handle non-linear decision boundaries, but has disadvantages such as computational expense and sensitivity to K value selection.
This document discusses k-nearest neighbor (k-NN) machine learning algorithms. It explains that k-NN is an instance-based, lazy learning method that stores all training data and classifies new examples based on their similarity to stored examples. The key steps are: (1) calculate the distance between a new example and all stored examples, (2) find the k nearest neighbors, (3) assign the new example the most common class of its k nearest neighbors. Important considerations include the distance metric, value of k, and voting scheme for classification.
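The three steps listed above can be sketched directly; the following is a minimal illustration of k-NN classification (function name and toy data are mine):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance, following the three steps above."""
    dists = np.linalg.norm(train_X - query, axis=1)   # step 1: all distances
    nearest = np.argsort(dists)[:k]                   # step 2: k closest indices
    votes = Counter(train_y[i] for i in nearest)      # step 3: majority vote
    return votes.most_common(1)[0][0]

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, np.array([0.8, 0.8])))  # near the "a" cluster
print(knn_predict(X, y, np.array([5.2, 5.1])))  # near the "b" cluster
```

Because all work happens at query time (the "lazy" part), every prediction costs a full pass over the training set, which is the computational expense the summary mentions.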
The document is a project report on developing an E-Property system for Mascot Software Services Pvt Ltd. It includes an introduction to the company, description of the existing manual property registration system and need for a new system. It also describes the scope, hardware requirements, software requirements and technologies used like ASP.NET, C# and SQL Server for developing the proposed online E-Property system.
A project report on commodity market with special reference to gold at karvy...Babasab Patil
This document discusses a study of the commodity market with a focus on gold. It provides an overview of Karvy Commodities Broking Limited and the services it offers. The study examines the gold commodity futures market in India, how it works, and the participants involved. It analyzes the impact of the spot gold market on future gold prices and the various economic factors that affect gold future prices. The study finds a positive correlation between spot and future gold prices. It suggests that Karvy provide more awareness and education on commodity trading to investors in order to attract more customers.
Approximate nearest neighbor methods and vector models – NYC ML meetupErik Bernhardsson
Nearest neighbors refers to something that is conceptually very simple. For a set of points in some space (possibly many dimensions), we want to find the closest k neighbors quickly.
This presentation covers a library called Annoy, built by me, that helps you do (approximate) nearest neighbor queries in high-dimensional spaces. We go through vector models, how to measure similarity, and why nearest neighbor queries are useful.
Project report on ONLINE REAL ESTATE BUSINESSDivyesh Shah
A project report on 'online real estate' will help you to understand the modeling diagrams for this project and all type of information related to this project
This document appears to be a template for the appendices section of a project report submitted by a student. It includes sample cover page, title page, certificate, acknowledgements, executive summary, table of contents, list of tables, and sections for the objective and scope, limitations, company profile, research methodology, data tabulation, analysis, observations and findings, conclusions, recommendations, bibliography, and appendices. Each appendix provides headings and formatting for the different components typically included in a student project report.
Computer Science Investigatory Project Class 12Self-employed
The document describes a project report submitted by Rahul Kushwaha on a railway ticket reservation system. It includes certificates from the guide and examiner approving the report. The report contains sections describing the header files used, files generated, the working of the program, the coding, output screens, and conclusion. It was submitted for a computer science class and thanks the guide, principal, parents and classmates for their support.
This feasibility report analyzes a proposed waste water system project. It recommends the project proceed based on identified needs in the community and project viability. Key points include: the existing system is deficient; a new system is needed to serve current and projected population; and the estimated capital costs and financing plan make the project economically feasible. The report provides background on the area's needs, outlines the proposed system components, and recommends next steps for further investigation and implementation.
The field of Human Resource Management is developing very fast, and every department of human activity is realizing its importance in the smooth functioning of the organization. Innovative techniques are developed to improve workplace culture so that employees are motivated to give their best to the organization and to attain job satisfaction. Hence, it is important to implement the latest human resource practices in the organization.
The Latest Techniques in the field of Human Resource Development are Employees for Lease, Moon Lighting by Employees, Dual Career Group, Work Life Balance (flexi time & flexi work), Training & Development, Management Participation in Employees’ organization, Employee’s Proxy, Human Resources Accounting, Organizational Politics, Exit Policy & Practice, etc.
This project is about WORK LIFE BALANCE, one of the latest techniques in the field of human resources, and examines how the organization is adopting new trends in the HR field.
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr...CSCJournals
The thirst for better and faster retrieval techniques has always fuelled research in content based image retrieval (CBIR). The paper presents innovative CBIR techniques based on feature vectors formed from fractional coefficients of transformed images using the Discrete Cosine, Walsh, Haar and Kekre’s transforms. The energy compaction of these transforms into the higher-order coefficients is exploited to greatly reduce the feature vector size per image by taking fractional coefficients of the transformed image. The feature vectors are extracted from the transformed image in several ways: first by considering all the coefficients, and then by taking fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.006% of the complete transformed image). The four transforms are applied to gray image equivalents and to the colour components of images to extract Gray and RGB feature sets respectively. Instead of using all coefficients of the transformed images as the feature vector for image retrieval, these reduced coefficient sets for gray as well as RGB feature vectors are used, resulting in better performance and lower computation. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories. For each proposed CBIR technique, 55 queries (5 per category) are fired at the database, and net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall) with fractional coefficients compared to the complete transform of the image, at reduced computation, resulting in faster retrieval.
Finally, Kekre’s transform surpasses all other discussed transforms in performance, with the highest precision and recall values for fractional coefficients (6.25% and 3.125% of all coefficients), and computation is lowered by 94.08% as compared to DCT.
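The fractional-coefficient idea above can be sketched for one transform. The following illustration builds an orthonormal DCT-II matrix, applies the 2-D transform, and keeps only the top-left block of coefficients where the energy is compacted; the function names, fraction handling and toy image are mine, not the paper's code:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def fractional_dct_features(img, fraction=0.0625):
    """2-D DCT of a square image, keeping only the top-left fraction of
    coefficients, where the transform compacts most of the energy."""
    n = img.shape[0]
    T = dct_matrix(n)
    coeffs = T @ img @ T.T                          # full 2-D DCT
    keep = max(1, int(n * fraction ** 0.5 + 0.5))   # side of retained square
    return coeffs[:keep, :keep].ravel()

img = np.random.default_rng(0).random((16, 16))
fv = fractional_dct_features(img, fraction=0.0625)
print(fv.size)  # 16 of 256 coefficients, i.e. 6.25%
```

Shrinking the feature vector this way is what cuts both the comparison time per query and the storage per database image.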
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATUREScscpconf
In Content Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for databases, the semantic gap, and retrieving the desired images from huge collections are the keys to improve. A CBIR system analyzes image content for indexing, management, extraction and retrieval via low-level features such as color, texture and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level features that contain perceptual information for human beings. Performance improvements in indexing and retrieval play an important role in providing advanced CBIR services. To overcome the above problems, a new query-by-image technique using a combination of multiple features is proposed. The proposed technique efficiently sifts through the dataset of images to retrieve semantically similar images.
Content Based Video Retrieval in Transformed Domain using Fractional Coeffici...CSCJournals
With the development of multimedia and growing databases, there is a huge demand for video retrieval systems. Because of this, there has been a shift from text-based retrieval systems to content-based retrieval systems. The selection of extracted features plays an important role in content-based video retrieval. Good feature selection also allows the time and space costs of the retrieval process to be reduced. Different methods [1,2,3] have been proposed to develop video retrieval systems that achieve better performance in terms of accuracy.
The proposed technique uses transforms to extract the features. The used transforms are Discrete Cosine, Walsh, Haar, Kekre, Discrete Sine, Slant and Discrete Hartley transforms. The benefit of energy compaction of transforms in higher coefficients is taken to reduce the feature vector size by taking fractional coefficients[5] of transformed frames of video. Smaller feature vector size results in less time for comparison of feature vectors resulting in faster retrieval of images. The feature vectors are extracted and coefficients sets are considered as feature vectors (100%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012%, 0.006% and 0.003% of complete transformed coefficients). The database consists of 500 videos spread across 10 categories.
A Hybrid Approach for Content Based Image Retrieval SystemIOSR Journals
This document describes a hybrid approach for content-based image retrieval. It combines several spatial features - row sum, column sum, forward and backward diagonal sums - and histograms to represent images with feature vectors. Euclidean distance is used to calculate similarity between a query image's feature vector and those in the database. The approach is evaluated using precision-recall calculations on different image groups, showing the hybrid method performs best by combining multiple features.
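The hybrid descriptor described above can be sketched directly: concatenate the spatial sums and a histogram into one vector, then rank database images by Euclidean distance. The function names and toy images are mine, not the paper's:

```python
import numpy as np

def spatial_feature_vector(img, bins=16):
    """Concatenate row sums, column sums, forward and backward diagonal
    sums, and a gray histogram into one feature vector."""
    img = np.asarray(img, dtype=float)
    rows = img.sum(axis=1)
    cols = img.sum(axis=0)
    offsets = range(-img.shape[0] + 1, img.shape[1])
    fwd = np.array([np.trace(img, offset=k) for k in offsets])
    bwd = np.array([np.trace(img[:, ::-1], offset=k) for k in offsets])
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return np.concatenate([rows, cols, fwd, bwd, hist])

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

q = np.full((8, 8), 100)
db = [np.full((8, 8), 100), np.full((8, 8), 200)]
dists = [euclidean(spatial_feature_vector(q), spatial_feature_vector(d)) for d in db]
print(np.argmin(dists))  # index of the image closest to the query
```

Combining several cheap projections like this is what lets the hybrid method outperform any single feature in the precision-recall evaluation.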
Query Image Searching With Integrated Textual and Visual Relevance Feedback f...IJERA Editor
Many researchers have studied relevance feedback in the content based image retrieval (CBIR) literature, but no CBIR search engine supports it because of scalability, effectiveness and efficiency issues. In this work, we implement integrated relevance feedback for the retrieval of web images. We concentrate on the integration of both textual feature (TF) and visual feature (VF) based relevance feedback (RF), and we also test them individually. The TF-based RF employs an effective search result clustering (SRC) algorithm to obtain salient phrases. A new user interface (UI) is then proposed to support RF. Experimental results show that the proposed algorithm is scalable, effective and accurate.
Research Inventy : International Journal of Engineering and Scienceinventy
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an online as well as print-version open access journal that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVALcscpconf
A basic group of visual techniques such as color, shape and texture is used in Content Based Image Retrieval (CBIR) to retrieve a query image or sub-region of an image and find similar images in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach for image retrieval is proposed based on features such as Color Histogram, Eigen Values and Match Point. Images from various types of databases are first identified using edge detection techniques. Once an image is identified, it is searched for in the particular database and all related images are displayed; this saves retrieval time. Further, to retrieve the precise query image, any of the three techniques can be used, and a comparison is done with respect to average retrieval time. The Eigen value technique was found to be the best as compared with the other two techniques.
This document describes a content-based image retrieval system that uses 2-D discrete wavelet transform with texture features. It proposes using DWT to reduce image dimensions before extracting texture features from images using gray level co-occurrence matrix. Texture features and Euclidean distance are then used to retrieve similar images from a database. The system is tested on a dataset of 1000 images from 10 classes and achieves an average retrieval accuracy of 89.8%.
Content Based Image Retrieval Using 2-D Discrete Wavelet TransformIOSR Journals
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
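The GLCM texture step mentioned in both summaries can be sketched in isolation (leaving out the DWT stage). The following illustration builds a normalized co-occurrence matrix for one displacement and derives the standard contrast feature from it; the function names and toy textures are mine:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to joint probabilities p(i, j)."""
    img = np.asarray(img)
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(p):
    """GLCM contrast: sum over p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

smooth = np.zeros((6, 6), dtype=int)     # constant texture
stripes = np.tile([0, 3], (6, 3))        # alternating vertical stripes
print(contrast(glcm(smooth)), contrast(glcm(stripes)))
```

A smooth region concentrates mass on the diagonal of the matrix (low contrast), while a striped region puts it far off-diagonal (high contrast), which is what makes these statistics useful retrieval features.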
Amalgamation of contour, texture, color, edge, and spatial features for effic...eSAT Journals
Abstract From the past few years, Content based image retrieval (CBIR) has been a progressive and curious research area. Image retrieval is a process of extraction of the set of images from the available image database resembling the query image. Many CBIR techniques have been proposed for relevant image recoveries. However most of them are based on a particular feature extraction like texture based recovery, color based retrieval system etc. Here in this paper we put forward a novel technique for image recovery based on the integration of contour, texture, color, edge, and spatial features. Contourlet decomposition is employed for the extraction of contour features such as energy and standard deviation. Directionality and anisotropy are the properties of contourlet transformation that makes it an efficient technique. After feature extraction of query and database images, similarity measurement techniques such as Squared Euclidian and Manhattan distance were used to obtain the top N image matches. The simulation results in Matlab show that the proposed technique offers a better image retrieval. Satisfactory precision-recall rate is also maintained in this method. Keywords: Contourlet Decomposition, Local Binary Pattern, Squared Euclidian Distance, Manhattan Distance
Effect of Similarity Measures for CBIR using Bins ApproachCSCJournals
This paper elaborates on the selection of a suitable similarity measure for content-based image retrieval. It contains the analysis done after applying the Minkowski distance from order one to order five, and it also explains the effective use of the correlation distance in the form of the angle cos θ between two vectors. The feature vector database prepared for this experimentation is based on extracting the first four moments from 27 bins formed by partitioning the equalized histograms of the R, G and B planes of an image into three parts; this generates a feature vector of dimension 27. The image database used in this work includes 2000 BMP images from 20 different classes. Three feature vector databases of the four moments, namely mean, standard deviation, skewness and kurtosis, are prepared for the three color intensities (R, G and B) separately. The system then enters the second phase of comparing the query image against the database images, which makes use of the set of similarity measures mentioned above. Results obtained using all distance measures are evaluated using three parameters: PRCP, LSRR and Longest String. These results are then refined and narrowed by combining the three different results of the three colors R, G and B using criterion 3. Analysis of these results with respect to the similarity measures shows the effectiveness of the lower orders of the Minkowski distance compared to the higher orders; the correlation distance also proved to be among the best for these CBIR results.
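The two families of measures this experiment compares can be written down compactly. The helper names below are ours, and the 27-bin moment features themselves are not reproduced; this is only a sketch of the distance side of the study.

```python
import numpy as np

def minkowski(u, v, p):
    """Minkowski distance of order p between two feature vectors;
    p=1 is the City Block distance, p=2 is the Euclidean distance."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return (np.abs(u - v) ** p).sum() ** (1.0 / p)

def cosine_similarity(u, v):
    """Correlation-style measure: cos(theta) between two vectors;
    1.0 means the vectors point in the same direction."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For retrieval, a small Minkowski distance (or a cosine value close to 1.0) between the query's feature vector and a database image's feature vector marks that image as a likely match.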
EFFICIENT APPROACH FOR CONTENT BASED IMAGE RETRIEVAL USING MULTIPLE SVM IN YA...cscpconf
Due to the enormous increase in image database sizes, the need for an image search and indexing tool is crucial. Content-based image retrieval (CBIR) systems have become very popular for browsing, searching and retrieving images in different fields including web-based searching, industry inspection, satellite images, medical diagnosis images, etc. The challenge, however, is in designing a system that returns a set of relevant images; i.e., if the query image represents a horse, then the first images returned from a large image dataset must be horse images. In this paper, we have combined YACBIR [7], a CBIR system that relies on color, texture and points of interest, with a Multiple Support Vector Machines Ensemble to reduce the existing gap between high-level semantics and low-level descriptors and to enhance retrieval performance by minimizing the empirical classification error and maximizing the geometric margin of the classifiers. The experimental results show that the proposed method reaches high recall and precision.
A Study on Image Retrieval Features and Techniques with Various CombinationsIRJET Journal
This document discusses image retrieval techniques for content-based image retrieval systems. It begins with an introduction to the growth of digital image collections and the need for large-scale image retrieval systems. It then reviews different features used for image retrieval, such as color histograms, color moments, color coherence vectors, and discrete wavelet transforms. Edge features and corner features are also discussed. The document concludes that using only one feature type such as color or texture is not sufficient, and the best approach is to extract multiple high-quality features and combine them for image retrieval.
Object Shape Representation by Kernel Density Feature Points Estimator cscpconf
This paper introduces an object shape representation using the Kernel Density Feature Points Estimator (KDFPE). In this method we obtain the density of feature points within defined rings around the centroid of the image, and the Kernel Density Feature Points Estimator is then applied to the resulting vector. KDFPE is invariant to translation, scale and rotation. This method of image representation shows an improved retrieval rate when compared to the Density Histogram of Feature Points (DHFP) method. An analytical justification of the method is given, and the comparison with DHFP demonstrates its robustness.
This document summarizes a research paper that proposes a content-based image retrieval system using cascaded color and texture features. Color features are first extracted from images using statistical measures like mean, standard deviation, energy, entropy, skewness and kurtosis. Similarity to a query image is then measured using distance metrics. The top 150 most similar images are then analyzed to extract Haralick texture features. Similarity is again measured to retrieve the most relevant images. The paper finds that Canberra distance provides better retrieval results than other distance metrics like City Block and Minkowski.
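The distance metrics that the paper compares can be sketched as below; a minimal illustration over hypothetical feature vectors, not the paper's full cascaded color-then-texture pipeline.

```python
import numpy as np

def city_block(u, v):
    """City Block (Manhattan, Minkowski order 1) distance."""
    return np.abs(np.asarray(u, float) - np.asarray(v, float)).sum()

def canberra(u, v):
    """Canberra distance: each absolute difference is normalised by the
    magnitude of its own pair of components, so small-valued features
    (e.g. skewness) are not swamped by large-valued ones (e.g. mean)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.abs(u) + np.abs(v)
    mask = denom > 0          # skip 0/0 terms by convention
    return (np.abs(u - v)[mask] / denom[mask]).sum()
```

The per-component normalisation is one plausible reason Canberra can outperform City Block and Minkowski distances when the feature vector mixes statistics of very different scales, as the cascaded color moments do here.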
Image search using similarity measures based on circular sectorscsandit
With the growing amount of stored image data, the image search and image similarity problems become more and more important. They can be addressed by Content-Based Image Retrieval systems. This paper deals with image search using similarity measures based on the circular sectors method, which is inspired by the functionality of the human eye. The main contribution of the paper is a modified method that increases accuracy by about 8% in comparison with the original approach. The proposed method uses the HSB colour model and a median function for feature extraction, whereas the original approach uses the RGB colour model with a mean function. The implemented method was validated on 10 image categories, where the overall average precision was 67%.
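The sector features described above might be computed along the following lines. This sketch operates on a single channel (e.g. the brightness plane of HSB), and the sector count of 8 is our assumption, not a value taken from the paper.

```python
import numpy as np

def sector_medians(plane, n_sectors=8):
    """Median of one colour channel inside each angular sector around
    the image centre: a circular-sectors feature vector."""
    plane = np.asarray(plane, float)
    h, w = plane.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Angle of every pixel relative to the image centre, in [-pi, pi].
    ang = np.arctan2(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    # Map each angle to one of n_sectors equal wedges.
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return np.array([np.median(plane[sector == k]) for k in range(n_sectors)])
```

Swapping `np.median` for `np.mean` (and RGB planes for HSB) recovers the original approach the paper modifies, which makes the comparison between the two variants easy to reproduce.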
Similar to Extended Performance Appraise of Image Retrieval Using the Feature Vector as Row Mean of Transformed Column Image (20)
The Use of Java Swing’s Components to Develop a WidgetWaqas Tariq
A widget is a kind of application that provides a single service such as a map, news feed, simple clock, battery-life indicator, etc. This kind of interactive software object has been developed to facilitate user interface (UI) design. A user interface function may be implemented using different widgets with the same function. In this article, we present the widget as a platform that is generally used in various applications, such as the desktop, the web browser, and the mobile phone. We also describe a visual menu of Java Swing's components that will be used to build a widget, assuming that we have successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D ImageWaqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, our objective focuses on 3D hand posture estimation based on a single 2D image, with robotic applications in mind. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without any significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; Model-based approach; Gesture recognition; human-computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...Waqas Tariq
Camera mice have been widely used by handicapped persons to interact with computers. Most importantly, a camera mouse must be able to replace all roles of the typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys) when used by a handicapped person, it must allow users to troubleshoot by themselves, and it must eliminate neck fatigue when used over a long period. In this paper, we propose a camera mouse system with a timer as the left-click event and blinking as the right-click event. We also modify the original screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag-and-drop events and another button to call the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys in order to provide keyboard shortcuts. We further develop a recovery method which allows users to leave the camera's view and come back again, in order to eliminate neck fatigue. Experiments involving several users have been done in our laboratory. The results show that our camera mouse allows users to type, perform left- and right-click events, drag and drop, and troubleshoot without using their hands. With this system, handicapped persons can use a computer more comfortably and with reduced eye dryness.
A Proposed Web Accessibility Framework for the Arab DisabledWaqas Tariq
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework which offers the ease of the Web accessing for the disabled Arab users and facilitates their lifelong learning as well. The proposed framework system provides the disabled Arab user with an easy means of access using their mother language so they don’t have to overcome the barrier of learning the target-spoken language. This framework is based on analyzing the web page meta-language, extracting its content and reformulating it in a suitable format for the disabled users. The basic objective of this framework is supporting the equal rights of the Arab disabled people for their access to the education and training with non disabled people. Key Words : Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility , Web framework , Web System, WWW.
Real Time Blinking Detection Based on Gabor FilterWaqas Tariq
The document proposes a new method for real-time blinking detection based on Gabor filters. It begins by reviewing existing methods and their limitations in dealing with noise, variations in eye shape, and blinking speed. The proposed method uses a Gabor filter to extract the top and bottom arcs of the eye from an image. It then measures the distance between these arcs and compares it to a threshold: a distance below the threshold indicates a closed eye, while a distance above indicates an open eye. The document claims this Gabor filter-based approach is robust to noise, variations in eye shape and blinking speed. It presents experimental results showing the method can accurately detect blinking across different users.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input with human eyes only, using two Purkinje images, which works on a real-time basis without calibration, is proposed. Experimental results show that cornea curvature can be estimated using the Purkinje images derived from two light sources, so that no calibration is needed to reduce person-to-person differences in cornea curvature. It is found that the proposed system allows user movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by utilizing the detected face attitude, which is derived from the face plane consisting of three feature points on the face: the two eyes and the nose or mouth. It is also found that the proposed system works on a real-time basis.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE) and what determinants contribute to PE in the context of using mobile multimedia services. The result indicates that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants in explaining Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational KnowledgeWaqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative environments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of organisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team environment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for specification, design and verification of context awareness Human Computer Interface (HCI). This is a Model Based Design approach (MBD). This methodology describes the ubiquitous environment by ontologies. OWL is the standard used for this purpose. The specification and modeling of Human-Computer Interaction are based on Petri nets (PN). This raises the question of representation of Petri nets with XML. We use for this purpose, the standard of modeling PNML. In this paper, we propose an extension of this standard for specification, generation and verification of HCI. This extension is a methodological approach for the construction of PNML with Petri nets. The design principle uses the concept of composition of elementary structures of Petri nets as PNML Modular. The objective is to obtain a valid interface through verification of properties of elementary Petri nets represented with PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
The main aim of this paper is to build a system that is capable of detecting and recognizing hand gestures in an image captured using a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented to achieve this purpose: image processing techniques are used to smooth the image in order to ease the subsequent steps in translating the hand sign signal, and the algorithm is built to translate numerical hand sign signals, with the result displayed on the seven-segment display. Altera's Quartus II, SOPC Builder and Nios II EDS software are used to construct the system. By using SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Being able to recognize hand sign signals from images can help humans in controlling a robot and in other applications which require only a simple set of instructions, provided a CMOS sensor is included in the system.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain–computer interface (BCI) is a proficient result in the research field of human-computer synergy, where direct communication between the brain and an external device occurs, resulting in augmenting, assisting and repairing human cognition. Advanced works, such as generating brain-computer interface switch technologies for intermittent (or asynchronous) control in natural environments, or developing brain-computer interfaces with fuzzy logic systems or by applying wavelet theory to improve their efficacy, are still going on, and some useful results have also been found. The requirements for developing such brain-machine interfaces are also growing day by day, e.g. for neuropsychological rehabilitation, emotion control, etc. An overview of the control theory and some advanced works in the field of brain-machine interfaces is given in this paper.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays...Waqas Tariq
There is a growing ageing phenomenon, with the rise of the ageing population throughout the world. According to the World Health Organization (2002), the population aged 60 and over is expected to grow by 694 million, or 223%, between 1970 and 2025. The growth is especially significant in advanced countries and regions such as North America, Japan, Italy, Germany, the United Kingdom and so forth. This growing older adult population significantly impacts the socio-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of mobile phones and their services by senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens on how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to findings on the desirable mobile requirements for local senior citizens with respect to health, safety and communication purposes. The findings of this study bring interesting insights to local telecommunication industries as a whole, and will also serve as groundwork for a more in-depth study in the future.
Principles of Good Screen Design in WebsitesWaqas Tariq
Visual techniques for proper arrangement of the elements on the user screen have helped the designers to make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of the screen elements based on particular criteria for best appearance of the screen. This paper investigates few significant visual techniques in various web user interfaces and showcases the results for better understanding and their presence.
This document discusses the progress of virtual teams in Albania. It provides context on virtual teams and how they differ from traditional teams in their reliance on technology for communication across distances. The document then examines the use of virtual teams in Albania, noting the growing infrastructure and technology usage that enables virtual collaboration. It highlights some virtual team examples in Albanian government and academic projects.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu...Waqas Tariq
It is a well established fact that the Web-Applications require frequent maintenance because of cutting– edge business competitions. The authors have worked on quality evaluation of web-site of Indian ecommerce domain. As a result of that work they have made a quality-wise ranking of these sites. According to their work and also the survey done by various other groups Futurebazaar web-site is considered to be one of the best Indian e-shopping sites. In this research paper the authors are assessing the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real world maintainability problem of web-sites. This work will give a clear picture of all the quality metrics which are directly or indirectly related with the maintainability of the web-site.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element in a web site, yet at the same time there exists an abundance of web pages with poor usability. This discrepancy is the result of limitations that are currently preventing web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, therefore enabling a non-expert in the field of usability to conduct the evaluation. This results in reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying or deleting guidelines without altering the code that references them, since the guidelines and the code are two separate components. A comparison of the evaluation results carried out using the framework against published evaluations of web sites carried out by web site usability professionals reveals that the framework is able to automatically identify the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A robot arm utilized in a meal support system based on computer input by human eyes only is proposed. The proposed system is developed for handicapped/disabled persons as well as elderly persons, and is tested with able-bodied persons with several shapes and sizes of eyes under a variety of illumination conditions. The test results with normal persons show that the proposed system works well for selecting the desired foods and for retrieving the foods in accordance with users' requirements. It is found that the proposed system is 21% faster than the manually controlled robot arm.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech-interactive systems have gained increasing importance. The performance of an ASR system mainly depends on the availability of a large corpus of speech. The conventional method of building a large-vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires a large speech corpus with sentence- or phoneme-level transcription of the speech utterances. The transcriptions must also cover different speech orders so that the recognizer can build models for all the sounds present. But for the Telugu language, because of its complex nature, a very large, well-annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands and millions of word forms. A significant part of grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu; phrases including several words (that is, tokens) in English would be mapped onto a single word in Telugu. Telugu is phonetic in nature in addition to being rich in morphology, which is why speech technology developed for English cannot be applied directly to Telugu. This paper highlights the work carried out in an attempt to build a voice-enabled text editor with automatic term suggestion. The main claim of the paper is the recognition enhancement process we developed for highly inflecting, morphologically rich languages. This method results in increased speech recognition accuracy with a greatly reduced corpus size. It also adds Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is a task of removing ambiguity from a word, i.e. correct sense of word is identified from ambiguous sentences. This paper describes a model that uses Part of Speech tagger and three categories for word sense disambiguation (WSD). Human Computer Interaction is very needful to improve interactions between users and computers. For this, the Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding best suitable domain of word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
From the existing research it has been observed that many techniques and methodologies are available for performing every step of an Automatic Speech Recognition (ASR) system, but the performance (minimization of the Word Error Rate, WER, and maximization of the Word Accuracy Rate, WAR) does not depend only on the technique applied in a given method. The research indicates that performance mainly depends on the category of the noise, the level of the noise, and the variable sizes of the window, frame, frame overlap, etc. considered in the existing methods. The main aim of the work presented in this paper is to vary parameters such as window size, frame size and frame overlap percentage in order to observe the performance of algorithms for various categories and levels of noise, and to train the system across all parameter sizes and categories of real-world noisy environments to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and accuracy tests obtained by varying these parameters. It is observed that it is very hard to evaluate the test results and decide on parameter sizes for improving ASR performance. Hence, this study further suggests feasible and optimal parameter sizes using a Fuzzy Inference System (FIS) for enhancing the resulting accuracy in adverse real-world noisy environmental conditions. This work will be helpful for the discriminative training of ubiquitous ASR systems for better Human-Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3 million mark and is still growing. See which archives have contributed the most, the different types of records we have, and which years have had records added. You can also see what we have for the future.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This dissertation explores the particular circumstances of Mirzapur, a region located in the core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal environment for investigating changes in vegetation cover dynamics. Our study utilizes advanced technologies such as GIS (Geographic Information Systems) and remote sensing to analyze the transformations that have taken place over the course of a decade.

The complex relationship between human activities and the environment has been the focus of extensive research and concern. As the global community grapples with swift urbanization, population expansion, and economic progress, the effects on natural ecosystems are becoming more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a significant role in maintaining the ecological equilibrium of our planet. Land serves as the foundation for all human activities and provides the necessary materials for these activities. As the most crucial natural resource, its utilization by humans results in different 'land uses,' which are determined by both human activities and the physical characteristics of the land.

The utilization of land is impacted by human needs and environmental factors. In countries like India, rapid population growth and the emphasis on extensive resource exploitation can lead to significant land degradation, adversely affecting the region's land cover.

Therefore, human intervention has significantly influenced land use patterns over many centuries, evolving their structure over time and space. In the present era, these changes have accelerated due to factors such as agriculture and urbanization. Information regarding land use and cover is essential for various planning and management tasks related to the Earth's surface, providing crucial environmental data for scientific, resource management, and policy purposes, and for diverse human activities.

An accurate understanding of land use and cover is imperative for the development planning of any area. Consequently, a wide range of professionals, including earth system scientists, land and water managers, and urban planners, are interested in obtaining data on land use and cover changes, conversion trends, and other related patterns. The spatial dimensions of land use and cover support policymakers and scientists in making well-informed decisions, as alterations in these patterns indicate shifts in economic and social conditions. Monitoring such changes with the help of advanced technologies like remote sensing and Geographic Information Systems is crucial for coordinated efforts across different administrative levels.

Changes in vegetation cover refer to variations in the distribution, composition, and overall structure of plant communities across different temporal and spatial scales. These changes can occur naturally.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Extended Performance Appraise of Image Retrieval Using the Feature Vector as Row Mean of Transformed Column Image
Dr. H. B. Kekre, Sudeep D. Thepade & Akshay Maloo
Advances in Multimedia - An International Journal (AMIJ), Volume (2) : Issue (1) : 2011
Dr. H. B. Kekre hbkekre@yahoo.com
Senior Professor, Computer Engineering,
MPSTME, SVKM’S NMIMS University,
Mumbai, 400056, India
Sudeep D. Thepade sudeepthepade@gmail.com
Ph.D. Research Scholar and Associate Professor,
Computer Engineering,
MPSTME, SVKM’S NMIMS University,
Mumbai, 400056, India
Akshay Maloo akshaymaloo@gmail.com
Student, Computer Engineering,
MPSTME, SVKM’S NMIMS University
Mumbai, 400056, India
Abstract
An extension of the content based image retrieval (CBIR) technique based on the row mean of transformed image columns is presented here. Compared to the earlier study of three image transforms, the performance of the proposed CBIR technique is now appraised using seven different image transforms: Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Hartley Transform, Haar Transform, Kekre Transform, Walsh Transform and Slant Transform. A generic image database with 1000 images spread across 11 categories is used to test the performance of the proposed CBIR techniques. For each transform, 55 queries (5 per category) were fired on the image database. Every technique is tested on both the color and gray versions of the image database. To compare the performance of the image retrieval techniques across transforms, the average precision and recall over all queries are computed. The results show a performance improvement (higher precision and recall values) for the proposed methods over using all pixel data of the image, at reduced computation and hence faster retrieval, on both the gray and color versions of the image database. The variants of the feature vector that include and exclude the DC component of the transformed columns are also tested, and it is found that the presence of the DC component in the feature vector improves the retrieval results. The ranking of transforms by performance in the proposed gray CBIR techniques with the DC component can be given as DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. In the color variants of the proposed techniques with the DC component, the performance ranking of image transforms, starting from the best, is DCT, Haar, Walsh, Slant, DST, Hartley and Kekre.
Keywords: CBIR, DCT, DST, Haar, Walsh, Kekre, Slant, Hartley, Row Mean.
1. INTRODUCTION
The huge image databases being generated from a variety of sources (digital camera, video, scanner, the internet etc.) have posed technical challenges for computer systems to store/transmit and index/manage image data effectively, so as to make such large collections easily accessible. Storage and transmission challenges are addressed by image compression. The challenges of image indexing are studied in the context of image databases [2,6,7,10,11], which have become one of the most important and promising research areas for researchers from a wide range of disciplines like computer vision, image processing and databases. The need for faster and better image retrieval techniques is increasing day by day. Some important applications of CBIR technology include art galleries [12,14], museums, archaeology [3],
architecture design [8,13], geographic information systems [5], trademark databases [21,23],
weather forecast [5,22], medical imaging [5,18], criminal investigations [24,25], image search on
the Internet [9,19,20].
1.1 Content Based Image Retrieval
In the literature, the term content based image retrieval (CBIR) was used for the first time by Kato et al. [4], to describe their experiments on automatic retrieval of images from a database by color and shape features. A typical CBIR system performs two major tasks [16,17]. The first is feature extraction (FE), where a set of features, called the feature vector, is generated to accurately represent the content of each image in the database. The second is similarity measurement (SM), where a distance between the query image and each image in the database, computed using their feature vectors, is used to retrieve the top "closest" images [16,17,26].
For feature extraction in CBIR there are mainly two approaches [5]: feature extraction in the spatial domain and feature extraction in the transform domain. Feature extraction in the spatial domain includes the CBIR techniques based on histograms [5], BTC [1,2,16] and VQ [21,25,26]. The transform domain methods are widely used in image compression, as they give high energy compaction in the transformed image [17,24]. It is therefore natural to use images in the transform domain for feature extraction in CBIR [23,28]. But taking the transform of an image is time consuming. Reducing the size of the feature vector by applying the transform on the columns of the image and then taking the row mean of the transformed columns, while still obtaining an improvement in image retrieval performance, is the theme of the work presented here. Many current CBIR systems use the Euclidean distance [1-3,8-14] on the extracted feature set as a similarity measure. The direct Euclidean distance between image P and query image Q can be given as equation 1, where Vpi and Vqi are the feature vectors of image P and query image Q respectively, with size 'n'.
ED = Σ (i=1 to n) (Vpi − Vqi)²   (1)
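As a quick sketch, equation 1 can be computed directly on two feature vectors. Python is used here purely for illustration (the authors' implementation was in MATLAB); the function name is a hypothetical choice:

```python
import numpy as np

def euclidean_distance(vp, vq):
    """Sum of squared differences between two feature vectors (equation 1)."""
    vp = np.asarray(vp, dtype=float)
    vq = np.asarray(vq, dtype=float)
    return float(np.sum((vp - vq) ** 2))
```

Since the squared sum is a monotonic function of the usual Euclidean distance, ranking database images by this value gives the same retrieval order either way.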
Seven well-known image transforms [10,11,18,28], namely the Discrete Cosine Transform (DCT), Walsh Transform, Haar Transform, Kekre Transform, Discrete Sine Transform (DST), Slant Transform and Hartley Transform, are used for performance comparison of the proposed CBIR techniques.
FIGURE 1: Feature Extraction in Proposed CBIR Technique with Row Mean of Transformed Image
Columns
2. CBIR USING ROW MEAN OF TRANSFORMED COLUMN IMAGE [28]
Here the image transform is applied to each column of the image. Then the row mean of the transformed columns is used as the feature vector. Figure 1 shows feature extraction in the proposed CBIR technique with row mean of transformed image columns. The obtained feature vector is used in two different ways (with and without the DC component) to see the variation in retrieval accuracy. As indicated by the experimental results, image retrieval using the DC component value proves better than retrieval excluding it.
The following steps need to be followed for image retrieval using the proposed image retrieval
techniques:
1. Apply the transform T on the columns of the image of size NxN (I(NxN)) to get the column transformed image of the same size (cI(NxN)):
   cI(NxN) (column transformed) = [T(NxN)] [I(NxN)]   (2)
2. Calculate the row mean of the column transformed image to get a feature vector of size N (instead of N²).
3. Consider the feature vector with and without the DC component to see the variations in results. Then apply the Euclidean distance to obtain precision and recall.
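The steps above can be sketched in a few lines. This is an illustrative Python version (the paper's implementation was MATLAB 7.0), using an orthonormal DCT-II matrix as one possible choice for the transform T; the function names are assumptions for the example:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, one possible choice for the transform T."""
    k = np.arange(n).reshape(-1, 1)   # row (frequency) index
    j = np.arange(n).reshape(1, -1)   # column (sample) index
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    t[0, :] = np.sqrt(1.0 / n)        # DC row
    return t

def row_mean_feature(image, keep_dc=True):
    """Steps 1-3: transform the columns (cI = T * I), take the row mean,
    and optionally drop the DC coefficient."""
    image = np.asarray(image, dtype=float)
    n = image.shape[0]
    col_transformed = dct_matrix(n) @ image   # equation 2: columns transformed
    feature = col_transformed.mean(axis=1)    # feature vector of length N, not N^2
    return feature if keep_dc else feature[1:]
```

For a flat (constant) image only the DC coefficient of the feature vector is nonzero, which illustrates why dropping it can discard significant information.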
Applying the transform on the image columns instead of on the whole image saves 50% of the required computations, resulting in faster retrieval [28]. Further, the row mean of the column transformed image is taken as the feature vector, which reduces the number of comparisons required among feature vectors, again speeding up retrieval. The results obtained from the proposed techniques (row mean of the column transformed image with the DC component and without the DC component) are compared with applying the transform on the full image and with the spatial row mean of the image, on both the gray and color versions of the image database.
3. IMPLEMENTATION
3.1 The Platform and Image Database
The implementation of the proposed CBIR techniques is done in MATLAB 7.0 using a computer with an Intel Core 2 Duo Processor T8100 (2.1GHz) and 2 GB RAM.
The proposed CBIR techniques are tested on an image database of 1000 variable-size images spread across 11 categories of human beings, animals, natural scenery and manmade things. This image database is an augmented version of the Wang image database [15]. Figure 2 shows sample images from the generic database.
FIGURE 2: Sample Images from Generic Image Database
[Image database contains total 1000 images with 11 categories]
3.2 Precision/Recall
To assess the retrieval effectiveness, we have used precision and recall as statistical comparison parameters [1,2] for the proposed CBIR techniques. The standard definitions of these two measures are given by the following equations.

Precision = (Number of relevant images retrieved) / (Total number of images retrieved)   (3)

Recall = (Number of relevant images retrieved) / (Total number of relevant images in the database)   (4)
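Equations 3 and 4 translate directly into code. A minimal Python sketch, assuming `retrieved` and `relevant` are collections of image identifiers (the names are illustrative, not from the paper):

```python
def precision_recall(retrieved, relevant):
    """Precision (equation 3) and recall (equation 4) for one query."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved)   # relevant retrieved / total retrieved
    recall = hits / len(relevant)       # relevant retrieved / relevant in database
    return precision, recall
```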
4. RESULTS AND DISCUSSIONS
To test the performance of each proposed CBIR technique, 55 queries per technique (5 from each category) are fired on the database of 1000 variable-size generic images spread across 11 categories. The query and database image matching is done using the Euclidean distance. The average precision and average recall values for each proposed technique with the respective image transform are computed and plotted against the number of retrieved images for performance comparison.
The crossover point of precision and recall plays a very important role in the performance analysis of an image retrieval method. At this crossover point the value of precision equals that of recall, which means the number of relevant images retrieved from the database is exactly equal to the number of retrieved result images. In the ideal situation the height of the precision-recall crossover point is one, meaning all the retrieved images are relevant and all relevant images from the database are retrieved. The performance of an image retrieval technique is always compared to this ideal situation. The height of the precision-recall crossover point indicates how far the proposed technique deviates from the ideal one: the greater the height, the better the technique.
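The paper reads the crossover height off the plotted curves. One hypothetical way to estimate it from sampled precision/recall values (both assumed to be indexed by the number of retrieved images) is linear interpolation at the sign change of their difference:

```python
import numpy as np

def crossover_height(precision, recall):
    """Height of the first precision-recall crossover, estimated by linear
    interpolation between the samples where precision - recall changes sign."""
    p = np.asarray(precision, dtype=float)
    r = np.asarray(recall, dtype=float)
    d = p - r
    sign_change = d[:-1] * d[1:] <= 0
    if not sign_change.any():
        raise ValueError("curves do not cross in the sampled range")
    i = int(np.argmax(sign_change))               # first crossing interval
    t = 0.0 if d[i] == d[i + 1] else d[i] / (d[i] - d[i + 1])
    return p[i] + t * (p[i + 1] - p[i])
```

Since precision typically falls and recall rises as more images are retrieved, exactly one such sign change is normally expected.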
The performance of the proposed techniques with the DC component (referred to as 'Transform-RM-DC') and without the DC component (referred to as 'Transform-RM') for each transform is compared with CBIR using the complete transformed image as feature vector (referred to as 'Full') and with the spatial row mean vector of the image as feature vector (referred to as 'RM').
The proposed techniques are tested on both the gray and color versions of the image database. For all image transforms, the color versions of the discussed CBIR techniques give higher performance than the gray versions.
4.1 Results on Gray Version of Image Database
In Figure 3 the precision-recall crossover points of DCT applied to the full gray image (Full), the gray row mean (RM), and the proposed technique of row mean of DCT transformed gray columns with the DC component (DCT-RM-DC) and without the DC component (DCT-RM) are shown. Here the proposed method with the DC component gives the highest crossover point, indicating the best performance. Moreover, the computational complexity of the proposed retrieval technique is less than that of applying the full transform. This shows the proposed image retrieval method is faster and better with DCT. The performance of the proposed CBIR method degrades if the DC component is not considered.
FIGURE 3: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using DCT
FIGURE 4: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using DST
In Figure 4 the performance of the proposed gray CBIR method with DST is shown. The gray crossover point of the DST based proposed technique with the DC component is highest, indicating the best performance. Therefore better retrieval of images is possible at lower computation with the proposed CBIR technique with DST. Here too the performance degrades if the DC component is neglected in the proposed image retrieval technique.
FIGURE 5: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using Walsh
Transform
In Figure 5 the performance comparison of the proposed gray CBIR methods for the Walsh transform is given. Here also the best results are obtained using the proposed technique with the DC component, and the performance degrades if the DC component is not considered.
In Figure 6 the precision-recall crossover points of gray CBIR using the Haar transform applied to the full image (Full), the row mean of the image (RM), and the proposed technique with the DC component (HAAR-RM-DC) and without the DC component (HAAR-RM) are shown. In the case of the Haar transform as well, the proposed technique with the DC component gives the best performance, indicated by the highest crossover point value, and the performance degrades if the DC component is not considered. In Figure 7 the performance comparison of the Slant transform based proposed gray CBIR techniques is shown, which again shows the proposed image retrieval technique with the DC component to be the best.
In Figure 8 the gray crossover points of precision and recall for the proposed CBIR methods with the Hartley transform are given. The result that the proposed gray CBIR technique with the DC component is best is confirmed again in the case of the Hartley transform, as indicated by the highest precision-recall crossover point value. Here also the performance degrades drastically if the DC component is neglected in the proposed image retrieval method. For the Kekre transform used in the proposed gray CBIR methods, the technique with the DC component gives almost the same performance as the complete Kekre transform applied to the image, as shown in Figure 9, but at a great reduction in complexity; the performance of the proposed CBIR method degrades when the DC component is neglected.
FIGURE 6: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using Haar
Transform
FIGURE 7: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using Slant
Transform
FIGURE 8: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using Hartley
Transform
FIGURE 9: Gray Crossover Point of Precision and Recall v/s Number of Retrieved Images using Kekre
Transform
FIGURE 10: Gray Crossover Point of average Precision and Recall v/s Row Mean with & without DC
component for all transforms with Full Image
To decide which image transform proves to be the best for the proposed gray CBIR methods, the crossover points of the proposed gray CBIR techniques with and without the DC coefficient are shown in Figure 10. It is observed that for all transforms the proposed technique gives better performance when the DC component is considered than when it is neglected. Also, for all transforms the proposed gray CBIR method with the DC component outperforms the complete transform based gray CBIR technique. The best results are obtained using DST-RM-DC followed by HAAR-RM-DC. The ranking of transforms by performance in the proposed CBIR techniques with the DC component can be given as DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. All transforms with the proposed gray CBIR technique show improvement in performance compared to gray CBIR based on the complete transform of the image as feature vector, at a great reduction in computational complexity. Therefore better and faster image retrieval is achieved using the proposed gray CBIR technique.
4.2 Results on Color Version of Image Database
The performance comparison of CBIR using the full transformed image as feature vector (referred to as 'Full'), CBIR using the simple row mean feature vector of the image (referred to as 'RM') and the proposed CBIR techniques with the DC coefficient (referred to as 'Transform-RM-DC') and without the DC component (referred to as 'Transform-RM') is given in Figures 11 to 17, in the form of color crossover points of precision and recall obtained by applying all these image retrieval techniques on the color version of the image database. For all transforms it is observed that consideration of the DC coefficient in the feature vector improves CBIR performance (as indicated by higher precision-recall crossover point values). Also, for all image transforms the proposed CBIR method with the DC component gives the best performance, as indicated by the greatest height of the precision-recall crossover point. Figures 11 to 17 show the color crossover points of precision and recall plotted against the number of retrieved images, obtained by firing the queries on the color version of the image database, respectively for the DCT, DST, Walsh, Haar, Slant, Hartley and Kekre transforms.
FIGURE 11: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using DCT
FIGURE 12: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using DST
FIGURE 13: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using Walsh
Transform
FIGURE 14: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using Haar
Transform
FIGURE 15: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using Slant
Transform
FIGURE 16: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using Hartley
Transform
FIGURE 17: Color Crossover Point of Precision and Recall v/s Number of Retrieved Images using Kekre
Transform
FIGURE 18: Color Crossover Point of average Precision and Recall v/s Row Mean with & without DC
component for all transforms with Full Image
FIGURE 19: Gray and Color Crossover Point of average Precision and Recall v/s Row Mean with & without
DC component for all transforms with Full Image
To decide which image transform proves to be the best for the proposed color CBIR techniques, the crossover points of the proposed color CBIR methods with and without the DC coefficient are shown in Figure 18. It is observed that for all transforms the proposed technique gives better performance when the DC component is considered than when it is neglected. Also, for all transforms the proposed color CBIR method with the DC component outperforms the complete transform based color CBIR technique. The best results are obtained using DCT-RM-DC followed by HAAR-RM-DC. The ranking of transforms by performance in the proposed color CBIR techniques with the DC component can be given as DCT, Haar, Walsh, Slant, DST, Hartley and Kekre. All transforms with the proposed color CBIR technique show improvement in performance compared to color CBIR based on the complete transform of the image as feature vector, at a great reduction in computational complexity. So better and faster image retrieval is achieved using the proposed color CBIR technique.
Figure 19 shows the performance comparison between gray and color images; a considerable improvement in performance is seen using color images.
5. CONCLUSION
The thirst for improving content based image retrieval techniques with respect to performance and computational time has still not been quenched. The herculean task of improving the performance of image retrieval while simultaneously reducing the computational complexity is achieved by the proposed image retrieval technique using the row mean of the transformed column image. The performance of the proposed techniques is compared with CBIR using the complete transformed image as feature vector and the row mean of the image as feature vector. In total, seven image transforms (DCT, DST, Haar, Hartley, Kekre, Walsh and Slant) are considered.
The techniques were tested on a generic image database with 1000 images spread across 11 categories. Experimental results show that for all transforms the proposed CBIR technique with the DC component outperforms the other methods, with a great reduction in computation time. Consideration of the DC component in the proposed image retrieval techniques gives higher performance compared to neglecting it.
Among all transforms, DST gives the best performance for the proposed gray image retrieval method with the DC component, and DCT proves to be the best for the color image retrieval method with the DC component. The ranking of transforms by performance in the proposed gray CBIR techniques with the DC component can be given as DST, Haar, Hartley, DCT, Walsh, Slant and Kekre. For the proposed color CBIR methods the performance ranking can be given as DCT, Haar, Walsh, Slant, DST, Hartley and Kekre.
6. REFERENCES
[1] H.B.Kekre, Sudeep D. Thepade, “Boosting Block Truncation Coding using Kekre’s LUV
Color Space for Image Retrieval”, WASET International Journal of Electrical, Computer
and System Engineering (IJECSE), Volume 2, Number 3, pp. 172-180, Summer 2008.
Available online at http://www.waset.org/ijecse/v2/v2-3-23.pdf
[2] H.B.Kekre, Sudeep D. Thepade, “Image Retrieval using Augmented Block Truncation
Coding Techniques”, ACM International Conference on Advances in Computing,
Communication and Control (ICAC3-2009), pp. 384-390, 23-24 Jan 2009, Fr. Conceicao
Rodrigous College of Engg., Mumbai. Is uploaded on online ACM portal.
[3] H.B.Kekre, Sudeep D. Thepade, “Scaling Invariant Fusion of Image Pieces in Panorama
Making and Novel Image Blending Technique”, International Journal on Imaging (IJI),
www.ceser.res.in/iji.html, Volume 1, No. A08, pp. 31-46, Autumn 2008.
[4] Hirata K. and Kato T. “Query by visual example – content-based image retrieval”, In Proc.
of Third International Conference on Extending Database Technology, EDBT’92, 1992, pp
56-71
[5] H.B.Kekre, Sudeep D. Thepade, “Rendering Futuristic Image Retrieval System”, National
Conference on Enhancements in Computer, Communication and Information Technology,
EC2IT-2009, 20-21 Mar 2009, K.J.Somaiya College of Engineering, Vidyavihar, Mumbai-
77.
[6] Minh N. Do, Martin Vetterli, “Wavelet-Based Texture Retrieval Using Generalized
Gaussian Density and Kullback-Leibler Distance”, IEEE Transactions On Image
Processing, Volume 11, Number 2, pp.146-158, February 2002.
[7] B.G.Prasad, K.K. Biswas, and S. K. Gupta, “Region –based image retrieval using
integrated color, shape, and location index”, International Journal on Computer Vision and
Image Understanding Special Issue: Color for Image Indexing and Retrieval, Volume 94,
Issues 1-3, April-June 2004, pp.193-233.
[8] H.B.Kekre, Sudeep D. Thepade, “Creating the Color Panoramic View using Medley of
Grayscale and Color Partial Images ”, WASET International Journal of Electrical, Computer
and System Engineering (IJECSE), Volume 2, No. 3, Summer 2008. Available online at
www.waset.org/ijecse/v2/v2-3-26.pdf.
[9] Stian Edvardsen, “Classification of Images using color, CBIR Distance Measures and
Genetic Programming”, Ph.D. Thesis, Master of science in Informatics, Norwegian
university of science and Technology, Department of computer and Information science,
June 2006.
[10] H.B.Kekre, Tanuja Sarode, Sudeep D. Thepade, “DCT Applied to Row Mean and Column
Vectors in Fingerprint Identification”, In Proceedings of International Conference on
Computer Networks and Security (ICCNS), 27-28 Sept. 2008, VIT, Pune.
[11] H.B.Kekre, Sudeep D. Thepade, Akshay Maloo “Performance Comparison for Face
Recognition using PCA, DCT &WalshTransform of Row Mean and Column Mean”,
Computer Science Journals, International Journal of Image Processing (IJIP), Volume 4,
Issue II, May.2010, pp.142-155, available online at http://www.cscjournals.org/
csc/manuscript/Journals/IJIP/volume4/Issue2/IJIP-165.pdf.
[12] H.B.kekre, Sudeep D. Thepade, “Improving ‘Color to Gray and Back’ using Kekre’s LUV
Color Space”, IEEE International Advanced Computing Conference 2009 (IACC’09),
Thapar University, Patiala, INDIA, 6-7 March 2009. Is uploaded on online at IEEE Xplore.
[13] H.B.Kekre, Sudeep D. Thepade, “Image Blending in Vista Creation using Kekre's LUV
Color Space”, SPIT-IEEE Colloquium and International Conference, Sardar Patel Institute
of Technology, Andheri, Mumbai, 04-05 Feb 2008.
[14] H.B.Kekre, Sudeep D. Thepade, “Color Traits Transfer to Grayscale Images”, In Proc.of
IEEE First International Conference on Emerging Trends in Engg. & Technology, (ICETET-
08), G.H.Raisoni COE, Nagpur, INDIA. Uploaded on online IEEE Xplore.
[15] http://wang.ist.psu.edu/docs/related/Image.orig (Last referred on 23 Sept 2008)
[16] H.B.Kekre, Sudeep D. Thepade, “Using YUV Color Space to Hoist the Performance of
Block Truncation Coding for Image Retrieval”, IEEE International Advanced Computing
Conference 2009 (IACC’09), Thapar University, Patiala, INDIA, 6-7 March 2009.
[17] H.B.Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar,
Suraj Shirke,“Energy Compaction and Image Splitting for Image Retrieval using Kekre
Transform over Row and Column Feature Vectors”, International Journal of Computer
Science and Network Security (IJCSNS),Volume:10, Number 1, January 2010, (ISSN:
1738-7906) Available at www.IJCSNS.org.
[18] H.B.Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar,
Suraj Shirke, “Walsh Transform over Row Mean and Column Mean using Image
Fragmentation and Energy Compaction for Image Retrieval”, International Journal on
Computer Science and Engineering (IJCSE),Volume 2S, Issue1, January 2010, (ISSN:
0975–3397). Available online at www.enggjournals.com/ijcse.
[19] H.B.Kekre, Sudeep D. Thepade, “Image Retrieval using Color-Texture Features Extracted
from Walshlet Pyramid”, ICGST International Journal on Graphics, Vision and Image
Processing (GVIP), Volume 10, Issue I, Feb.2010, pp.9-18, Available online
www.icgst.com/ gvip/Volume10/Issue1/P1150938876.html
[20] H.B.Kekre, Sudeep D. Thepade, “Color Based Image Retrieval using Amendment Block
Truncation Coding with YCbCr Color Space”, International Journal on Imaging (IJI),
Volume 2, Number A09, Autumn 2009, pp. 2-14. Available online at
www.ceser.res.in/iji.html (ISSN: 0974-0627).
[21] H.B.Kekre, Tanuja Sarode, Sudeep D. Thepade, “Color-Texture Feature based Image
Retrieval using DCT applied on Kekre’s Median Codebook”, International Journal on
Imaging (IJI), Volume 2, Number A09, Autumn 2009,pp. 55-65. Available online at
www.ceser.res.in/iji.html (ISSN: 0974-0627).
[22] H.B.Kekre, Sudeep D. Thepade, Akshay Maloo “Performance Comparison for Face
Recognition using PCA, DCT &WalshTransform of Row Mean and Column Mean”, ICGST
International Journal on Graphics, Vision and Image Processing (GVIP), Volume 10, Issue
II, Jun.2010, pp.9-18, available online at http://209.61.248.177/gvip/Volume10/Issue2/P1181012028.pdf.
[23] H.B.Kekre, Sudeep D. Thepade, “Improving the Performance of Image Retrieval using
Partial Coefficients of Transformed Image”, International Journal of Information Retrieval,
Serials Publications, Volume 2, Issue 1, 2009, pp. 72-79 (ISSN: 0974-6285)
[24] H.B.Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar,
Suraj Shirke, “Performance Evaluation of Image Retrieval using Energy Compaction and
Image Tiling over DCT Row Mean and DCT Column Mean”, Springer-International
Conference on Contours of Computing Technology (Thinkquest-2010), Babasaheb Gawde
Institute of Technology, Mumbai, 13-14 March 2010, The paper will be uploaded on online
Springerlink.
[25] H.B.Kekre, Sudeep D. Thepade, Akshay Maloo “Query by Image Content Using Color
Averaging Techniques”, Engineering journals, International Journal of Engineering,
Science and Technology (IJEST), Volume 2, Issue 6, Jun.2010, pp.1612-1622, Available
online http://www.ijest.info/docs/IJEST10-02-06-14.pdf.
[26] H.B.Kekre, Tanuja K. Sarode, Sudeep D. Thepade, “Image Retrieval by Kekre’s Transform
Applied on Each Row of Walsh Transformed VQ Codebook”, (Invited), ACM-International
Conference and Workshop on Emerging Trends in Technology (ICWET 2010),Thakur
College of Engg. And Tech., Mumbai, 26-27 Feb 2010, The paper is invited at ICWET
2010. Also will be uploaded on online ACM Portal.
[27] H.B.Kekre, Tanuja Sarode, Sudeep D. Thepade, “DCT Applied to Row Mean and Column
Vectors in Fingerprint Identification”, In Proceedings of Int. Conf. on Computer Networks
and Security (ICCNS), 27-28 Sept. 2008, VIT, Pune.
[28] H.B.Kekre, Sudeep D. Thepade, Akshay Maloo, “Performance Comparison of Image Retrieval using Row Mean of Transformed Column Image”, International Journal on Computer Science and Engineering (IJCSE), Volume 2, Issue 5.