International Journal of Information Technology & Management Information System (IJITMIS), ISSN
0976 – 6405(Print), ISSN 0976 – 6413(Online) Volume 4, Issue 1, January – April (2013), © IAEME
TEXT REGION EXTRACTION FROM LOW RESOLUTION DISPLAY
BOARD IMAGES USING WAVELET FEATURES
M. M. Kodabagi1
, S. A. Angadi2
, Anuradha. R. Pujari 3
1
Department of Computer Science and Engineering, Basaveshwar Engineering College,
Bagalkot-587102, Karnataka, India
2
Department of Computer Science and Engineering, Basaveshwar Engineering College,
Bagalkot-587102, Karnataka, India
3
Department of Computer Science and Engineering, Basaveshwar Engineering College,
Bagalkot-587102, Karnataka, India
ABSTRACT
Automated systems for understanding display boards are finding many applications
in guiding tourists, assisting the visually challenged and providing location aware
information. Such systems require an automated method to detect and extract text prior to
further image analysis. In this paper, a methodology to detect and extract text regions from
low resolution display board images is presented. The proposed work is texture based and
uses wavelet energy features for text region detection and extraction. The wavelet energy
features are obtained at two resolution levels on every 50x50 block of the image, and potential
text blocks are identified using newly defined discriminant functions. The detected
text blocks are then merged to extract text regions. The proposed method is robust and achieves a
detection rate of 97% on a variety of 100 low resolution display board images, each of size
240x320.
Keywords: Text Detection, Text Region Extraction, Display Board Images, Wavelet
Features
INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY &
MANAGEMENT INFORMATION SYSTEM (IJITMIS)
ISSN 0976 – 6405(Print)
ISSN 0976 – 6413(Online)
Volume 4, Issue 1, January – April (2013), pp. 38-49
© IAEME: www.iaeme.com/ijitmis.html
Journal Impact Factor (2013): 5.2372 (Calculated by GISI)
www.jifactor.com
IJITMIS
© I A E M E
1. INTRODUCTION
As people move across the world for business, field work and/or pleasure, they find
it difficult to understand the text written on display boards in foreign environments. In such
scenarios, people look for guides or intelligent devices that can provide information
translated into their native language. As most individuals carry camera-embedded
hand held devices such as mobile phones and PDAs, there is a possibility of
integrating technological solutions into such systems in order to provide facilities for
automatically understanding display boards in foreign environments. These facilities may be
provided through a web service supplying the necessary computing functions, which
are not available on hand held systems. Such web based hand held systems must be enabled to
capture natural scene images containing display boards and query the web service to retrieve
translated, localized information of the text written on the display boards.
The written matter on display/name boards provides information necessary for the
needs and safety of people, and may be written in unknown languages. The written
matter can be street names, restaurant names, building names, company names, traffic
directions, warning signs, etc. Hence, a lot of commercial and academic interest has veered
towards the development of techniques for web service based hand held systems useful in
understanding the written text on display boards. Recent years have seen a spurt of activity in
the development of web based intelligent hand held tour guide systems, blind assistants that
read written text, location aware computing systems and many more. A few such works are
presented in the following, and a more elaborate survey of related works is given in the next
section. A point-by-photograph paradigm, where users can specify an object by simply taking
its picture to retrieve matching images from the web, is found in [1]. The comMotion system is a
location aware hand held system that links personal information to locations; it reminds users
of their shopping lists when they near a shopping mall [2]. At Hewlett Packard (HP), mobile
Optical Character Reading (OCR) applications were developed to retrieve information related
to the text image captured through a pen-size camera [3]. Mobile phone image matching and
retrieval has been used by insurance and trading firms for remote item appraisal and
verification against a central database [4].
Image matching and retrieval applications cannot be embedded in hand held
devices such as mobile phones due to the limited availability of computing resources; hence such
services are being developed as web services. Researchers have also worked towards the
development of web based intelligent hand held tour guide systems. CyberGuide [5] is an
intelligent hand held tour guide system which provides information based on the user's
location. CyberGuide continuously monitors the user's location using the Global Positioning
System (GPS) and provides new information at the right time. Museums could provide these
tour guides to visitors, allowing them to take personalized tours observing any displayed
object. As visitors move across museum floors, information about the location is
pushed to the hand held tour guides. Research prototypes used to search for information about an
object image captured by cameras embedded in mobile phones are described in [6-7].
The state-of-the-art hand held systems available across the world are not automated for
understanding written text on display boards in foreign environments. Scope exists for
exploring such possibilities through automation of hand held systems. One of the very
important processing steps in the development of such systems is the automatic detection and
extraction of text regions from low resolution natural scene images prior to further analysis.
The written text provides important information, and it is not an easy problem to reliably
detect and localize text embedded in natural scene images [8]. The size of the characters can
vary from very small to very big. The font of the text can differ. Text present in the
image may have multiple colors and may appear in different orientations. Text can occur
against a complex background. In addition, the textual and other information captured is affected by
significant degradations such as perspective distortion, blur, shadow and uneven lighting.
Hence, the automatic detection and segmentation of text is a difficult and challenging
problem. Reported works have identified a number of approaches for text localization from
natural scene images. The existing approaches are categorized as connected component
based, edge based and texture based methods. Connected component based methods use a
bottom-up approach to group smaller components into larger ones until all regions are
identified in the image. A geometrical analysis is later needed to identify text components
and group them to localize text regions. Edge based methods focus on the high contrast
between the background and the text; the edges of the text boundary are identified and
merged, and several heuristics are then required to filter out non-text regions. But the presence of
noise, complex backgrounds, and significant degradation in low resolution natural scene
images can affect the extraction of connected components and the identification of boundary lines,
making both approaches inefficient. Texture analysis techniques are a good choice for
solving such a problem as they give a global measure of the properties of a region.
In this paper, a new model for text extraction from low resolution display board
images using wavelet features is presented. The proposed work is texture based and uses
wavelet features for text extraction. The wavelet energy features are obtained at 2 resolution
levels on every 50x50 block of the image and potential text blocks are identified using newly
defined discriminant functions. Further, the detected text blocks are merged to extract text
regions. The proposed method is robust and achieves a detection rate of 97% on a variety of
natural scene images.
The rest of the paper is organized as follows: related work is discussed in section 2;
the proposed model is presented in section 3; experimental results are given in section 4;
section 5 concludes the work and lists future directions.
2. RELATED WORKS
Web based hand held systems useful in understanding display boards require
analysis of natural scene images to extract text regions for further processing. A number of
methods for text localization have been published in recent years, and they are categorized into
connected component based, edge based and texture based methods. The performance of the
methods belonging to the first two categories is found to be inefficient and computationally
expensive for low resolution natural scene images due to the presence of noise, complex
backgrounds and significant degradation. Hence, techniques based on texture analysis have
become a good choice for image analysis, and texture analysis is further investigated in the
proposed work.
A few state-of-the-art approaches that use texture features for text localization are
summarized here. The use of a horizontal window of size 1×21 (mask size) to compute
the spatial variance for identification of edges in an image, which are further used to locate
the boundaries of a text line, is proposed in [9]. However, the approach only detects
horizontal components with a large variation compared to the background, and a processing
time of 6.6 seconds for 256x256 images on a SPARCstation 20 is reported. A vehicle
license plate localization method that uses similar criteria is presented in [10]. It uses time
delay neural networks (TDNNs) as a texture discriminator in the HSI color space to decide
whether a window of an image contains a license plate number. The detected windows are
later merged for extracting license plates. Multi-scale texture segmentation schemes
are presented in [11-12]. The methods detect potential text regions based on nine second-order
Gaussian derivatives and are evaluated for different images including video frames,
newspapers, magazines, envelopes, etc. The approach is insensitive to image
resolution but tends to miss very small text, and gives a localization rate of 90%.
A methodology that uses frequency features, such as the number of edge pixels in
horizontal and vertical directions and the Fourier spectrum, to detect text regions in real scene
images is discussed in [13]. A texture-based text localization method using the wavelet
transform is proposed in [14]. Techniques for text extraction in complex color images,
where a neural network is employed to train a set of texture discrimination masks that
minimize the classification error for the two texture classes (text regions and non-text
regions), are reported in [16-17].
Learning-based methods for localizing text in documents and video are proposed
in [15] and [18]. The method in [18] is evaluated on various video images; the text
localization procedure required about 1 second to process a 352x240 image on a Sun
workstation, and for text detection a precision rate of 91% and a recall rate of 92.8% are
reported. This method was subsequently enhanced for skewed text in [19]. A work similar
to the proposed method, which uses DCT coefficients to capture textural properties for
caption localization, is presented in [8]. The authors claim that the method is very fast and
gives a detection rate of 99.17%; however, precise localization results are not
reported. Methods that use homogeneity and contrast textural features for text region
detection and extraction are reported in [20-21]. In recent years, several approaches for
sign detection, text detection and segmentation from natural images have also been reported [22-34].
From the many works cited in the literature, it is generally agreed that the robustness
of texture based methods depends on the texture features extracted from the window/block
or region of interest that are used by the discriminant functions for classification
decisions. The probability of misclassification is directly related to the number and
quality of details available in the texture features; hence, the extracted texture features must
give sufficient information to distinguish text from the background, and there is
scope to explore such possibilities. A detailed description of the proposed methodology
is given in the next section.
3. PROPOSED MODEL
The proposed method comprises four phases: preprocessing for gray level
conversion and dividing the image into 50x50 blocks, extraction of wavelet energy features
from every 50x50 block, classification of blocks into text and non-text categories, and
merging of text blocks to detect text regions. The block schematic diagram of the
proposed model is given in Fig. 1. A detailed description of each phase is presented
in the following subsections.
Fig. 1. Block Diagram of the Proposed Model (input: low resolution natural scene image f(x,y); output: text regions)
3.1. Preprocessing and dividing image into 50x50 blocks
Initially, the input image is preprocessed for gray level conversion and resized to a
constant size of 300x300 pixels. The resized image is then divided into 50x50 blocks, and the
Discrete Wavelet Transform (DWT) is applied to each block to obtain wavelet energy features.
There are several benefits to using larger image blocks for extracting energy features:
larger blocks cover more detail, so the extracted features give sufficient information for
correct classification of blocks into text and non-text categories; they also bring robustness
and insensitivity to variation in size, font and alignment.
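The preprocessing step above can be sketched as follows. This is a minimal Python sketch: the luma grayscale weights and the nearest-neighbour resize are illustrative stand-ins, since the paper does not specify which conversion and resizing routines were used.

```python
import numpy as np

def preprocess_and_split(img, size=300, bs=50):
    """Gray conversion, resize to size x size, and division into bs x bs blocks."""
    img = np.asarray(img, dtype=float)
    if img.ndim == 3:                        # RGB -> gray via standard luma weights
        img = img @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to a constant size x size image.
    rows = np.arange(size) * img.shape[0] // size
    cols = np.arange(size) * img.shape[1] // size
    img = img[np.ix_(rows, cols)]
    # Non-overlapping bs x bs blocks, each tagged with (rmin, rmax, cmin, cmax).
    return [((i, i + bs - 1, j, j + bs - 1), img[i:i + bs, j:j + bs])
            for i in range(0, size, bs)
            for j in range(0, size, bs)]
```

A 240x320 input image therefore yields 36 blocks of 50x50 pixels after resizing, matching the 3600 blocks reported for the 100 test images.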
3.2. Feature Extraction
In this phase, the wavelet energy features are obtained from every 50x50 block of the
processed image at two resolution levels. In total, 6 features are extracted from every block
and stored in a feature vector Xi (the subscript i denotes the ith block). The feature vector
Xi also records the block coordinates, which correspond to the minimum and maximum row and
column numbers of the block. The feature vectors of all N blocks are combined to form a feature
matrix D as depicted in equation (1). The feature vector is described in equation (2).

D = [X1, X2, X3, ..., XN]T (1)

Xi = [rmin, rmax, cmin, cmax, fj1, fj2, fj3]; 1 <= j <= 2 (2)
fj1 = (∑ Hj(x, y)) / (M x N) (3)
fj2 = (∑ Vj(x, y)) / (M x N) (4)
fj3 = (∑ Dj(x, y)) / (M x N) (5)

Where,
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith block in terms of minimum and maximum row and column numbers.
• fj1, fj2, fj3 are the wavelet energy features of the corresponding detail bands/coefficients at the jth resolution level.
• Hj, Vj and Dj correspond to the detail coefficients/bands of the wavelet transform.
• M x N is the size of the detail band at the corresponding resolution level.
The features of all image blocks are further used to identify potential text blocks as
detailed in the classification phase.
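Assuming a Haar mother wavelet (the paper does not name one), the two-level feature extraction could look like the sketch below. Eqs. (3)-(5) write a plain sum over each detail band; it is read here as a sum of absolute coefficients so that the resulting energies are non-negative.

```python
import numpy as np

def haar_dwt2(a):
    """One level of a 2-D Haar transform: approximation cA and details (cH, cV, cD)."""
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]  # trim odd dims (e.g. 25 -> 24)
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    cA = (tl + tr + bl + br) / 2.0
    cH = (tl + tr - bl - br) / 2.0
    cV = (tl - tr + bl - br) / 2.0
    cD = (tl - tr - bl + br) / 2.0
    return cA, (cH, cV, cD)

def wavelet_energy_features(block, levels=2):
    """fj1, fj2, fj3 of Eqs. (3)-(5) for j = 1..levels: one energy per detail band,
    normalized by the detail-band size M x N at that level (6 features for levels=2)."""
    feats, a = [], np.asarray(block, dtype=float)
    for _ in range(levels):
        a, (cH, cV, cD) = haar_dwt2(a)
        mn = cH.size                          # M x N of this resolution level
        feats += [np.abs(d).sum() / mn for d in (cH, cV, cD)]
    return feats
```

A uniform block produces zero detail energy at both levels, while a block containing a strong intensity edge (as text strokes do) produces large detail energies, which is what the discriminant functions in the next phase exploit.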
3.3. Classification
The classification phase uses discriminant functions to categorize every block into one of two
classes, w1 and w2, where w1 corresponds to the text block category and w2 to the non-text
block category. The discriminant functions threshold the wavelet energy features at both
decomposition levels to categorize the image blocks. The coordinates rmin, rmax, cmin,
cmax, which correspond to the minimum and maximum row and column numbers of every
classified text block Ci, are then stored in a new vector B as shown in equation (6); these are
later used during the merging process to obtain text regions.

B = [Ci, i = 1,N1] (6)

Ci = [rmin, rmax, cmin, cmax] (7)

Where,
• Ci corresponds to the ith text block.
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith block in terms of minimum and maximum row and column numbers.
• N1 is the number of text blocks.
After this phase, the classified text blocks are subjected to the merging process described
in the next section.
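The paper does not publish its threshold values, so the discriminant functions can only be sketched; the per-band thresholds below are hypothetical placeholders that would in practice be tuned on training blocks.

```python
def classify_blocks(blocks_with_feats, thresholds=(0.5,) * 6):
    """Threshold the six wavelet energy features at both decomposition levels.
    A block is assigned to class w1 (text) only when every energy exceeds its
    (hypothetical) threshold; the block's coordinates are collected into vector B."""
    B = []
    for (rmin, rmax, cmin, cmax), feats in blocks_with_feats:
        if all(f > t for f, t in zip(feats, thresholds)):
            B.append((rmin, rmax, cmin, cmax))   # Ci of Eq. (7)
    return B                                      # vector B of Eq. (6)
```

The intuition is that text blocks carry strong detail energy in all bands at both levels, while smooth background blocks fall below the thresholds.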
3.4. Merging of text blocks to detect text regions
The merging process combines the potential text blocks Ci, connected in rows and
columns, to obtain new text regions ri, whose coordinates are recorded in vector R as
depicted in equation (8).

R = [ri, i = 1,W] (8)

ri = [rmin, rmax, cmin, cmax] (9)

where,
• ri corresponds to the ith text region.
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith text region.
• W is the number of text regions.
The merging procedure is described in Algorithm 1.

Algorithm 1
Input: Vector B, which contains the coordinates of the identified text blocks
Output: Vector R, which records the text regions
Begin
1. Choose the first block Cs from vector B.
2. Initialize the coordinates of a new text region ri to the coordinates of block Cs.
3. Select the next block Cp from vector B.
4. If block Cp is connected to ri in a row or column, merge it and update the coordinates of ri:
   rmin = min{ri[rmin], Cp[rmin]}   rmax = max{ri[rmax], Cp[rmax]}
   cmin = min{ri[cmin], Cp[cmin]}   cmax = max{ri[cmax], Cp[cmax]}
   Otherwise, store text region ri into vector R and initialize a new text region ri with the coordinates of the current block Cp.
5. Repeat steps 3-5 until p = N1; finally, store the last open region ri into vector R.
End
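Algorithm 1 can be rendered as a short Python sketch. The connectivity test used here (bounding boxes that touch or overlap) is one plausible reading of "connected in rows and columns", which the paper does not define precisely.

```python
def connected(r, c):
    """True when two boxes (rmin, rmax, cmin, cmax) touch or overlap;
    adjacent 50x50 blocks share a boundary, so no extra gap is allowed for."""
    return not (c[0] > r[1] + 1 or c[1] < r[0] - 1 or
                c[2] > r[3] + 1 or c[3] < r[2] - 1)

def merge_text_blocks(B):
    """Algorithm 1: fold the text blocks of vector B into text regions R."""
    R = []
    if not B:
        return R
    ri = list(B[0])                    # steps 1-2: open a region on the first block
    for Cp in B[1:]:                   # step 3: scan the remaining blocks
        if connected(ri, Cp):          # step 4: grow the open region
            ri = [min(ri[0], Cp[0]), max(ri[1], Cp[1]),
                  min(ri[2], Cp[2]), max(ri[3], Cp[3])]
        else:                          # otherwise close it and open a new one
            R.append(tuple(ri))
            ri = list(Cp)
    R.append(tuple(ri))                # the last open region must also be stored
    return R
```

For example, two horizontally adjacent 50x50 text blocks fold into a single 50x100 region, while a distant block starts a region of its own.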
4. EXPERIMENTAL RESULTS AND ANALYSIS
The proposed methodology for text region detection and extraction has been
evaluated on 100 indoor and outdoor low resolution natural scene images (comprising 3600
blocks of size 50x50) with complex backgrounds. The experimental tests were conducted
mostly on images containing Kannada text, with a few containing English text, and the results
were highly encouraging. The experimental results of processing several images dealing with
various issues, and the overall performance of the system, are given below.
Fig.2(a) Input Image Fig.2(b) Extracted Text Region
Fig.3(a) Input Image Fig.3(b) Extracted Text Region
Fig.4 (a) Input Image Fig.4 (b) Extracted Text Region
Fig.5(a) Input Image Fig.5(b) Extracted Text Region
Fig.6(a) Input Image Fig.6(b) Extracted Text Region
Fig.7(a) Input Image Fig.7(b) Extracted Text Region
Fig.8(a) Input Image Fig.8(b) Extracted Text Region
Fig.9(a) Input Image Fig.9(b) Extracted Text Region
Fig.10(a) Input Image Fig.10(b) Extracted Text Region
Fig.11(a) Input Image Fig.11(b) Extracted Text Region
The proposed methodology has produced good results for natural scene images
containing text of different sizes, fonts and alignments with varying backgrounds. The approach
also detects nonlinear text regions. The proposed method is robust: an overall detection rate
of 97% and a false reject rate of 3% are obtained for the 100 low resolution
display board natural scene images. The method is advantageous as it uses only wavelet
texture features for text extraction. The false rejects arise from the low contrast energy
of blocks containing only a minute part of the text, which is too weak for the blocks to be
accepted as text blocks. However, the method detects a region larger than the actual text
region when display board images with more complex backgrounds containing trees,
buildings and vehicles are tested.
As the texture features used in the method do not capture language dependent
information, the method can be extended for text localization in images of other
languages with little modification. To explore such possibilities, the performance of the
method has been tested for localizing English text without any modification, but
thorough experimentation has not been carried out on various images containing English and
other language text. Different block sizes can also be experimented with to improve the
detection accuracy and reduce the false acceptance rate. The overall performance of the system
on testing 100 low resolution natural scene display board images dealing with various issues
is given in Table 1.
TABLE 1: Overall System Performance
Total no. of blocks tested (# of text blocks): 3600 (1211)
Correctly detected text blocks: 1175
Falsely detected text blocks (FAR): 19 (1.5%)
Missed text blocks (FRR): 36 (3%)
Detection rate: 97%
5. CONCLUSION
This paper has presented an effective method that performs texture analysis for text localization
in low resolution natural scene images. The wavelet texture features have
performed well in the detection and segmentation of text regions and are an ideal choice for
degraded, noisy natural scene images, where connected component analysis techniques are
found to be inefficient. The proposed method is robust and has achieved a detection rate of
97% on a variety of 100 low resolution natural scene images, each of size 240x320.
The proposed methodology has produced good results for natural scene images
containing text of different sizes, fonts and alignments with varying backgrounds. The approach
also detects nonlinear text regions. However, for some images it detects text regions larger than
the actual size when the background is more complex, containing trees, vehicles and other
details of outdoor scenes.
As the texture features do not capture language dependent information, the method
can be extended for text localization in images of other languages with little
modification. The performance of the method has been tested for localizing English text, but
needs further exploration.
REFERENCES
[1] Tollmar K. Yeh T. and Darrell T. “IDeixis - Image-Based Deixis for Finding
Location-Based Information”, In Proceedings of Conference on Human Factors in
Computing Systems (CHI’04), pp.781-782, 2004.
[2] Natalia Marmasse and Chris Schamandt. “Location aware information delivery with
comMotion”, In Proceedings of Conference on Human Factors in Computing
Systems, pp.157-171, 2000.
[3] Eve Bertucci, Maurizio Pilu and Majid Mirmehdi. "Text Selection by Structured Light
Marking for Hand-held Cameras" Seventh International Conference on Document
Analysis and Recognition (ICDAR'03), pp.555-559, August 2003.
[4] Tom Yeh, Kristen Grauman, and K. Tollmar. “A picture is worth a thousand
keywords: image-based object search on a mobile platform”, In Proceedings of
Conference on Human Factors in Computing Systems, pp.2025-2028, 2005.
[5] Abowd Gregory D., Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and
Mike Pinkerton. “CyberGuide: A mobile context-aware tour guide”, Wireless
Networks, 3(5):421-433, 1997.
[6] Fan X., Xie X., Li Z., Li M. and Ma. “Photo-to-search: using multimodal queries to
search the web from mobile phones”, In Proceedings of 7th ACM SIGMM International
Workshop on Multimedia Information Retrieval, 2005.
[7] Lim Joo Hwee, Jean Pierre Chevallet and Sihem Nouarah Merah, “SnapToTell:
Ubiquitous information access from camera”, Mobile human computer interaction
with mobile devices and services, Glasgow, Scotland, 2005.
[8] Yu Zhong, Hongjiang Zhang, and Anil. K. Jain. “Automatic caption localization in
compressed video”, IEEE transactions on Pattern Analysis and Machine Intelligence,
22(4):385-389, April 2000.
[9] Yu Zhong, Kalle Karu, and Anil. K. Jain. “Locating Text in Complex Color Images”,
Pattern Recognition, 28(10):1523-1535, 1995.
[10] S. H. Park, K. I. Kim, K. Jung, and H. J. Kim. “Locating Car License Plates using
Neural Networks”, IEE Electronics Letters, 35(17):1475-1477, 1999.
[11] V. Wu, R. Manmatha, and E. M. Riseman. “TextFinder: An Automatic System to
Detect and Recognize Text in Images”, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 21(11):1224-1229, 1999.
[12] V. Wu, R. Manmatha, and E. R. Riseman. “Finding Text in Images”, In Proceedings
of ACM International Conference on Digital Libraries, pp. 1-10, 1997.
[13] B. Sin, S. Kim, and B. Cho. “Locating Characters in Scene Images using Frequency
Features”, In proceedings of International Conference on Pattern Recognition,
Vol.3:489-492, 2002.
[14] W. Mao, F. Chung, K. Lam, and W. Siu. “Hybrid Chinese/English Text Detection in
Images and Video Frames”, In Proceedings of International Conference on Pattern
Recognition, Vol.3:1015-1018, 2002.
[15] A. K. Jain, and K. Karu. “Learning Texture Discrimination Masks”, IEEE
Transactions on Pattern Analysis and Machine Intelligence, 18(2):195-205, 1996.
[16] K. Jung. “Neural network-based Text Location in Color Images”, Pattern Recognition
Letters, 22(14):1503-1515, December 2001.
[17] K. Y. Jeong, K. Jung, E. Y. Kim and H. J. Kim. “Neural Network-based Text
Location for News Video Indexing”, In Proceedings of IEEE International
Conference on Image Processing, Vol.3:319-323, January 2000.
[18] H. Li, D. Doermann and O. Kia. “Automatic Text Detection and Tracking in Digital
Video”, IEEE Transactions on Image Processing, 9(1):147-156, January 2000.
[19] H. Li and D. Doermann. “A Video Text Detection System based on Automated
Training”, In Proceedings of IEEE International Conference on Pattern Recognition,
pp.223-226, 2000.
[20] S. A. Angadi and M. M. Kodabagi. “Texture based methodology for text region
extraction from low resolution natural scene images”, International Journal of Image
Processing (IJIP), 3(5):229-245, September-October 2009.
[21] S. A. Angadi and M. M. Kodabagi. “Texture region extraction from low resolution
natural scene images using texture features”, 2nd International Advanced Computing
Conference, pp.121-128, 2010.
[22] W. Y. Liu, and D. Dori. “A Proposed Scheme for Performance Evaluation of
Graphics/Text Separation Algorithm”, Graphics Recognition – Algorithms and
Systems, K. Tombre and A. Chhabra (eds.), Lecture Notes in Computer Science,
Vol.1389:359-371, 1998.
[23] Y. Watanabe, Y. Okada, Y. B. Kim and T. Takeda. ”Translation Camera”, In
Proceedings of International Conference on Pattern Recognition, Vol.1: 613-617,
1998.
[24] I. Haritaoglu. “Scene Text Extraction and Translation for Handheld Devices”, In
Proceedings IEEE Conference on Computer Vision and Pattern Recognition,
Vol.2:408-413, 2001.
[25] K. Jung, K. Kim, T. Kurata, M. Kourogi, and J. Han. “Text Scanner with Text
Detection Technology on Image Sequence”, In Proceedings of International
Conference on Pattern Recognition, Vol.3:473-476, 2002.
[26] H. Hase, T. Shinokawa, M. Yoneda, and C. Y. Suen. “Character String Extraction
from Color Documents”, Pattern Recognition, 34(7):1349-1365, 2001.
[27] Celine Thillou, Silvio Ferreira and Bernard Gosselin. “An embedded application for
degraded text recognition”, EURASIP Journal on Applied Signal Processing,
1(1):2127-2135, 2005.
[28] Nobuo Ezaki, Marius Bulacu and Lambert Schomaker. “Text Detection from Natural
Scene Images: Towards a System for Visually Impaired Persons”, In Proceedings of
17th International Conference on Pattern Recognition (ICPR’04), IEEE Computer
Society vol.2:683-686, 2004.
[29] Chitrakala Gopalan and Manjula. “Text Region Segmentation from Heterogeneous
Images”, International Journal of Computer Science and Network Security,
8(10):108-113, October 2008.
[30] Rui Wu, Jianhua Huang, Xianglong Tang and Jiafeng Liu. ” A Text Image
Segmentation Method Based on Spectral Clustering”, IEEE Sixth Indian Conference
on Computer Vision, Graphics & Image Processing, 1(4): 9-15, 2008.
[31] Mohammad Osiur Rahman, Fouzia Asharf Mousumi, Edgar Scavino, Aini Hussain,
and Hassan Basri. ”Real Time Road Sign Recognition System Using Artificial Neural
Networks for Bengali Textual Information Box”, European Journal of Scientific
Research, 25(3):478-487, 2009.
[32] Rajiv K. Sharma and Amardeep Singh. “Segmentation of Handwritten Text in
Gurmukhi Script”, International Journal of Image Processing, 2(3):13-17, May/June
2008.
[33] Amjad Rehman Khan and Zulkifli Mohammad. ”A Simple Segmentation Approach
for Unconstrained Cursive Handwritten Words in Conjunction with the Neural
Network”, International Journal of Image Processing, 2(3):29-35, May/June 2008.
[34] Amjad Rehman Khan and Zulkifli Mohammad. ”Text Localization in Low Resolution
Camera”, International Journal of Image Processing, 1(2):78-86, 2007.
[35] R. M. Haralick, K. Shanmugam and I. Dinstein. “Textural Features for Image
Classification”, IEEE Transactions on Systems, Man, and Cybernetics, 3(6):610-621,
1973.

A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...
A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...ijscai
 
IRJET - Facial In-Painting using Deep Learning in Machine Learning
IRJET -  	  Facial In-Painting using Deep Learning in Machine LearningIRJET -  	  Facial In-Painting using Deep Learning in Machine Learning
IRJET - Facial In-Painting using Deep Learning in Machine LearningIRJET Journal
 
Appearance based static hand gesture alphabet recognition
Appearance based static hand gesture alphabet recognitionAppearance based static hand gesture alphabet recognition
Appearance based static hand gesture alphabet recognitionIAEME Publication
 
Multilabel Image Annotation using Multimodal Analysis
Multilabel Image Annotation using Multimodal AnalysisMultilabel Image Annotation using Multimodal Analysis
Multilabel Image Annotation using Multimodal Analysisijtsrd
 

La actualidad más candente (16)

Text Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR TransformText Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR Transform
 
Improving of Fingerprint Segmentation Images Based on K-MEANS and DBSCAN Clus...
Improving of Fingerprint Segmentation Images Based on K-MEANS and DBSCAN Clus...Improving of Fingerprint Segmentation Images Based on K-MEANS and DBSCAN Clus...
Improving of Fingerprint Segmentation Images Based on K-MEANS and DBSCAN Clus...
 
Anatomical Survey Based Feature Vector for Text Pattern Detection
Anatomical Survey Based Feature Vector for Text Pattern DetectionAnatomical Survey Based Feature Vector for Text Pattern Detection
Anatomical Survey Based Feature Vector for Text Pattern Detection
 
Research on object detection and recognition using machine learning algorithm...
Research on object detection and recognition using machine learning algorithm...Research on object detection and recognition using machine learning algorithm...
Research on object detection and recognition using machine learning algorithm...
 
Ijarcet vol-2-issue-4-1383-1388
Ijarcet vol-2-issue-4-1383-1388Ijarcet vol-2-issue-4-1383-1388
Ijarcet vol-2-issue-4-1383-1388
 
IRJET-MText Extraction from Images using Convolutional Neural Network
IRJET-MText Extraction from Images using Convolutional Neural NetworkIRJET-MText Extraction from Images using Convolutional Neural Network
IRJET-MText Extraction from Images using Convolutional Neural Network
 
50120130405017 2
50120130405017 250120130405017 2
50120130405017 2
 
Estimation of 3d Visualization for Medical Machinary Images
Estimation of 3d Visualization for Medical Machinary ImagesEstimation of 3d Visualization for Medical Machinary Images
Estimation of 3d Visualization for Medical Machinary Images
 
Relevance feedback a novel method to associate user subjectivity to image
Relevance feedback a novel method to associate user subjectivity to imageRelevance feedback a novel method to associate user subjectivity to image
Relevance feedback a novel method to associate user subjectivity to image
 
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATION
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATIONMULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATION
MULTI-LEVEL FEATURE FUSION BASED TRANSFER LEARNING FOR PERSON RE-IDENTIFICATION
 
A Method to Provide Accessibility for Visual Components to Vision Impaired
A Method to Provide Accessibility for Visual Components to Vision ImpairedA Method to Provide Accessibility for Visual Components to Vision Impaired
A Method to Provide Accessibility for Visual Components to Vision Impaired
 
Object extraction using edge, motion and saliency information from videos
Object extraction using edge, motion and saliency information from videosObject extraction using edge, motion and saliency information from videos
Object extraction using edge, motion and saliency information from videos
 
A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...
A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...
A UTILIZATION OF CONVOLUTIONAL MATRIX METHODS ON SLICED HIPPOCAMPAL NEURON RE...
 
IRJET - Facial In-Painting using Deep Learning in Machine Learning
IRJET -  	  Facial In-Painting using Deep Learning in Machine LearningIRJET -  	  Facial In-Painting using Deep Learning in Machine Learning
IRJET - Facial In-Painting using Deep Learning in Machine Learning
 
Appearance based static hand gesture alphabet recognition
Appearance based static hand gesture alphabet recognitionAppearance based static hand gesture alphabet recognition
Appearance based static hand gesture alphabet recognition
 
Multilabel Image Annotation using Multimodal Analysis
Multilabel Image Annotation using Multimodal AnalysisMultilabel Image Annotation using Multimodal Analysis
Multilabel Image Annotation using Multimodal Analysis
 

Destacado

Paula gonzález méndez, francés pintores
Paula gonzález méndez, francés pintoresPaula gonzález méndez, francés pintores
Paula gonzález méndez, francés pintorespacitina
 
6 directive to submit qtrwise_implementation plan
6 directive to submit qtrwise_implementation plan6 directive to submit qtrwise_implementation plan
6 directive to submit qtrwise_implementation planradha2013
 
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...I3E Technologies
 
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...I3E Technologies
 
Amr Mahrous Abd El - ENGLISH
Amr Mahrous Abd El - ENGLISHAmr Mahrous Abd El - ENGLISH
Amr Mahrous Abd El - ENGLISHamr dahroug
 
bidang garapan kesiswaan
bidang garapan kesiswaanbidang garapan kesiswaan
bidang garapan kesiswaanizza nur baiti
 
Presentation supervisi pendidikan
Presentation supervisi pendidikanPresentation supervisi pendidikan
Presentation supervisi pendidikanaan agung prasetyo
 
Google Glass app for colorblind individuals and people with impaired vision
Google Glass app for colorblind individuals and people with impaired visionGoogle Glass app for colorblind individuals and people with impaired vision
Google Glass app for colorblind individuals and people with impaired visionEducational Technology
 

Destacado (13)

2012 03-31 21.53.35
2012 03-31 21.53.352012 03-31 21.53.35
2012 03-31 21.53.35
 
Paula gonzález méndez, francés pintores
Paula gonzález méndez, francés pintoresPaula gonzález méndez, francés pintores
Paula gonzález méndez, francés pintores
 
6 directive to submit qtrwise_implementation plan
6 directive to submit qtrwise_implementation plan6 directive to submit qtrwise_implementation plan
6 directive to submit qtrwise_implementation plan
 
ABSC Presentation Eng Version
ABSC Presentation Eng VersionABSC Presentation Eng Version
ABSC Presentation Eng Version
 
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...
PANDA: PUBLIC AUDITING FOR SHARED DATA WITH EFFICIENT USER REVOCATION IN THE ...
 
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...
STATE-CLUSTERING BASED MULTIPLE DEEP NEURAL NETWORKS MODELING APPROACH FOR SP...
 
Amr Mahrous Abd El - ENGLISH
Amr Mahrous Abd El - ENGLISHAmr Mahrous Abd El - ENGLISH
Amr Mahrous Abd El - ENGLISH
 
Niños indigos
Niños indigosNiños indigos
Niños indigos
 
La Coste Clun
La Coste ClunLa Coste Clun
La Coste Clun
 
災害應變參考程序手冊
災害應變參考程序手冊災害應變參考程序手冊
災害應變參考程序手冊
 
bidang garapan kesiswaan
bidang garapan kesiswaanbidang garapan kesiswaan
bidang garapan kesiswaan
 
Presentation supervisi pendidikan
Presentation supervisi pendidikanPresentation supervisi pendidikan
Presentation supervisi pendidikan
 
Google Glass app for colorblind individuals and people with impaired vision
Google Glass app for colorblind individuals and people with impaired visionGoogle Glass app for colorblind individuals and people with impaired vision
Google Glass app for colorblind individuals and people with impaired vision
 

Similar a Text region extraction from low resolution display board ima

Character recognition of kannada text in scene images using neural
Character recognition of kannada text in scene images using neuralCharacter recognition of kannada text in scene images using neural
Character recognition of kannada text in scene images using neuralIAEME Publication
 
Product Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleProduct Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleIRJET Journal
 
IRJET- Image to Text Conversion using Tesseract
IRJET-  	  Image to Text Conversion using TesseractIRJET-  	  Image to Text Conversion using Tesseract
IRJET- Image to Text Conversion using TesseractIRJET Journal
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...IRJET Journal
 
IRJET - A Review on Text Recognition for Visually Blind People
IRJET - A Review on Text Recognition for Visually Blind PeopleIRJET - A Review on Text Recognition for Visually Blind People
IRJET - A Review on Text Recognition for Visually Blind PeopleIRJET Journal
 
IRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET Journal
 
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...Editor IJMTER
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsVivek Chamorshikar
 
A Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosA Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosCSCJournals
 
A Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosA Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosCSCJournals
 
Image Based Tool for Level 1 and Level 2 Autistic People
Image Based Tool for Level 1 and Level 2 Autistic PeopleImage Based Tool for Level 1 and Level 2 Autistic People
Image Based Tool for Level 1 and Level 2 Autistic PeopleIRJET Journal
 
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdf
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdfErdas Imagine Tool Geographic imaging professionals use specialized software.pdf
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdfbkbk37
 
Autonomous Driving Scene Parsing
Autonomous Driving Scene ParsingAutonomous Driving Scene Parsing
Autonomous Driving Scene ParsingIRJET Journal
 
IRJET- Detection and Recognition of Hypertexts in Imagery using Text Reco...
IRJET-  	  Detection and Recognition of Hypertexts in Imagery using Text Reco...IRJET-  	  Detection and Recognition of Hypertexts in Imagery using Text Reco...
IRJET- Detection and Recognition of Hypertexts in Imagery using Text Reco...IRJET Journal
 

Similar a Text region extraction from low resolution display board ima (20)

Character recognition of kannada text in scene images using neural
Character recognition of kannada text in scene images using neuralCharacter recognition of kannada text in scene images using neural
Character recognition of kannada text in scene images using neural
 
Product Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleProduct Label Reading System for visually challenged people
Product Label Reading System for visually challenged people
 
IRJET- Image to Text Conversion using Tesseract
IRJET-  	  Image to Text Conversion using TesseractIRJET-  	  Image to Text Conversion using Tesseract
IRJET- Image to Text Conversion using Tesseract
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
50320140502001 2
50320140502001 250320140502001 2
50320140502001 2
 
50320140502001
5032014050200150320140502001
50320140502001
 
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
 
IRJET - A Review on Text Recognition for Visually Blind People
IRJET - A Review on Text Recognition for Visually Blind PeopleIRJET - A Review on Text Recognition for Visually Blind People
IRJET - A Review on Text Recognition for Visually Blind People
 
IRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture Recognition
 
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...
CONTENT RECOVERY AND IMAGE RETRIVAL IN IMAGE DATABASE CONTENT RETRIVING IN TE...
 
Cartographic systems visualization in mobile devices: issues, approaches and ...
Cartographic systems visualization in mobile devices: issues, approaches and ...Cartographic systems visualization in mobile devices: issues, approaches and ...
Cartographic systems visualization in mobile devices: issues, approaches and ...
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind persons
 
40120140501009
4012014050100940120140501009
40120140501009
 
A Robust Embedded Based String Recognition for Visually Impaired People
A Robust Embedded Based String Recognition for Visually Impaired PeopleA Robust Embedded Based String Recognition for Visually Impaired People
A Robust Embedded Based String Recognition for Visually Impaired People
 
A Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosA Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In Videos
 
A Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In VideosA Survey On Thresholding Operators of Text Extraction In Videos
A Survey On Thresholding Operators of Text Extraction In Videos
 
Image Based Tool for Level 1 and Level 2 Autistic People
Image Based Tool for Level 1 and Level 2 Autistic PeopleImage Based Tool for Level 1 and Level 2 Autistic People
Image Based Tool for Level 1 and Level 2 Autistic People
 
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdf
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdfErdas Imagine Tool Geographic imaging professionals use specialized software.pdf
Erdas Imagine Tool Geographic imaging professionals use specialized software.pdf
 
Autonomous Driving Scene Parsing
Autonomous Driving Scene ParsingAutonomous Driving Scene Parsing
Autonomous Driving Scene Parsing
 
IRJET- Detection and Recognition of Hypertexts in Imagery using Text Reco...
IRJET-  	  Detection and Recognition of Hypertexts in Imagery using Text Reco...IRJET-  	  Detection and Recognition of Hypertexts in Imagery using Text Reco...
IRJET- Detection and Recognition of Hypertexts in Imagery using Text Reco...
 

Más de IAEME Publication

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME Publication
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEIAEME Publication
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
 

Más de IAEME Publication (20)

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdf
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
Text region extraction from low resolution display board images

1. INTRODUCTION

As people move across the world for business, field work and pleasure, they find it difficult to understand the text written on display boards in a foreign environment. In such a scenario, people look either for guides or for intelligent devices that can translate such information into their native language. As most individuals carry camera-embedded, hand-held devices such as mobile phones and PDAs, technological solutions can be integrated into such systems in order to automatically understand display boards in a foreign environment. These facilities may be provided as an integrated solution through a web service supplying the computing functions that hand-held systems lack. Such web-based hand-held systems must be able to capture natural scene images containing display boards and query the web service to retrieve translated, localized information about the text written on them. The written matter on display/name boards provides information necessary for the needs and safety of people, and may be written in unknown languages; it can include street names, restaurant names, building names, company names, traffic directions, warning signs etc. Hence, considerable commercial and academic interest is directed towards techniques for web service based hand-held systems useful in understanding written text on display boards. Recent years have seen a spurt of activity in the development of web-based intelligent hand-held tour guide systems, blind assistants that read written text, and location-aware computing systems. A few such works are presented in the following, and a more elaborate survey of related works is given in the next section.

A point-by-photograph paradigm, where users specify an object by simply taking its picture to retrieve matching images from the web, is found in [1]. comMotion is a location-aware hand-held system that links personal information to locations; for example, it reminds users about their shopping list when they near a shopping mall [2]. At Hewlett-Packard (HP), mobile Optical Character Reading (OCR) applications were developed to retrieve information related to a text image captured through a pen-size camera [3]. Mobile phone image matching and retrieval has been used by insurance and trading firms for remote item appraisal and verification against a central database [4]. Image matching and retrieval applications cannot be embedded in hand-held devices such as mobile phones due to their limited computing resources; hence such services are being developed as web services. Researchers have also worked on web-based intelligent hand-held tour guide systems. CyberGuide [5] is an intelligent hand-held tour guide system which provides information based on the user's location; it continuously monitors the user's location using the Global Positioning System (GPS) and provides new information at the right time. Museums could provide these tour guides to visitors, allowing them to take personalized tours observing any displayed object; as visitors move across museum floors, information about the location is pushed to the hand-held tour guides. Research prototypes used to search for information about an object image captured by cameras embedded in mobile phones are described in [6-7]. The state-of-the-art hand-held systems available across the world are not automated for understanding written text on display boards in a foreign environment, and scope exists for exploring such possibilities through automation of hand-held systems.

One of the most important processing steps in the development of such systems is the automatic detection and extraction of text regions from low resolution natural scene images prior to further analysis. The written text provides important information, and it is not an easy problem to reliably detect and localize text embedded in natural scene images [8]. The size of the characters can
vary from very small to very big. The font of the text can differ, text in the image may have multiple colors, and the text may appear in different orientations against a complex background. Moreover, the captured textual and other information is affected by significant degradations such as perspective distortion, blur, shadow and uneven lighting. Hence, automatic detection and segmentation of text is a difficult and challenging problem.

Reported works have identified a number of approaches for text localization in natural scene images. Existing approaches are categorized as connected component based, edge based and texture based methods. Connected component based methods use a bottom-up approach to group smaller components into larger ones until all regions in the image are identified; a geometrical analysis is later needed to identify the text components and group them to localize text regions. Edge based methods focus on the high contrast between the text and the background: the edges of the text boundary are identified and merged, and several heuristics are then required to filter out non-text regions. However, the presence of noise, complex background, and significant degradation in low resolution natural scene images can hamper the extraction of connected components and the identification of boundary lines, making both approaches inefficient. Texture analysis techniques are a good choice for solving such a problem, as they give a global measure of the properties of a region. In this paper, a new model for text extraction from low resolution display board images using wavelet features is presented. The proposed work is texture based and uses wavelet features for text extraction.
The wavelet energy features are obtained at 2 resolution levels on every 50x50 block of the image, and potential text blocks are identified using newly defined discriminant functions. The detected text blocks are then merged to extract text regions. The proposed method is robust and achieves a detection rate of 97% on a variety of natural scene images.

The rest of the paper is organized as follows: related work is discussed in Section II, the proposed model is presented in Section III, experimental results are given in Section IV, and Section V concludes the work and lists future directions.

2. RELATED WORKS

Web-based hand-held systems useful in understanding display boards require analysis of natural scene images to extract text regions for further processing. A number of methods for text localization have been published in recent years; they are categorized into connected component based, edge based and texture based methods. The performance of methods in the first two categories is found to be inefficient and computationally expensive for low resolution natural scene images due to the presence of noise, complex background and significant degradation. Hence, techniques based on texture analysis have become a good choice for image analysis, and texture analysis is further investigated in the proposed work. A few state-of-the-art approaches that use texture features for text localization are summarized here.

The use of a horizontal window of size 1x21 (mask size) to compute the spatial variance for identification of edges in an image, which are further used to locate the boundaries of a text line, is proposed in [9]. However, the approach only detects horizontal components with a large variation compared to the background, and a processing time of 6.6 seconds for 256x256 images on a SPARCstation 20 is reported. A vehicle license plate localization method that uses similar criteria is presented in [10]. It uses time delay neural networks (TDNNs) as a texture discriminator in the HSI color space to decide whether a window of an image contains a license plate number. The detected windows are
later merged for extracting license plates. Multi-scale texture segmentation schemes are presented in [11-12]. These methods detect potential text regions based on nine second-order Gaussian derivatives and are evaluated on different images including video frames, newspapers, magazines and envelopes. The approach is insensitive to image resolution, tends to miss very small text, and gives a localization rate of 90%. A methodology that uses frequency features, such as the number of edge pixels in the horizontal and vertical directions and the Fourier spectrum, to detect text regions in real scene images is discussed in [13]. A texture-based text localization method using the wavelet transform is proposed in [14]. Techniques for text extraction in complex color images, where a neural network is employed to train a set of texture discrimination masks that minimize the classification error for the two texture classes (text regions and non-text regions), are reported in [16-17]. Learning-based methods for localizing text in documents and video are proposed in [15] and [18]. The method of [18] is evaluated on various video images; the text localization procedure required about 1 second to process a 352x240 image on a Sun workstation, and for text detection a precision rate of 91% and a recall rate of 92.8% are reported. This method was subsequently enhanced for skewed text in [19]. A work similar to the proposed method, which uses DCT coefficients to capture textural properties for caption localization, is presented in [8]. The authors claim that the method is very fast and gives a detection rate of 99.17%; however, precise localization results are not reported. A method that uses homogeneity and contrast textural features for text region detection and extraction is reported in [20-21]. In recent years, several approaches for sign detection, text detection and segmentation from natural images have also been reported [22-34].

From the many works cited in the literature it is generally agreed that the robustness of texture based methods depends on the texture features extracted from the window/block or region of interest, which are used by the discriminant functions for classification decisions. The probability of misclassification is directly related to the number and quality of details available in the texture features; hence, the extracted texture features must give sufficient information to distinguish text from the background, and there is scope to explore such possibilities. The detailed description of the proposed methodology is given in the next section.

3. PROPOSED MODEL

The proposed method comprises 4 phases: preprocessing for gray level conversion and dividing the image into 50x50 blocks; extraction of wavelet energy features from every 50x50 block; classification of blocks into text and non-text categories; and merging of text blocks to detect text regions. The block schematic diagram of the proposed model is given in Fig. 1. The detailed description of each phase is presented in the following subsections.
Fig. 1. Block Diagram of the Proposed Model (a low resolution natural scene image f(x,y) is preprocessed into g(x,y) by gray level conversion and divided into 50x50 blocks; texture features are extracted from every block, blocks are classified into text and non-text categories using discriminant functions, and text blocks are merged into text regions)

3.1. Preprocessing and dividing the image into 50x50 blocks

Initially, the input image is preprocessed for gray level conversion and resized to a constant size of 300x300 pixels. The resized image is then divided into 50x50 blocks, and the Discrete Wavelet Transform (DWT) is applied to each block to obtain wavelet energy features. There are several benefits of using larger image blocks for extracting energy features: larger blocks cover more detail, so the extracted features give sufficient information for correct classification of blocks into text and non-text categories, and they provide robustness and insensitivity to variations in size, font and alignment.

3.2. Feature Extraction

In this phase, the wavelet energy features are obtained from every 50x50 block of the processed image at 2 resolution levels. In total, 6 features are extracted from every block and stored in a feature vector Xi (the subscript i denotes the ith block). The feature vector Xi also records the block coordinates, i.e. the minimum and maximum row and column numbers of the block. The feature vectors of all N blocks are combined to form a feature matrix D as depicted in equation (1); the feature vector itself is described in equation (2).

D = [X1, X2, X3, ..., XN]T                                   (1)

Xi = [rmin, rmax, cmin, cmax, fj1, fj2, fj3];  1 <= j <= 2   (2)
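As a concrete illustration, the preprocessing and block-division phase (Section 3.1) can be sketched in pure Python as below. This is a minimal sketch under stated assumptions: the paper does not specify the grayscale conversion or resizing method, so the common ITU-R BT.601 luma weights and nearest-neighbour resampling are assumed, and `to_gray`, `resize_nearest` and `split_blocks` are hypothetical helper names.

```python
# Sketch of Section 3.1: gray level conversion, resizing to 300x300,
# and division into non-overlapping 50x50 blocks. The luma weights and
# nearest-neighbour resampling are assumptions, not the paper's choices.

def to_gray(rgb):
    """Convert an RGB image (rows of (r, g, b) tuples) to gray levels."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def resize_nearest(img, out_h=300, out_w=300):
    """Nearest-neighbour resize to a fixed out_h x out_w size."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def split_blocks(img, size=50):
    """Divide the image into size x size blocks, keeping each block's
    (rmin, rmax, cmin, cmax) coordinates for the later merging phase."""
    blocks = []
    for r in range(0, len(img) - size + 1, size):
        for c in range(0, len(img[0]) - size + 1, size):
            patch = [row[c:c + size] for row in img[r:r + size]]
            blocks.append(((r, r + size - 1, c, c + size - 1), patch))
    return blocks
```

Each 300x300 image yields 36 such blocks, consistent with the 3600 blocks reported for the 100 test images in Section 4.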
fj1 = (Σ Hj(x, y)) / (M x N)                                 (3)

fj2 = (Σ Vj(x, y)) / (M x N)                                 (4)

fj3 = (Σ Dj(x, y)) / (M x N)                                 (5)

Where,
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith block in terms of minimum and maximum row and column numbers.
• fj1, fj2, fj3 are the wavelet energy features of the corresponding detail bands at the jth resolution level.
• Hj, Vj and Dj correspond to the detail coefficients/bands of the wavelet transform.
• M x N is the size of the detail band at the corresponding resolution level.

The features of all image blocks are then used to identify potential text blocks, as detailed in the classification phase.

3.3. Classification

The classification phase uses a discriminant function to assign every block to one of two classes, w1 and w2, where w1 is the text block category and w2 the non-text block category. The discriminant function thresholds the wavelet energy features at both decomposition levels to categorize the image blocks. The coordinates rmin, rmax, cmin, cmax (minimum and maximum row and column numbers) of every classified text block Ci are then stored in a new vector B, as shown in equation (6); they are used later during the merging process to obtain text regions.

B = [Ci, i = 1, N1]                                          (6)

Ci = [rmin, rmax, cmin, cmax]                                (7)

Where,
• Ci corresponds to the ith text block.
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith block in terms of minimum and maximum row and column numbers.
• N1 is the number of text blocks.

After this phase, the classified text blocks are subjected to the merging process given in the next section.
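A minimal pure-Python sketch of the feature-extraction and classification phases (Sections 3.2 and 3.3) follows, assuming a Haar wavelet for the DWT and taking each band energy of equations (3)-(5) as the mean absolute detail coefficient. The paper does not publish its threshold values, so the cut-off `t` in `classify_block` is an illustrative placeholder, and all function names are hypothetical.

```python
# Sketch of Sections 3.2-3.3: a two-level 2-D Haar DWT per block, six
# energy features (H, V, D bands at levels j = 1, 2), and a threshold
# discriminant. Taking absolute coefficient values and the threshold
# t are assumptions made for this sketch.

def haar2d(block):
    """One 2-D Haar step: returns half-size (A, H, V, D) subbands."""
    h, w = len(block) // 2, len(block[0]) // 2
    A = [[0.0] * w for _ in range(h)]
    H = [[0.0] * w for _ in range(h)]
    V = [[0.0] * w for _ in range(h)]
    D = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = block[2 * i][2 * j], block[2 * i][2 * j + 1]
            c, d = block[2 * i + 1][2 * j], block[2 * i + 1][2 * j + 1]
            A[i][j] = (a + b + c + d) / 4.0   # approximation
            H[i][j] = (a + b - c - d) / 4.0   # horizontal detail
            V[i][j] = (a - b + c - d) / 4.0   # vertical detail
            D[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return A, H, V, D

def band_energy(band):
    """Band energy as in Eqs. (3)-(5): sum over the M x N band."""
    m, n = len(band), len(band[0])
    return sum(abs(v) for row in band for v in row) / (m * n)

def block_features(block, levels=2):
    """Six features per block: energies of H, V, D at each level."""
    feats, current = [], block
    for _ in range(levels):
        current, H, V, D = haar2d(current)
        feats += [band_energy(H), band_energy(V), band_energy(D)]
    return feats

def classify_block(feats, t=2.0):
    """Discriminant sketch: class w1 (text) when the detail energies
    at BOTH decomposition levels exceed the placeholder threshold t."""
    return max(feats[:3]) > t and max(feats[3:]) > t
```

A flat (textureless) block yields zero energies at both levels and falls in class w2, while a block with strong intensity variation at both scales exceeds the thresholds and falls in class w1.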
3.4. Merging of text blocks to detect text regions

The merging process combines the potential text blocks Ci that are connected in rows and columns to obtain new text regions ri, whose coordinates are recorded in a vector R as depicted in equation (8).

R = [ri, i = 1, W]                                           (8)
ri = [rmin, rmax, cmin, cmax]                                (9)

Where,
• ri corresponds to the ith text region.
• rmin, rmax, cmin, cmax correspond to the coordinates of the ith text region.
• W is the number of text regions.

The merging procedure is described in Algorithm 1.

Algorithm 1
Input: vector B, which contains the coordinates of the identified text blocks
Output: vector R, which records the text regions
Begin
1. Choose the first block Cs from vector B.
2. Initialize the coordinates of a new text region ri to the coordinates of block Cs.
3. Select the next block Cp from vector B.
4. If the block Cp is connected to ri in a row or column, then
       merge and update the coordinates rmin, rmax, cmin, cmax of region ri:
       rmin = min{ ri[rmin], Cp[rmin] }
       rmax = max{ ri[rmax], Cp[rmax] }
       cmin = min{ ri[cmin], Cp[cmin] }
       cmax = max{ ri[cmax], Cp[cmax] }
   else
       store text region ri in vector R, and initialize the coordinates of a
       new text region ri to the coordinates of the current block Cp.
5. Repeat steps 3-5 until p = N1.
End

4. EXPERIMENTAL RESULTS AND ANALYSIS

The proposed methodology for text region detection and extraction has been evaluated on 100 indoor and outdoor low resolution natural scene images (comprising 3600 blocks of size 50x50) with complex backgrounds. The experimental tests were conducted mostly on images containing Kannada text, with a few containing English text, and the results were highly encouraging. The experimental results of processing several images dealing with various issues, and the overall performance of the system, are given below.

Fig. 2(a) Input Image    Fig. 2(b) Extracted Text Region
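Returning to Section 3.4, the single-pass merging procedure of Algorithm 1 can be sketched as below. The adjacency test assumed here treats two bounding boxes as connected when they touch or overlap in both rows and columns; the paper does not formalize "connected", and `connected` and `merge_text_blocks` are illustrative names.

```python
# Sketch of Algorithm 1: merge text blocks C_i that are connected in
# rows or columns into text regions r_i, collected in vector R. Boxes
# are (rmin, rmax, cmin, cmax); the touch-or-overlap adjacency test is
# an assumption made for this sketch.

def connected(b1, b2):
    """True if two blocks touch or overlap (within one pixel)."""
    return (b1[0] <= b2[1] + 1 and b2[0] <= b1[1] + 1 and
            b1[2] <= b2[3] + 1 and b2[2] <= b1[3] + 1)

def merge_text_blocks(B):
    """Single left-to-right pass over vector B, as in Algorithm 1."""
    R = []
    if not B:
        return R
    region = list(B[0])                     # steps 1-2: seed from Cs
    for Cp in B[1:]:                        # step 3: next block Cp
        if connected(region, Cp):           # step 4: merge and update
            region = [min(region[0], Cp[0]), max(region[1], Cp[1]),
                      min(region[2], Cp[2]), max(region[3], Cp[3])]
        else:                               # start a new region
            R.append(tuple(region))
            region = list(Cp)
    R.append(tuple(region))                 # flush the last region
    return R
```

For example, two horizontally adjacent 50x50 text blocks merge into one 50x100 region, while a block far from the current region starts a new region.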
Fig. 3(a) Input Image    Fig. 3(b) Extracted Text Region
Fig. 4(a) Input Image    Fig. 4(b) Extracted Text Region
Fig. 5(a) Input Image    Fig. 5(b) Extracted Text Region
Fig. 6(a) Input Image    Fig. 6(b) Extracted Text Region
Fig. 7(a) Input Image    Fig. 7(b) Extracted Text Region
Fig. 8(a) Input Image    Fig. 8(b) Extracted Text Region
Fig. 9(a) Input Image    Fig. 9(b) Extracted Text Region
Fig. 10(a) Input Image   Fig. 10(b) Extracted Text Region
Fig. 11(a) Input Image   Fig. 11(b) Extracted Text Region

The proposed methodology has produced good results for natural scene images containing text of different sizes, fonts, and alignments against varying backgrounds, and it also detects nonlinear text regions. The proposed method is robust, achieving an overall detection rate of 97% and a false reject rate of 3% on 100 low resolution display board natural scene images. The method is advantageous in that it uses only wavelet texture features for text extraction. The false rejects are caused by the low contrast energy of blocks containing only a minute part of the text, which is too weak for those blocks to be classified as text blocks. However, the method detects a region larger than the actual text region when display board images with more complex backgrounds containing trees, buildings and vehicles are tested.
As the texture features used in the method do not capture language dependent information, the method can be extended to text localization in images of other languages with little modification. To explore such possibilities, the performance of the method was tested for localizing English text without any modification; however, thorough experimentation has not been carried out on a wide variety of images containing English and other language text. Different block sizes can also be experimented with to improve the detection accuracy and reduce the false acceptance rate. The overall performance of the system on 100 low resolution natural scene display board images dealing with various issues is given in Table 1.

TABLE 1: Overall System Performance

Total no. of blocks tested (# of text blocks) | Correctly detected text blocks | Falsely detected text blocks (FAR) | Missed text blocks (FRR) | Detection rate
3600 (1211)                                   | 1175                           | 19 (1.5%)                          | 36 (3%)                  | 97%

5. CONCLUSION

This paper has presented the effectiveness of a method that performs texture analysis for text localization in low resolution natural scene images. The wavelet texture features have performed well in the detection and segmentation of text regions and are an ideal choice for degraded, noisy natural scene images, where connected component analysis techniques are found to be inefficient. The proposed method is robust and has achieved a detection rate of 97% on a variety of 100 low resolution natural scene images, each of size 240x320. The methodology has produced good results for natural scene images containing text of different sizes, fonts, and alignments against varying backgrounds, and it also detects nonlinear text regions. However, for some images it detects text regions larger than the actual size when the background is more complex, containing trees, vehicles, and other details of outdoor scenes. As the texture features do not capture language dependent information, the method can be extended to text localization in images of other languages with little modification. The performance of the method has been tested for localizing English text, but needs further exploration.

REFERENCES

[1] K. Tollmar, T. Yeh and T. Darrell, "IDeixis - Image-Based Deixis for Finding Location-Based Information", In Proceedings of the Conference on Human Factors in Computing Systems (CHI'04), pp. 781-782, 2004.
[2] Natalia Marmasse and Chris Schmandt, "Location aware information delivery with comMotion", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 157-171, 2000.
[3] Eve Bertucci, Maurizio Pilu and Majid Mirmehdi, "Text Selection by Structured Light Marking for Hand-held Cameras", Seventh International Conference on Document Analysis and Recognition (ICDAR'03), pp. 555-559, August 2003.
  • 11. International Journal of Information Technology & Management Information System (IJITMIS), ISSN 0976 – 6405(Print), ISSN 0976 – 6413(Online) Volume 4, Issue 1, January – April (2013), © IAEME 48 [4] Tom yeh, Kristen Grauman, and K. Tollmar. “A picture is worth a thousand keywords: image-based object search on a mobile platform”, In Proceedings of Conference on Human Factors in Computing Systems, pp.2025-2028, 2005. [5] Abowd Gregory D. Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, “CyberGuide: Amobile context-aware tour guide”, Wireless Networks, 3(5):421-433, 1997. [6] Fan X. Xie X. Li Z. Li M. and Ma. “Photo-to-search: using multimodal queries to search web from mobile phones”, In proceedings of 7th ACM SIGMM international workshop on multimedia information retrieval, 2005. [7] Lim Joo Hwee, Jean Pierre Chevallet and Sihem Nouarah Merah, “SnapToTell: Ubiquitous information access from camera”, Mobile human computer interaction with mobile devices and services, Glasgow, Scotland, 2005. [8] Yu Zhong, Hongjiang Zhang, and Anil. K. Jain. “Automatic caption localization in compressed video”, IEEE transactions on Pattern Analysis and Machine Intelligence, 22(4):385-389, April 2000. [9] Yu Zhong, Kalle Karu, and Anil. K. Jain. “Locating Text in Complex Color Images”, Pattern Recognition, 28(10):1523-1535, 1995. [10] S. H. Park, K. I. Kim, K. Jung, and H. J. Kim. “Locating Car License Plates using Neural Networks”, IEE Electronics Letters, 35(17):1475-1477, 1999. [11] V. Wu, R. Manmatha, and E. M. Riseman. “TextFinder: An Automatic System to Detect and Recognize Text in Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(11):1224-1229, 1999. [12] V. Wu, R. Manmatha, and E. R. Riseman. “Finding Text in Images”, In Proceedings of ACM International Conference on Digital Libraries, pp. 1-10, 1997. [13] B. Sin, S. Kim, and B. Cho. 
“Locating Characters in Scene Images using Frequency Features”, In proceedings of International Conference on Pattern Recognition, Vol.3:489-492, 2002. [14] W. Mao, F. Chung, K. Lanm, and W. Siu. “Hybrid Chinese/English Text Detection in Images and Video Frames”, In Proceedings of International Conference on Pattern Recognition, Vol.3:1015-1018, 2002. [15] A. K. Jain, and K. Karu. “Learning Texture Discrimination Masks”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(2):195-205, 1996. [16] K. Jung. “Neural network-based Text Location in Color Images”, Pattern Recognition Letters, 22(14):1503-1515, December 2001. [17] K. Y. Jeong, K. Jung, E. Y. Kim and H. J. Kim. “Neural Network-based Text Location for News Video Indexing”, In Proceedings of IEEE International Conference on Image Processing, Vol.3:319-323, January 2000. [18] H. Li. D. Doerman and O. Kia. “Automatic Text Detection and Tracking in Digital Video”, IEEE Transactions on Image Processing, 9(1):147-156, January 2000. [19] H. Li and D. Doermann. “A Video Text Detection System based on Automated Training”, In Proceedings of IEEE International Conference on Pattern Recognition, pp.223-226, 2000. [20] S. A. Angadi, M. M. Kodabagi, 2009,”Texture based methodology for text region extraction from low resolution natural scene images”, International Journal of Image Processing (IJIP), Vol.3 No.5, pp.229-245, September-October 2009. [21] S. A. Angadi, M. M. Kodabagi, 2010,”Texture region extraction from low resolution natural scene images using texture features”, 2nd International Advanced Computing Conference, pp.121-128, 2010.
  • 12. International Journal of Information Technology & Management Information System (IJITMIS), ISSN 0976 – 6405(Print), ISSN 0976 – 6413(Online) Volume 4, Issue 1, January – April (2013), © IAEME 49 [22] W. Y. Liu, and D. Dori. “A Proposed Scheme for Performance Evaluation of Graphics/Text Separation Algorithm”, Graphics Recognition – Algorithms and Systems, K. Tombre and A. Chhabra (eds.), Lecture Notes in Computer Science, Vol.1389:359-371, 1998. [23] Y. Watanabe, Y. Okada, Y. B. Kim and T. Takeda. ”Translation Camera”, In Proceedings of International Conference on Pattern Recognition, Vol.1: 613-617, 1998. [24] I. Haritaoglu. “Scene Text Extraction and Translation for Handheld Devices”, In Proceedings IEEE Conference on Computer Vision and Pattern Recognition, Vol.2:408-413, 2001. [25] K. Jung, K. Kim, T. Kurata, M. Kourogi, and J. Han. “Text Scanner with Text Detection Technology on Image Sequence”, In Proceedings of International Conference on Pattern Recognition, Vol.3:473-476, 2002. [26] H. Hase, T. Shinokawa, M. Yoneda, and C. Y. Suen. “Character String Extraction from Color Documents”, Pattern Recognition, 34(7):1349-1365, 2001. [27] Ceiline Thillou, Silvio Frreira and Bernard Gosselin. “An embedded application for degraded text recognition”, EURASIP Journal on applied signal processing, 1(1):2127-2135, 2005. [28] Nobuo Ezaki, Marius Bulacu and Lambert Schomaker. “Text Detection from Natural Scene Images: Towards a System for Visually Impaired Persons”, In Proceedings of 17th International Conference on Pattern Recognition (ICPR’04), IEEE Computer Society vol.2:683-686, 2004. [29] Chitrakala Gopalan and Manjula. “Text Region Segmentation from Heterogeneous Images”, International Journal of Computer Science and Network Security, 8(10):108-113, October 2008. [30] Rui Wu, Jianhua Huang, Xianglong Tang and Jiafeng Liu. 
” A Text Image Segmentation Method Based on Spectral Clustering”, IEEE Sixth Indian Conference on Computer Vision, Graphics & Image Processing, 1(4): 9-15, 2008. [31] Mohammad Osiur Rahman, Fouzia Asharf Mousumi, Edgar Scavino, Aini Hussain, and Hassan Basri. ”Real Time Road Sign Recognition System Using Artificial Neural Networks for Bengali Textual Information Box”, European Journal of Scientific Research, 25(3):478-487, 2009. [32] Rajiv K. Sharma and Amardeep Singh. “Segmentation of Handwritten Text in Gurmukhi Script”, International Journal of Image Processing, 2(3):13-17, May/June 2008. [33] Amjad Rehman Khan and Zulkifli Mohammad. ”A Simple Segmentation Approach for Unconstrained Cursive Handwritten Words in Conjunction with the Neural Network”, International Journal of Image Processing, 2(3):29-35, May/June 2008. [34] Amjad Rehman Khan and Zulkifli Mohammad. ”Text Localization in Low Resolution Camera”, International Journal of Image Processing, 1(2):78-86, 2007. [35] R.M. Haralick, K. Shanmugam and I. Dinstein. “Textural Features for Image Classification”, IEEE Transactions Systems, Man, and Cyber-netics, 3(6):610-621, 1973.
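The per-block wavelet energy features underlying the method can be sketched as follows. This is a minimal illustration, not the paper's implementation: the PyWavelets library, the 'haar' basis, and the energy definition (mean squared detail coefficient per sub-band) are all assumptions, since this section does not specify them; only the 50x50 block size, the two decomposition levels, and the 240x320 image size come from the paper.

```python
import numpy as np
import pywt  # PyWavelets


def wavelet_energy_features(block, levels=2, wavelet="haar"):
    """Return one energy value per (level, orientation) detail sub-band.

    For levels=2 this yields 6 features: horizontal (cH), vertical (cV)
    and diagonal (cD) detail energies at each of the two levels.
    """
    coeffs = pywt.wavedec2(block.astype(float), wavelet, level=levels)
    features = []
    for cH, cV, cD in coeffs[1:]:  # skip the final approximation band
        features.extend(np.sum(c ** 2) / c.size for c in (cH, cV, cD))
    return np.array(features)


# Slide a non-overlapping 50x50 grid over a 240x320 grayscale image,
# the block and image sizes used in the paper.
img = np.random.rand(240, 320)
for r in range(0, 240 - 49, 50):
    for c in range(0, 320 - 49, 50):
        feats = wavelet_energy_features(img[r:r + 50, c:c + 50])
```

In a full pipeline, each feature vector would then be passed to the discriminant functions that classify the block as text or non-text before neighbouring text blocks are merged into regions.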