International Journal of Research in Advent Technology, Vol.2, No.4, April 2014 
E-ISSN: 2321-9637 
Shrug and Voice Recognition System for the Dumb and Hearing Impaired
Ms. M. Yuvarani1, Mr. S. Vinothkumar2, Mr. R. Prabu3, Mr. T. Rajkumar4
1P.G. Scholar, 2Project Coordinator, 3Guide, 4Asst. Professor, Department of ECE
1,2,3Aksheyaa College of Engineering, 4Vidyaa Vikas College of Engineering and Technology
Chennai, India
mapynfam@gmail.com1, aceece12@gmail.com2, prabu6037@gmail.com3, tsrajkumar87@gmail.com4
Abstract: Nowadays, physical challenges impose limitations on basic learning processes such as reading, listening, and writing, and on access to resources such as websites. These limitations make it difficult to communicate with society. This project helps dumb and hearing-impaired people who cannot communicate easily with others. To provide an interface between impaired people and normal people, this project proposes a medium that translates sign language and voice into text. Gestures from the mute community are converted into text. Similarly, voice information from normal people is converted into signs and text as a visual representation for the deaf community.
Index Terms: pattern matching, gesture recognition, SVM algorithm, framing, HMM model, preprocessing.
1. INTRODUCTION 
Different types of disabilities require some support, and multiple disorders make the problem more complicated; therefore, additional effort is needed to overcome these problems. Sign language is a communication skill that uses gestures instead of sounds to communicate with others. This reduces the barrier faced by deaf and mute people. Signs are used to communicate words and sentences. Some of the disability factors are given below:
· Visual Impairment 
· Mobility Impairment 
· Hearing Impairment 
· Vocal Impairment 
2. IMPLEMENTATION 
Shrug Recognition 
The implementation process consists of the following steps:
Fig 2: Schematic View of Hand Gesture 
Recognition System. 
Skin Color Training
Skin color training is the process of finding skin-colored pixels and regions in an image. It is the preprocessing step of the implementation. Here, the skin detector transforms the selected pixels into an appropriate color space. Then a skin classifier labels each pixel as either skin or non-skin.
Fig 1: Color Space Model 
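As a concrete illustration, skin classification can be sketched with a simple rule-based classifier operating directly on RGB values. The specific thresholds below are one common heuristic, not the paper's actual classifier:

```python
import numpy as np

def skin_mask(rgb):
    """Classify each pixel of an H x W x 3 uint8 RGB image as skin/non-skin.

    Rule-based thresholds (a common heuristic; the paper's actual color-space
    transform and thresholds are not specified).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r - np.minimum(g, b) > 15) &     # enough spread between channels
            (np.abs(r - g) > 15) & (r > g) & (r > b))
```

The resulting boolean mask feeds the later calibration and bounding-box steps.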
Calibration 
In some cases, a problem arises when skin-colored objects appear in the background, as shown below:
Fig 3: Problem with skin-colored objects and background
To overcome this problem, a calibration process is used. Calibration is the process of extracting the symbol from the background.
Fig 4: Extracting the symbol from the background 
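A minimal calibration sketch, assuming a stored background frame and a hypothetical difference threshold:

```python
import numpy as np

def calibrate(frame, background, thresh=30):
    """Extract the symbol (hand) from the background.

    A stored background frame is subtracted from the current frame; pixels
    differing by more than `thresh` (a hypothetical value) are kept as
    foreground. Inputs are H x W grayscale arrays.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh
```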
Hand Location Detection
After extracting the symbol from the background, we must find the location of the symbol in the image. A bounding box is then created that contains the gesture, whatever its colors. The bounding box with the symbol is treated as the sample pattern.
Fig 5: Finding location of hand and creating 
bounding box 
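Finding the bounding box can be done directly from the binary mask produced by the skin-detection and calibration steps; this is a sketch, not the paper's exact procedure:

```python
import numpy as np

def bounding_box(mask):
    """Return (row0, row1, col0, col1), the tightest box around True pixels."""
    rows = np.any(mask, axis=1)          # rows containing any hand pixel
    cols = np.any(mask, axis=0)          # columns containing any hand pixel
    r0, r1 = np.where(rows)[0][[0, -1]]  # first and last such row
    c0, c1 = np.where(cols)[0][[0, -1]]  # first and last such column
    return r0, r1, c0, c1
```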
Finger Region Detection
By examining the bounding box, we find the number of finger regions. The gesture cannot be identified from the border of the image alone; therefore, the number of finger regions found is compared with the reference pattern stored in the database. For simplification, all small connected regions are removed, as shown in the following figure:
Fig 6: Connected finger regions. 
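Removing the small connected regions can be sketched with a flood-fill labeling pass; the 4-connectivity and the minimum-size threshold here are assumptions:

```python
import numpy as np
from collections import deque

def remove_small_regions(mask, min_size=50):
    """Keep only 4-connected regions of at least `min_size` pixels
    (a hypothetical threshold), dropping small noise regions."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros((h, w), dtype=bool)
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                # Breadth-first flood fill to collect one connected component.
                q = deque([(sr, sc)])
                seen[sr, sc] = True
                comp = []
                while q:
                    r, c = q.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            q.append((nr, nc))
                if len(comp) >= min_size:   # keep only large regions
                    for r, c in comp:
                        out[r, c] = True
    return out
```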
Pattern Matching
Shrug recognition consists of data acquisition, space transformation, recognition, and matching. Before the sample is compared with the pattern stored in the database, preprocessing steps such as line detection, circle detection, and parameterization are applied. In general, this approach is not practical for a large number of gestures or for complicated ones.
Gesture Determination
To recognize the gesture, the sample pattern is compared with the reference pattern. The number of finger regions and the number of large connected regions of the fingers and hand are determined. Then the ratio of the finger width to the space between the fingers is calculated, and this ratio is compared with that of the reference pattern stored in the database.
Fig 7: Gesture Determined 
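The comparison described above can be sketched as a lookup against reference patterns; the field names and tolerance are hypothetical, since the paper does not specify its exact matching rule:

```python
def match_gesture(sample, references, tol=0.15):
    """Match a sample against reference patterns in the database.

    `sample` and each reference are dicts holding the finger-region count,
    the large-connected-region count, and the finger-to-gap width ratio
    (field names here are hypothetical). Returns the name of the first
    reference whose counts match exactly and whose ratio agrees within `tol`,
    or None if no reference matches.
    """
    for name, ref in references.items():
        if (sample["fingers"] == ref["fingers"]
                and sample["regions"] == ref["regions"]
                and abs(sample["ratio"] - ref["ratio"]) <= tol):
            return name
    return None
```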
Voice Recognition
Shrug recognition serves only dumb people. To overcome the barrier with deaf people, a voice conversion system is used. There are two modes of voice conversion: voice information is converted into the appropriate signs as a visual representation for deaf people who know sign language; but because people who do not know sign language cannot understand the gestures, the recognized cues are also converted into text for them.
Fig 8: Speech to Cues Conversion
Fig 9: Speech to Text Conversion 
· The speech signal is given to the recognition unit, where an HMM is used to recognize the sequence of words. The Hidden Markov Model (HMM) is a statistical model in which the signal is characterized as a parametric random process, and the parameters of this process can be determined precisely.
· In the preprocessing unit, the speech signal is divided into a number of frames, which are then examined to extract the useful information in the speech.
· The speech samples are then compared with the patterns already stored in the database. The recognized cues are displayed for people who know sign language, and the cues are also converted into text as a visual representation for deaf people who do not know sign language.
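The HMM recognition step can be illustrated with the Viterbi algorithm, which finds the most likely hidden state sequence for a sequence of observations; the model sizes and probabilities here are generic, not the paper's trained speech model:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for observation indices `obs` under an HMM
    with initial probs pi (N,), transitions A (N,N), emissions B (N,M)."""
    N = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob per state, t=0
    back = np.zeros((T, N), dtype=int)         # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[i, j]: reach j via i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]                # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With two "sticky" states that each prefer one symbol, the decoded path follows the observation changes.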
3. ALGORITHM USED
Support Vector Machine: SVM is a trained algorithm based on the pattern-matching concept. The word 'trained' indicates that the accuracy of the result increases with the amount of training performed. SVM produces hyperplanes that separate the different classes, as shown below:
Fig 10: Hyperplane Separation
H1 does not separate the classes; H2 does, but only with a small margin; H3 separates them with the maximum margin.
Σi K(xi, y) ≥ ρ
Here,
ρ → selected threshold parameter
K(xi, y) → kernel function
Each term in the sum measures the degree of closeness of the test point y to the corresponding database point xi.
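A kernel decision of this kind, accepting a test point when its summed kernel similarity to the stored database points exceeds a threshold ρ, can be sketched as follows; the Gaussian RBF kernel and parameter values are assumptions:

```python
import numpy as np

def kernel_score(y, X, gamma=1.0):
    """Sum of RBF kernel values K(xi, y) = exp(-gamma * ||xi - y||^2)
    between test point y and each stored database point xi (rows of X)."""
    d2 = ((X - y) ** 2).sum(axis=1)    # squared distance to each xi
    return np.exp(-gamma * d2).sum()

def accept(y, X, rho, gamma=1.0):
    """Accept y when its summed similarity to the database exceeds rho."""
    return kernel_score(y, X, gamma) >= rho
```

Points near the stored patterns score close to the number of neighbors; distant points score near zero.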
RESULT
Sign language is a useful tool to ease communication between the deaf and mute community and normal people. The simulated output is shown in the following figure:
(a) 
(b) 
Fig 11: (a, b) simulated output 
4. CONCLUSION 
In the future, many companies will produce a variety of applications that handle speech synthesis and sign-language conversion separately. This project reduces the communication barrier between normal people and the deaf and mute communities.
REFERENCES 
[1] Campbell, F. (2004). The Case of Clint Hallam's Wayward Hand: Print Media Disabled Patient. Journal of Media & Cultural Studies, Vol. 18, No. 3, pp. 443-458.
[2] F. Jelinek, "1997 Large Vocabulary Continuous Speech Recognition Summer Research Workshop Technical Reports," Center for Language and Speech Processing, Johns Hopkins University, Research Notes No. 30, January 1998.
[3] Karabetsos, S., Tsiakoulis, P., Chalamandaris, A., Raptis, S., "Embedded Unit Selection Text-to-Speech Synthesis for Mobile Devices," IEEE Transactions on Consumer Electronics, Vol. 55, Issue 2, pp. 613-621, May 2009.
[4] Sign Languages, Wikipedia, http://en.wikipedia.org/wiki/Sign_Language (accessed 2012).
[5] The Microsoft Projects on 'Speech Processing', retrieved 18 February 2012 from http://microsoft.com/enus/groups/srg
[6] Venkatraman, S. and T.V. Padmavathi, "Speech for the Disabled," Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009.
