International Journal of Electronic Engineering Research
ISSN 0975 - 6450 Volume 2 Number 3 (2010) pp. 377–381
© Research India Publications
http://www.ripublication.com/ijeer.htm



              Independent Speaker Recognition for
                     Native English Vowels

         G.N. Kodandaramaiah¹, M.N. Giriprasad² and M. Mukunda Rao³
    ¹HOD, Department of Electronics and Communications Engineering,
         Madanapalli Institute of Technology, Madanapalli, India
    ²Principal, Jawaharlal Nehru Technological University, Pulivendula, India
    ³Honorary Research Professor, Biomedical Sciences,
       Sri Ramachandra Medical College & Research Institute, Chennai, India
                      E-mail: kodandramaiah@yahoo.com


                                       Abstract

   This paper presents the standard method of vocal tract shape estimation that
   has been the basis of many successful automatic speech recognition (ASR)
   systems. The analytic results presented demonstrate that estimating vocal
   tract shape from the reflection coefficients obtained by LPC analysis of
   speech is satisfactory and relates to the place of articulation of the
   vowels. We describe a "standard" approach to classifying vowels based on
   formants, which are meaningfully distinguishable frequency components of
   human speech. Formant frequencies depend on the shape and dimensions of the
   vocal tract: the vocal tract shape is characterized by a set of formant
   frequencies, and different sounds are produced by varying that shape. This
   property has been exploited in many speech-related applications, such as
   speech and speaker recognition. This work applies a Euclidean distance
   measure to quantify the similarity or dissimilarity between two spoken
   words after each spoken word has been quantized against its codebook.

   Keywords: Speech, Vocal tract, Formants, Euclidean distance.


Introduction
Fig. 1.1 shows the block diagram of independent speaker recognition for vowels.
Let S(n) be the test sample of a vowel; its formant parameters F1 and F2 are
then extracted.

    The extracted formants are compared against thresholds derived from the
reference formants. A Euclidean distance measure quantifies the similarity or
dissimilarity between two spoken words after each word has been quantized
against its codebook. An unknown vowel is matched by measuring the Euclidean
distance between its feature vector (formants F1, F2) and the reference model
(codebook) of each known vowel's formants F1, F2 in the database. The goal is
to find the codebook with the minimum distance, thereby identifying the
unknown vowel (Franti et al., 1997). In the testing (identification) session,
the Euclidean distance between the feature vector (F1, F2) and the codebook of
each spoken vowel is calculated, and the vowel with the smallest average
minimum distance is selected, as shown in Eq. (1.1).

               d(x, y) = √[ Σᵢ₌₁ᴰ wᵢ (xᵢ − yᵢ)² ]                        (1.1)

where D = 2 is the number of features, xᵢ is the i-th input feature (formants
F1, F2), yᵢ is the i-th feature vector in the codebook (reference model),
wᵢ is the weight associated with the i-th feature, and d(x, y) is the
resulting distance (recognition score).
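The matching step described above can be sketched in a few lines of Python. The (F1, F2) reference values below are hypothetical illustrations, not the paper's codebook, and the weight ratio follows the 2:1 normalization given in the Decision Rule section.

```python
import math

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance d(x, y) over paired feature vectors."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

def classify_vowel(test_formants, codebook, weights):
    """Return the codebook label whose entry is nearest to the test vector."""
    return min(codebook,
               key=lambda v: weighted_euclidean(test_formants, codebook[v], weights))

# Illustrative (F1, F2) reference formants in Hz -- placeholder values only.
codebook = {"/a/": (730, 1090), "/i/": (270, 2290), "/u/": (300, 870)}
weights = (2/3, 1/3)  # F1:F2 weighted 2:1, normalized to sum to 1.0
print(classify_vowel((700, 1100), codebook, weights))  # nearest is /a/
```

A real system would populate the codebook from the training database of known-vowel formants rather than from fixed constants.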


Decision Rule
The weights w are important when the information contained in the underlying
features is not proportional to the feature variances. In vowel recognition
based on formants F1 and F2, the two formants do not contribute uniformly.
Based on this study, relative weights of F1 = 2 and F2 = 1 are assigned,
normalized so that the weights sum to 1.0.
     We refer to classification based on this distance as Maximum Likelihood
Regression (MLR), since it rests on the Gaussian assumptions used to obtain
the classifier parameters. To verify that the displayed vowels produce
accurate results, the MLR computes the distance of the average features for
the given vowels. If the feature distance is within the threshold criterion
Di(F1, F2), then Eq. (1.1) reduces to
                Di(f) < α√m                                              (1.2)

where m is the number of features (here F1 and F2) and α is an arbitrary
scale factor used for performance tuning. The vector xi is then identified as
the vector yi; otherwise it is not. If α is too small, the MLR rejects many
correct vowel samples; if it is too large, out-of-category vowels will not be
rejected. In our work the threshold α = x gave optimum results.
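The accept/reject rule of Eq. (1.2) amounts to a single comparison. The sketch below assumes a hypothetical α value for illustration, since the paper leaves its optimum unspecified.

```python
import math

def accept(distance, alpha, m=2):
    """Eq. (1.2): accept the match only if distance < alpha * sqrt(m),
    where m is the number of features (F1 and F2, so m = 2)."""
    return distance < alpha * math.sqrt(m)

alpha = 100.0  # hypothetical tuning value; threshold = 100 * sqrt(2) ~ 141.4
print(accept(25.0, alpha))   # small distance: vowel accepted
print(accept(500.0, alpha))  # large distance: rejected as out-of-category
```

Too small an α rejects correct samples; too large an α accepts everything, which is exactly the trade-off described above.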




                      Figure 1.1: Block diagram of vowel recognition.


Results of Vowel Recognition for Male and Female Speakers
Male Speakers
Table 3.1 gives the results of male vowel recognition using the MLR method.
Vowel /a/ achieved the best classification of all the vowels. The detection
rates for vowels /u/ and /e/ are better than those for /o/ and /i/ across all
tested samples. Vowels /e/ and /i/ tend to be misclassified as each other
because of utterance variations across speakers. Fig. 3.1 plots each actual
vowel against its percentage recognition for 50 male samples.
    For vowel /a/: /a/ recognized as /a/ is 46; as /e/, 0; as /i/, 4; as /o/,
0; as /u/, 0. Hence the percentage of correct recognition of vowel /a/ =
(/a/ as /a/) × 100 ÷ (/a/ as any vowel) = 46 × 100 ÷ (46+0+4+0+0)
= 46 × 100/50 = 92%.
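The per-vowel percentage calculation above generalizes to the whole confusion matrix of Table 3.1 (rows are the actual vowel, columns the predicted vowel):

```python
# Confusion counts for 50 male samples per vowel, from Table 3.1.
confusion = {
    "/a/": [46, 0, 4, 0, 0],
    "/e/": [2, 44, 0, 4, 0],
    "/i/": [6, 0, 40, 0, 4],
    "/o/": [0, 3, 0, 44, 3],
    "/u/": [3, 1, 1, 0, 45],
}
order = ["/a/", "/e/", "/i/", "/o/", "/u/"]

for vowel, row in confusion.items():
    correct = row[order.index(vowel)]   # diagonal entry: predicted == actual
    pct = 100 * correct / sum(row)      # e.g. /a/: 46 * 100 / 50 = 92%
    print(f"{vowel}: {pct:.1f}%")
```

The same loop applied to Table 3.2's counts reproduces the female-speaker percentages.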




             Figure 3.1: Vowel vs. % vowel recognition for male speakers.

   Table 3.1: Percentage recognition of vowels for male speakers.

                      Actual \ Predicted  /a/  /e/  /i/  /o/  /u/  % correct
                        /a/                46    0    4    0    0     92%
                        /e/                 2   44    0    4    0     89%
                        /i/                 6    0   40    0    4     80%
                        /o/                 0    3    0   44    3     88%
                        /u/                 3    1    1    0   45     90%


Female Speakers
Table 3.2 gives the results of female vowel recognition using the MLR method.
Vowel /o/ achieved the best classification of all the vowels. The detection
rates for vowels /u/ and /e/ are better than those for /a/ and /i/ across all
tested samples. Vowels /a/ and /i/ tend to be misclassified as each other
because of utterance variations across speakers. Fig. 3.2 shows the
percentage recognition of each vowel for 40 female samples. For vowel /o/:
/o/ recognized as /a/ is 0; as /e/, 0; as /i/, 0; as /o/, 39; as /u/, 1.
Hence the percentage of correct recognition of vowel /o/ = (/o/ as /o/) × 100
÷ (/o/ as any vowel) = 39 × 100 ÷ (0+0+0+39+1) = 39 × 100/40 = 98%.




           Figure 3.2: Vowel vs. % vowel recognition for female speakers.

   Table 3.2: Percentage recognition of vowels for female speakers.

                     Actual \ Predicted  /a/  /e/  /i/  /o/  /u/  % correct
                       /a/                34    4    0    0    2     85%
                       /e/                 0   37    0    3    0     92%
                       /i/                 0    4   34    0    2     86%
                       /o/                 0    0    0   39    1     98%
                       /u/                 3    0    0    0   37     94%


Conclusion
This work applied the standard method of vocal tract shape estimation that
has been the basis of many successful automatic speech recognition (ASR)
systems, and described a "standard" approach to vowel classification based on
formants. We achieved 80 to 95 percent speaker recognition using the
Euclidean distance measure.


Acknowledgements
We would like to thank the Management and the Principal of Madanapalli
Institute of Technology and Science, Madanapalli, A.P., for their cooperation
and encouragement.


References
[1]    L.R. Rabiner and R.W. Schafer, Digital Processing of Speech Signals,
       Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education
       in South Asia, 1978, pp. 54-101, 412-460.
[2]    T.F. Quatieri, Discrete-Time Speech Signal Processing: Principles and
       Practice, 2002, pp. 56-59.
[3]    P. Ladefoged, R. Harshman, L. Goldstein, and L. Rice, "Generating
       vocal tract shapes from formant frequencies," J. Acoust. Soc. Am.,
       vol. 64, no. 4, 1978, pp. 1027-1035.
[4]    M. Bhaowal and K. Chawla, "Isolated word recognition for English
       language using LPC, VQ and HMM," pp. 2-4.
[5]    G.E. Peterson and H.L. Barney, "Control methods used in a study of
       the vowels," J. Acoust. Soc. Am., vol. 24, pp. 175-184.
[6]    P. Rose, "Long- and short-term within-speaker differences in the
       formants of Australian hello," J. Int. Phonetic Assoc., 29(1), 1999,
       pp. 1-31.
[7]    A. Ali, S. Bhatti, and M. Saleem Mian, "Formants based analysis for
       speech recognition," IEEE, 2006.
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 

Ijeer journal

  • 1. International Journal of Electronic Engineering Research
ISSN 0975-6450, Volume 2, Number 3 (2010), pp. 377–381
© Research India Publications
http://www.ripublication.com/ijeer.htm

Independent Speaker Recognition for Native English Vowels

1G.N. Kodandaramaiah, 2M.N. Giriprasad and 3M. Mukunda Rao
1HOD, Department of Electronics and Communications Engineering, Madanapalli Institute of Technology, Madanapalli, India
2Principal, Jawaharlal Nehru Technological University, Pulivendula, India
3Honorary Research Professor, Biomedical Sciences, Sri Ramachandra Medical College & Research Institute, Chennai, India
E-mail: kodandramaiah@yahoo.com

Abstract

This paper presents the standard method for vocal tract shape estimation, which has been the basis for many successful automatic speech recognition (ASR) systems. The analytic results presented demonstrate that estimation of vocal tract shape from reflection coefficients obtained by LPC analysis of speech is satisfactory and is related to the place of articulation of the vowels. We describe a "standard" approach to vowel classification based on formants, the meaningfully distinguishable frequency components of human speech. These formant frequencies depend on the shape and dimensions of the vocal tract: the vocal tract shape is characterized by a set of formant frequencies, and different sounds are produced by varying that shape, which gives spoken speech its distinctive properties. Formant analysis has been applied in many speech-related applications such as speech and speaker recognition. This work applies a Euclidean distance measure to quantify the similarity or dissimilarity between two spoken words, computed after each spoken word has been quantized against its codebook.

Keywords: Speech, vocal tract, formants, Euclidean distance.

Introduction

Fig. 1.1 shows the block diagram of independent speaker recognition for vowels. Let S(n) be the test sample of a vowel; from it the parameters, i.e. the formants F1 and F2, are extracted.
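The paper does not spell out the formant-extraction stage itself. A minimal pure-Python sketch of one common approach — Levinson-Durbin LPC analysis followed by peak-picking on the LPC spectral envelope — is shown below; the function names, the low model order, and the grid-search peak picker are illustrative assumptions, not the authors' implementation:

```python
import cmath
import math

def autocorrelation(frame, lag):
    # Unnormalized autocorrelation estimate at the given lag.
    return sum(frame[n] * frame[n - lag] for n in range(lag, len(frame)))

def lpc_coefficients(frame, order):
    """Levinson-Durbin recursion: returns [1, a1, ..., a_order] of A(z)."""
    r = [autocorrelation(frame, k) for k in range(order + 1)]
    a = [1.0]
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err            # reflection coefficient for this order
        a = a + [0.0]
        a = [a[j] + k * a[i - j] for j in range(i + 1)]
        err *= 1.0 - k * k
    return a

def formants(frame, fs, order=4, grid=1024):
    """Estimate formants as peaks of the LPC envelope 1/|A(e^{jw})|."""
    a = lpc_coefficients(frame, order)
    mags = []
    for n in range(grid):
        w = math.pi * n / grid    # angular frequencies covering 0 .. fs/2
        A = sum(a[j] * cmath.exp(-1j * w * j) for j in range(len(a)))
        mags.append(abs(A))
    # Peaks of the envelope are local minima of |A|.
    peaks = [n for n in range(1, grid - 1)
             if mags[n] < mags[n - 1] and mags[n] < mags[n + 1]]
    return sorted(n * fs / (2 * grid) for n in peaks)
```

In practice a pre-emphasized, windowed 20–30 ms voiced frame and a higher LPC order (e.g. 8–12 at 8 kHz sampling) would be used, and root-finding on A(z) is a common alternative to the grid search.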
  • 2. 378 G.N. Kodandaramaiah et al

The extracted formants are compared against the thresholds of the reference formants. A Euclidean distance measure is applied to quantify the similarity or dissimilarity between two spoken words, computed after each spoken word has been quantized against its codebook. Matching of an unknown vowel is performed by measuring the Euclidean distance between the feature vector (formants) of the unknown vowel and the reference model (codebook) of known vowel formants F1, F2 in the database. The goal is to find the codebook entry with the minimum distance, thereby identifying the unknown vowel (Franti et al., 1997). In the testing (identification) session, the Euclidean distance between the feature vector (formants F1, F2) and the codebook entry for each spoken vowel is calculated, and the vowel with the smallest average minimum distance is picked, as shown in Eq. (1.1):

d(x, y) = √( Σᵢ₌₁..D wᵢ (xᵢ − yᵢ)² )   (1.1)

where D = 2, xᵢ is the i-th input feature (formants F1, F2), yᵢ is the i-th feature in the codebook (reference model), wᵢ is the weight associated with the i-th feature, and d is the distance (recognition score) between x and y.

Decision Rule
The weights w matter when the information contained in the underlying features is not proportional to the feature variances. In vowel recognition based on formants F1 and F2, the two formants do not contribute uniformly. Based on our study, the relative weights F1 = 2 and F2 = 1 are used, normalized so that the weights sum to 1.0. We refer to classification based on this distance as Maximum Likelihood Regression (MLR), since it rests on the Gaussian assumptions used to obtain the classifier parameters.
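Eq. (1.1) and the codebook search can be sketched as follows. The (F1, F2) centroids below are illustrative round numbers in the spirit of Peterson and Barney's male vowel averages, not the paper's actual codebook:

```python
import math

# Hypothetical codebook of reference (F1, F2) centroids in Hz; the paper's
# actual database values are not reproduced here.
CODEBOOK = {
    "/a/": (730.0, 1090.0),
    "/e/": (530.0, 1840.0),
    "/i/": (270.0, 2290.0),
    "/o/": (570.0, 840.0),
    "/u/": (300.0, 870.0),
}

# Relative weights F1 = 2, F2 = 1, normalized to sum to 1.0 (as in the text).
WEIGHTS = (2.0 / 3.0, 1.0 / 3.0)

def weighted_distance(x, y, w=WEIGHTS):
    """Eq. (1.1): d(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2)."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y)))

def classify(features, codebook=CODEBOOK):
    """Pick the codebook vowel with the minimum weighted distance."""
    return min(codebook, key=lambda v: weighted_distance(features, codebook[v]))
```

With these centroids, `classify((700.0, 1100.0))` lands nearest the /a/ entry.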
To verify that the displayed vowels produce accurate results, the MLR computes the distance of the average features for the given vowels. The vector xᵢ is identified as the vector yᵢ only if the feature distance Dᵢ(F1, F2) satisfies the threshold criterion

Dᵢ(f) < α √m   (1.2)

where m is the number of features (here m = 2, for F1 and F2) and α is an arbitrary scale factor used for performance tuning; otherwise the match is rejected. If α is too small, the MLR rejects many correct vowel samples; if it is too large, out-of-category vowels are not rejected. In our work the threshold α = x has given optimum results.
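The decision rule of Eq. (1.2) amounts to a rejection threshold on the best match. Since the tuned value of α is left unstated above, the α in this sketch is a placeholder, not the paper's value:

```python
import math

def within_threshold(distance, m=2, alpha=50.0):
    """Eq. (1.2): a candidate match is accepted only if d < alpha * sqrt(m).
    alpha here is an arbitrary placeholder for illustration."""
    return distance < alpha * math.sqrt(m)

def classify_with_rejection(distance_by_vowel, m=2, alpha=50.0):
    """Return the minimum-distance vowel, or None when even the best
    distance fails the threshold of Eq. (1.2)."""
    vowel = min(distance_by_vowel, key=distance_by_vowel.get)
    if within_threshold(distance_by_vowel[vowel], m, alpha):
        return vowel
    return None
```

With m = 2 and alpha = 50, the acceptance bound is 50·√2 ≈ 70.7, so a best distance of 90 would be rejected.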
  • 3. Independent Speaker Recognition for Native English Vowels 379

Figure 1.1: Block diagram of vowel recognition.

Results of Vowel Recognition for Male and Female Speakers

Male Speakers
Table 3.1 gives the results for male vowel recognition based on the MLR method. Vowel /a/ achieved the best classification of all the vowels. The detection rate for vowels /u/ and /e/ is better than for /o/ and /i/ across all tested samples. Vowels /e/ and /i/ tend to be misclassified as each other because of inter-speaker variation in utterances. Fig. 3.1 plots each vowel against its percentage recognition for 50 male samples. For vowel /a/: /a/ classified as /a/ is 46; as /e/, 0; as /i/, 4; as /o/, 0; as /u/, 0. Hence the percentage of correct recognition of vowel /a/ is (/a/ in /a/) × 100 ÷ (/a/ in all vowels) = 46 × 100 ÷ (46 + 0 + 4 + 0 + 0) = 4600 / 50 = 92%.

Figure 3.1: Vowel vs. % vowel recognition for male speakers.
  • 4. 380 G.N. Kodandaramaiah et al

Table 3.1: Percentage recognition of vowels for male speakers.

Actual \ Predicted    /a/   /e/   /i/   /o/   /u/   % correct
/a/                    46     0     4     0     0     92%
/e/                     2    44     0     4     0     89%
/i/                     6     0    40     0     4     80%
/o/                     0     3     0    44     3     88%
/u/                     3     1     1     0    45     90%

Female Speakers
Table 3.2 gives the results for female vowel recognition based on the MLR method. Vowel /o/ achieved the best classification of all the vowels. The detection rate for vowels /u/ and /e/ is better than for /a/ and /i/ across all tested samples. Vowels /a/ and /i/ tend to be misclassified as each other because of inter-speaker variation in utterances. Fig. 3.2 shows the percentage recognition of each vowel for 40 female samples. For vowel /o/: /o/ classified as /a/ is 0; as /e/, 0; as /i/, 0; as /o/, 39; as /u/, 1. Hence the percentage of correct recognition of vowel /o/ is (/o/ in /o/) × 100 ÷ (/o/ in all vowels) = 39 × 100 ÷ (0 + 0 + 0 + 39 + 1) = 3900 / 40 ≈ 98%.

Figure 3.2: Vowel vs. % vowel recognition for female speakers.
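The percentage column of Table 3.1 follows from the confusion counts (50 samples per vowel) by the formula given earlier, diagonal count × 100 ÷ row total. A quick check of that arithmetic (note that by this formula /e/ comes out at 88%, marginally below the printed 89%):

```python
# Confusion counts transcribed from Table 3.1 (male speakers), rows = actual
# vowel, columns in the order /a/, /e/, /i/, /o/, /u/.
TABLE_3_1 = {
    "/a/": [46, 0, 4, 0, 0],
    "/e/": [2, 44, 0, 4, 0],
    "/i/": [6, 0, 40, 0, 4],
    "/o/": [0, 3, 0, 44, 3],
    "/u/": [3, 1, 1, 0, 45],
}
ORDER = ["/a/", "/e/", "/i/", "/o/", "/u/"]

def percent_correct(table, order):
    """% correct per vowel = diagonal count * 100 / row total."""
    return {v: 100.0 * table[v][i] / sum(table[v])
            for i, v in enumerate(order)}
```

The same helper applies unchanged to the female-speaker counts of Table 3.2.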
  • 5. Independent Speaker Recognition for Native English Vowels 381

Table 3.2: Percentage recognition of vowels for female speakers.

Actual \ Predicted    /a/   /e/   /i/   /o/   /u/   % correct
/a/                    34     4     0     0     2     85%
/e/                     0    37     0     3     0     92%
/i/                     0     4    34     0     2     86%
/o/                     0     0     0    39     1     98%
/u/                     3     0     0     0    37     94%

Conclusion
This work presented the standard method for vocal tract shape estimation, which has been the basis for many successful automatic speech recognition (ASR) systems, and described a "standard" approach to vowel classification based on formants. Using the Euclidean distance measure we achieved 80 to 95 percent speaker recognition.

Acknowledgements
We would like to thank the Management and the Principal of Madanapalli Institute of Technology and Science, Madanapalli, A.P., for their cooperation and encouragement.

References
[1] L.R. Rabiner and R.W. Schafer, Digital Processing of Speech Signals, Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in South Asia, 1978, pp. 54–101, 412–460.
[2] T.F. Quatieri, Discrete-Time Speech Signal Processing: Principles and Practice, 2002, pp. 56–59.
[3] P. Ladefoged, R. Harshman, L. Goldstein, and L. Rice, "Generating vocal tract shapes from formant frequencies," J. Acoust. Soc. Am., vol. 64, no. 4, 1978, pp. 1027–1035.
[4] M. Bhaowal and K. Chawla, "Isolated word recognition for English language using LPC, VQ and HMM," pp. 2–4.
[5] G.E. Peterson and H.L. Barney, "Control methods used in a study of the vowels," J. Acoust. Soc. Am., vol. 24, pp. 175–184.
[6] P. Rose, "Long- and short-term within-speaker differences in the formants of Australian hello," J. Int. Phonetic Assoc., vol. 29, no. 1, 1999, pp. 1–31.
[7] Ahmed Ali, Safiullah Bhatti, and Muhammad Saleem Mian, "Formants based analysis for speech recognition," IEEE, 2006.