1. Dept of Biomedical Engineering
AUTOMATIC LANGUAGE TRANSLATION SOFTWARE FOR AIDING COMMUNICATION BETWEEN INDIAN SIGN LANGUAGE AND SPOKEN ENGLISH USING LABVIEW
By
YELLAPU MADHURI,
Reg. No. 1651110002,
M.Tech II Year,
SRM University.
Guided by
Ms. G. ANITHA,
Assistant Professor (O.G) / BME
2. INTRODUCTION
SIGN LANGUAGE (SL)
The natural way of communication for speech- and/or hearing-impaired people.
SIGN
A movement of one or both hands, accompanied by a facial expression, that
corresponds to a specific meaning.
TRANSLATOR
Enables communication between a speech- and/or hearing-impaired person and a
person who does not understand sign language, avoiding the intervention of an
intermediary and allowing both parties to communicate in their natural way.
6. AIM
To develop interactive mobile application software for the automatic translation of
Indian Sign Language into spoken English and vice versa, to assist communication
between speech- and/or hearing-impaired people and hearing people. The translator
should convert one-handed finger-spelling input of Indian Sign Language alphabets
A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English
words into Indian Sign Language picture output.
8. OBJECTIVE
For Sign to Speech conversion:
1. Acquire images using the inbuilt camera of the device.
2. Perform vision analysis in the operating system and provide speech output
through the inbuilt audio device.
For Speech to Sign conversion (a minimal code sketch follows this list):
1. Acquire speech input using the inbuilt microphone of the device.
2. Perform speech analysis in the operating system and provide visual sign
output through the inbuilt display device.
Minimize hardware requirements and expense.
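Since LabVIEW block diagrams cannot be reproduced in text, the speech-to-sign path can be illustrated with a minimal Python sketch. The speech_recognition and opencv-python packages, the isl_images/ picture folder, and the helper names listen_for_word and show_sign are illustrative assumptions, not part of the original LabVIEW implementation.

import os
import cv2
import speech_recognition as sr

ISL_IMAGE_DIR = "isl_images"  # hypothetical folder: one picture per supported word

def listen_for_word():
    """Capture one utterance from the inbuilt microphone and return it as text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    # Google's free web recognizer; any other engine would slot in the same way.
    return recognizer.recognize_google(audio).lower()

def show_sign(word):
    """Display the stored Indian Sign Language picture for a recognized word."""
    path = os.path.join(ISL_IMAGE_DIR, word + ".png")
    if not os.path.exists(path):
        print("No ISL picture stored for '%s'" % word)
        return
    cv2.imshow("ISL sign: " + word, cv2.imread(path))
    cv2.waitKey(0)  # wait for a key press before closing the window
    cv2.destroyAllWindows()

if __name__ == "__main__":
    show_sign(listen_for_word())

A matching sketch for the sign-to-speech direction appears with the pattern-matching slide below.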
9. LITERATURE REVIEW
1. Jose L. Hernandez-Rebollar et al. [2004]: A novel approach for capturing and
translating isolated gestures of ASL into spoken and written words, using a
combined AcceleGlove and a two-link arm skeleton.
2. Paschaloudi N. Vassilia et al. [May 2006]: An extensible system to recognize GSL
modules for signed or finger-spelled words, using isolated or combined neural
networks.
3. Beifang Yi [May 2006]: Explorations in computer graphics, interface design, and
human-computer interaction, with emphasis on software development and
implementation of an ASL translation system.
4. Andreas Domingo et al. [2007]: Automatic sign language translation using a
pattern-matching algorithm.
5. Rini Akmeliawati et al. [May 2007]: Real-time English translation of Malaysian
Sign Language using neural networks.
6. Abang Irfan Halil et al. [2007]: Development details of a recognition system built
with state-of-the-art graphical programming software.
10. ALGORITHM CRITERIA
1. REAL-TIME
2. VISION-BASED
3. AUTOMATIC AND CONTINUOUS OPERATION
4. EFFICIENT TRANSLATION
11. MATERIALS
Software tools used: National Instruments LabVIEW and toolkits
LabVIEW 2012 version
Vision Development Module
Vision Acquisition Module
Hardware tools used:
Laptop inbuilt web camera - Acer Crystal Eye
Laptop inbuilt speaker - Acer eAudio
24. USER INTERFACE OF PATTERN MATCHING FOR SIGN LANGUAGE TO ENGLISH TRANSLATION
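The deck implements this step with NI Vision's pattern matching; as a rough stand-in, the sketch below uses OpenCV template matching in Python. The templates/ folder layout, the 0.7 score threshold, the helper names load_templates and match_sign, and the pyttsx3 text-to-speech engine are assumptions for illustration, not the authors' exact configuration.

import glob
import os
import cv2
import pyttsx3

def load_templates(folder="templates"):
    """Map each letter/number to its stored grayscale template image."""
    templates = {}
    for path in glob.glob(os.path.join(folder, "*.png")):
        name = os.path.splitext(os.path.basename(path))[0]
        templates[name] = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return templates

def match_sign(frame, templates, threshold=0.7):
    """Return the best-matching sign name, or None if all scores fall below threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    best_name, best_score = None, threshold
    for name, template in templates.items():
        # Templates are assumed smaller than the captured frame.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        score = result.max()
        if score > best_score:
            best_name, best_score = name, score
    return best_name

if __name__ == "__main__":
    templates = load_templates()
    camera = cv2.VideoCapture(0)  # laptop's inbuilt web camera
    ok, frame = camera.read()
    camera.release()
    if ok:
        name = match_sign(frame, templates)
        if name is not None:
            engine = pyttsx3.init()
            engine.say(name)       # speak the recognized letter or number
            engine.runAndWait()

Normalized cross-correlation is only one plausible scoring choice; NI Vision's pattern matching uses its own edge- and intensity-based matchers.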
25. BLOCK DIAGRAM OF SIGN LANGUAGE TO SPEECH TRANSLATOR
26. DATABASE OF ONE-HANDED ALPHABETS AND NUMBERS OF SIGN LANGUAGE
27. ADVANTAGES
Eliminates the need for an interpreter between sign language users and
spoken-language users.
Easy to incorporate and execute in any supported operating system.
Real-time translation.
Does not require any additional hardware.
28. FUTURE APPLICATIONS
Web conferencing
Computer and video games
Precision surgery
Domestic applications
Wearable computers
29. CHALLENGES
Background subtraction for robust usage (see the sketch after this list).
Making the system user-independent.
Pattern-matching training.
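One common way to attack the background-subtraction challenge (not necessarily the authors' method) is a Gaussian-mixture background model; a minimal Python/OpenCV sketch, with illustrative parameter values:

import cv2

# Gaussian-mixture background model; history and shadow settings are assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground (hand) mask
    hand_only = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground", hand_only)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press 'q' to quit
        break

camera.release()
cv2.destroyAllWindows()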
30. LIMITATIONS
System is trained on a limited database.
Possibility of misinterpretation of closely related gestures.
Translates only static signs; not trained to translate dynamic signs.
Facial expressions are not considered.
Possibility of misinterpretation of words with similar pronunciation.
31. CONCLUSION
Feature vectors comprising whole image frames, which contain all aspects of the
sign, are considered.
Geometric features extracted from the signer's dominant hand improve the accuracy
of the system to a great degree (a sketch of such features follows this list).
Training the speech recognition on shorter phrases is more difficult than on
longer phrases.
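The deck does not spell out which geometric features are extracted from the dominant hand, so the following Python sketch shows a plausible stand-in set (contour area, perimeter, solidity, Hu moments) computed from a binary hand mask; the function name hand_features and the choice of features are assumptions.

import cv2

def hand_features(mask):
    """Compute simple geometric descriptors from a binary hand mask."""
    # OpenCV 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {}
    hand = max(contours, key=cv2.contourArea)    # largest blob taken as the hand
    area = cv2.contourArea(hand)
    perimeter = cv2.arcLength(hand, True)
    hull_area = cv2.contourArea(cv2.convexHull(hand))
    solidity = area / hull_area if hull_area > 0 else 0.0  # spread of the fingers
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()        # rotation-invariant shape
    return {"area": area, "perimeter": perimeter,
            "solidity": solidity, "hu_moments": hu}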
32. FUTURE WORK
To increase the performance and accuracy of the ASLT, the quality of the training
database should be enhanced to ensure that the system picks up the correct and
significant characteristics of each individual sign.
Current collaboration with Assistive Technology researchers and members of the
Deaf community should be continued for further design progress.
This project did not focus on facial expressions, although it is well known that
facial expressions convey an important part of sign languages.
The system can be applied in many areas, for example accessing government
websites that provide no video clips for deaf users, or filling out forms online
where no interpreter is present to help.
33. REFERENCES
Andreas Domingo, Rini Akmeliawati, Kuang Ye Chow, 'Pattern Matching for Automatic
Sign Language Translation System using LabVIEW', International Conference on
Intelligent and Advanced Systems, 2007.
Beifang Yi, 'A Framework for a Sign Language Interfacing System', Ph.D. dissertation
(advisor: Dr. Frederick C. Harris), Computer Science and Engineering, University of
Nevada, Reno, May 2006.
Helene Brashear and Thad Starner, 'Using Multiple Sensors for Mobile Sign Language
Recognition', Wearable Computing Laboratory, ETH - Swiss Federal Institute of
Technology, Zurich, Switzerland.