Presentation of a Kinect Game developed in the VirtualSign Project for ARTECH 2015 - The 7th International Conference on Digital Arts, hosted by Universidade Aberta in the historic town of Óbidos, Portugal.
2. This work was supported by the Engineering Institute of Oporto and GILT (Games, Interaction and Learning Technologies) under grant No. 60973089, an FCT (Fundação para a Ciência e a Tecnologia) project.
3. Main Goals
Implementation of a bidirectional translation system for Portuguese Sign Language (LGP)
– Focused primarily on educational and didactic aspects
4. Sign Language
Scope:
• configuration and movement of the hands;
• body position/inclination;
• facial expressions;
13. Game Play
1. The inventory script.
2. The graphical interface.
3. The map has objects, and those objects contain scripts on them.
4. The inventory consists of 42 spaces.
5. The handling of collisions between objects.
Score
• The player’s score is incremented during the game;
• The shorter the time between the acquisition of two objects, the greater the score.
15. Main Results
1. Gesture recognition using Kinect and Data Gloves
2. Gestures classifiers
3. Automatic Classification of Gestures
4. 3D avatar to represent text in PSL
5. Chat application
6. Serious Game for PSL
This project, called Virtual Sign, belongs to the area of Human-Computer Interaction.
The VS project aims to develop a tool for bidirectional translation of Portuguese Sign Language; to complement this tool we have also developed a game.
Contextualizing this project in the main entities…
Virtual Sign is supervised and coordinated by the research group GILT (Games, Interaction & Learning Technologies), and we are running it within the framework of a nationally funded project.
This project aims to develop and evaluate a model that facilitates access to digital content, in particular educational content, for deaf and hearing-impaired people, creating the conditions for greater social inclusion of people with this kind of disability.
To facilitate, and even improve, the process of learning Portuguese Sign Language, we have developed a serious game that teaches the basic gestures used to communicate with deaf people.
Access to digital content will be supported by an automatic translator between written Portuguese (LEP) and Portuguese Sign Language (LGP), backed by an interaction model.
The main sources of Portuguese Sign Language that we had to handle in order to develop the model were:
• the hands (configuration and movement);
• body motion; and
• facial expressions.
Handshape: this refers to the hand configuration used at the beginning of any word of Portuguese Sign Language (PSL).
Palm orientation: refers to the direction in which the hand is turned to produce a sign, which includes palm up, palm down, palm right, palm left, palm away from you, or palm facing you.
Location: another aspect under analysis, referring to the physical parameters, or body location, where the sign is produced.
The general physical parameters for sign language production are approximately four inches in front of the chest.
Non-manual markers: signals or gestures performed without the use of the hands, mostly with the shoulders, head, and face, to relay a message.
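The four parameters above (handshape, palm orientation, location, non-manual markers) can be grouped into a single record. The sketch below is a minimal Python illustration; the field names and the example values are hypothetical, not taken from the actual PSL inventory.

```python
from dataclasses import dataclass, field

@dataclass
class Sign:
    handshape: str                 # initial hand configuration (hypothetical name)
    palm_orientation: str          # "up", "down", "left", "right", "toward", "away"
    location: str                  # body location where the sign is produced
    non_manual: list = field(default_factory=list)  # e.g. ["raised_eyebrows"]

# Hypothetical example value for illustration only
example = Sign(handshape="config_12", palm_orientation="away", location="chest_front")
```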
Tools being used in our project to support the development of the model:
Natural User Interfaces (NUI) using depth sensors such as the Kinect, and data gloves.
In the first part of the project we focused on the letters of the alphabet, where the range of hand motion is very small and consists mainly of finger configuration and the orientation of the hand in a static position.
This has the big advantage of not requiring many accessories to perform the recognition.
The performance of the classifier was evaluated on two distinct feature sets. We first evaluated the accuracy of the classifier based only on the input from the data glove; the estimated error in this setting is 0.02.
We then used all the data, including the input from both the data glove and the Kinect (right arm); the error rate dropped to 0.01.
The difference of the mean error observed in these two situations is statistically significant.
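As an illustration of this kind of comparison, the sketch below evaluates a simple 1-nearest-neighbour classifier on two synthetic feature sets, one standing in for glove-only input and one for glove plus Kinect input. The data, feature dimensions, and classifier are all assumptions for demonstration; they do not reproduce the project's actual classifier or its error figures.

```python
import random

def nn_error(train, test):
    """Fraction of test samples misclassified by a 1-nearest-neighbour rule."""
    def classify(x):
        feats, label = min(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        return label
    return sum(1 for f, y in test if classify(f) != y) / len(test)

def make_samples(n, with_kinect, rng):
    # Synthetic stand-in data: label 0/1, glove = 5 noisy finger-flex values;
    # Kinect adds 3 arm-joint angles that separate the classes further.
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        feats = [rng.gauss(y, 1.0) for _ in range(5)]
        if with_kinect:
            feats += [rng.gauss(2 * y, 0.5) for _ in range(3)]
        data.append((feats, y))
    return data

rng = random.Random(42)
err_glove = nn_error(make_samples(200, False, rng), make_samples(100, False, rng))
err_both = nn_error(make_samples(200, True, rng), make_samples(100, True, rng))
```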
2nd phase: in order to translate words into gestures, we moved to dynamic recognition of gestures. In this phase we faced challenges such as:
_ the sequential combination of movements and hand configurations that we have to control;
_ the significant variations in the performance of a gesture, and in the speed of the hands and the body position;
_ the difficulty of perceiving where each word begins and where it ends.
In PSL there are 54 (fifty-four) possible hand configurations (states);
A word is defined by a transition from an initial state to a final state;
Each state transition has an associated movement;
_ Word classification: we have used 3 classifiers:
1- Finite Automata
2- Algorithms for Hierarchical Classification
3- Sequence Alignment Algorithms
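A minimal sketch of the finite-automaton view: a word is keyed by its initial hand configuration (state), the movement performed, and the final configuration. The configuration and movement names and the two-entry lexicon below are hypothetical; the real PSL inventory has 54 configurations.

```python
# Hypothetical lexicon: (initial state, movement, final state) -> word
LEXICON = {
    ("config_12", "move_forward", "config_03"): "hello",
    ("config_07", "move_circle", "config_07"): "please",
}

def classify_word(start_config, movement, end_config):
    """A word is a transition from an initial state to a final state with an
    associated movement; transitions outside the lexicon are rejected."""
    return LEXICON.get((start_config, movement, end_config), "<unknown>")
```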
The game architecture of the project consists of two applications: the game itself, developed in Unity 3D, and the interface that connects VirtualSign to Unity.
_ At the top level of this structure there is the interface. All the functionalities of the project can be accessed by the user through this layer.
_ There are 3 components at the 2nd level:
1. The sockets component, responsible for linking the Unity game application to the Kinect in order to provide the player input.
2. The game engine, responsible for the execution of the game itself, representing the functions of Unity.
3. The business component, where the game functions are made available to the player.
At the lowest layer we have the hardware devices needed to recognize the gestures and provide input to the layers above.
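The sockets component can be pictured with a small sketch: the Python snippet below stands in for the recognizer-to-Unity link, sending one recognized gesture label per line over a local TCP connection. The newline-delimited protocol and the label are assumptions, not the project's actual wire format.

```python
import socket
import threading

def send_gesture(conn, label):
    # Assumed protocol: one recognized gesture label per newline-terminated line
    conn.sendall((label + "\n").encode("utf-8"))

def recv_gesture(conn):
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = conn.recv(64)
        if not chunk:
            break
        buf += chunk
    return buf.decode("utf-8").strip()

# A local TCP connection stands in for the recognizer-to-Unity link.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def recognizer_side():
    conn = socket.create_connection(("127.0.0.1", port))
    send_gesture(conn, "hello")  # a recognized gesture label (hypothetical)
    conn.close()

t = threading.Thread(target=recognizer_side)
t.start()
game_side, _ = server.accept()
label = recv_gesture(game_side)
t.join()
game_side.close()
server.close()
```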
Regarding Game play:
1_ The first script developed was the inventory script. The inventory stores the items acquired by the player and provides access to them at any time during the game.
2_ Having established some objects on the map, we proceeded to create the graphical interface.
3_ The map has objects, and those objects contain scripts on them.
4_ The inventory consists of 42 (forty-two) spaces that are empty upon initialization.
5_ With the inventory set up and ready to receive the objects, we developed the handling of collisions between these objects in order to detect whenever the player is within a reasonable distance to perform the interaction.
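A minimal sketch of points 4 and 5 above (the actual scripts are Unity components): a fixed 42-slot inventory plus a distance check standing in for the collision handling. The 1.5-unit interaction radius is an assumed value, not taken from the game.

```python
class Inventory:
    """Fixed 42-slot inventory; slots are empty (None) upon initialization."""
    SLOTS = 42

    def __init__(self):
        self.slots = [None] * self.SLOTS

    def add(self, item):
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = item
                return i          # index of the slot used
        raise RuntimeError("inventory full")

def within_reach(player_pos, obj_pos, radius=1.5):
    # Squared-distance check standing in for collision handling;
    # the radius is an assumed value.
    dx = player_pos[0] - obj_pos[0]
    dy = player_pos[1] - obj_pos[1]
    return dx * dx + dy * dy <= radius * radius
```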
Regarding the Score
1_ The player’s score is incremented during the game as the player acquires new gestures.
2_ The shorter the time between the acquisition of two objects, the greater the score.
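One way to realize this scoring rule is an award that decays with the elapsed time between pickups. The formula below is an illustrative sketch; the base, floor, and decay shape are assumptions, not the game's actual formula.

```python
def score_increment(seconds_since_last_pickup, base=100, floor=10):
    # Illustrative rule: the award decays with the time elapsed since the
    # previous pickup, never dropping below `floor`.
    return max(floor, int(base / (1.0 + seconds_since_last_pickup)))
```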
In order to progress in the game, players are encouraged to perform gestures matching certain words or phrases.
These gestures are recorded in real time using the Kinect and a pair of 5DT gloves.
After being saved, each gesture is analysed and the player’s performance is evaluated.
To conclude, these are the main results of the Virtual Sign project:
1. Gesture recognition using Kinect and 5DT Data Gloves
2. Gesture classifiers – (the development of gesture classifiers for Portuguese Sign Language, based on the inputs generated by the Kinect and the data gloves)
3. Automatic Classification of Gestures
4. 3D avatar to represent text in PSL
5. Chat application, and finally
6. A Serious Game for PSL