TELKOMNIKA, Vol.16, No.3, June 2018, pp. 1367~1375
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v16i3.8417
Received December 23, 2017; Revised March 17, 2018; Accepted April 5, 2018
Speech Input as an Alternative Mode to Perform
Multi-touch Gestures
Nor Hidayah Hussain*, Tengku Siti Meriam Tengku Wook, Siti Fadzilah Mat Noor,
Hazura Mohamed
Research Center for Software Technology and Management, Faculty of Information Science and
Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia
*Corresponding author, e-mail: nhidayahhussain@gmail.com, fadzilah@ukm.edu.my
Abstract
Speech is a fundamental and the most dominant form of communication. Speech input may facilitate natural interaction between humans and computers, and for children this input mode supports their interaction with application systems. This study addresses speech input as an alternative mode to improve multi-touch gesture interactions. Previous studies show that children find it difficult to perform multi-touch gestures successfully, even though multi-touch gestures are part of the basic core gestures adopted in most learning applications. This study uses the Wizard-of-Oz method and post-interviews, involving nine preschool children aged four to six years. The results show the children's ability to interact with the system using speech input and the positive feedback they gave on the system prototype. The findings of this study highlight the opportunities and challenges in using speech input to increase the success of children's interaction with multi-touch gestures.
Keywords: speech input, multi-touch gestures, preschool children
Copyright © 2018 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
The presence of speech modes in touchscreen mobile devices (e.g., Apple iOS Siri) and browser platforms (e.g., Google) has increased children's potential to explore and use the speech features provided. The current trend is that children learn to interact with touch screens from infancy [1], so it is not surprising that children are among the fastest and most responsive users of new technology. Additionally, the integration of technology into the teaching and learning process at the preschool level is in accordance with this Net generation's needs. Such technology tools include the Internet, multimedia presentations and interactive whiteboards. In addition, the latest touch-screen technology has recently been widely used for learning purposes [2-5]. This includes the latest multi-touch devices, such as tablets and smartphones, that provide the basic core gestures: single touch and multi-touch.
Previous research [1], [3], [6-8] shows that children manage to perform single-touch gestures successfully. This is because a single touch such as a tap or a press is easy to perform, since it involves a single index finger and is similar to mouse clicking in the desktop environment [9]. Nevertheless, preschool children face problems performing multi-touch gestures that involve more than one finger, such as rotation, zoom-in and zoom-out. Children in this age group have limited motor and cognitive skills that are still developing. Small fingers and weaker arms contribute to the fact that their touch input often strays far from the target object, so the system is unable to recognize it [8]. Moreover, children's fingers touch objects inaccurately, and the low coordination of their motor and visual skills prevents their gestures from reaching the target accurately [7]. The difficulty of performing multi-touch gestures may lead to ineffective use of learning applications, which in turn affects the achievement of their learning objectives.
Therefore, this study proposes speech input as an alternative mode for performing multi-touch gestures successfully. The importance of using speech input is to minimize children's manual finger control [10], which is limited because their motor skills are still developing. In fact, this input mode promotes natural interaction because children use their own voice without needing any tools [11-12], and it is easy and quick to operate on the system [13-14].
The next section describes several considerations related to speech input from previous studies. Section 3 lists the objectives of this study, followed by Section 4, which presents the methods used to examine the appropriateness of speech input as an alternative. Section 5 presents the analysis and results of this study. Finally, the last section summarizes the conclusion and potential future work.
2. Related Work
Speech is a basic, human-to-human mode of communication. Today, technological development has introduced this mode into human-computer interaction. This is because the speech mode promotes direct-manipulation interaction and requires minimal manual control [10]. In addition, interaction using speech can be handled quickly and is easy for users, especially children. Nevertheless, developing a speech input system for children is quite challenging, since the speech characteristics of these users differ from those of adults [15]; in fact, each child has different speech characteristics. However, according to Shneiderman [16], a designer may integrate speech input into a system effectively if they understand the speech characteristics and cognitive processes of the users involved. Thus, several criteria, such as children's ability to interact with the system using speech, children's speech and language development, children's cognitive considerations towards speech, and the effectiveness of speech input on touch user interfaces, should be considered before speech is integrated into the system.
2.1. Children’s interaction ability using speech input on the system
Previous studies have discussed matters related to children's interaction with speech input. Among them are the use of speech commands by children with the Apple iOS Siri application [17], question-answering systems [18], web-search systems [13-14], spoken dialogue systems [19] and computer games [20]. Analysis of those studies shows that young children (especially four and five year olds) used polite words and gave commands in descriptive sentences to the system [13],[19]. For example, their speech commands began with polite phrases such as “Please..” or “Could you..”. It was also found that several children used high-pitched sounds, such as screaming, when they gave commands to the system. This was due to their excitement in interacting with a system that could actually give feedback, and to make sure their commands were received by the system [15],[17],[19]. They often used a variety of words that had the same meaning [13],[15],[21], owing to their weakness in selecting the appropriate word. Nevertheless, this changes as they gain experience: six-year-old children, for instance, are able to give short and specific commands [13]. Their speech development fluctuates and becomes more consistent with increasing age, eventually reaching adult-like speech [15],[22]. Furthermore, children's feedback on speech commands is reported in the studies by [18] and [23-24]. Children said that they enjoyed and felt excited about interacting with the system using speech commands [18],[23-24]. In addition, they expressed their intention to play more games and ask more questions of the system [18]. This is because speech interaction fulfils their fantasy and curiosity to interact with the system, and they want to explore the extent of the system's knowledge through their questions. Overall, most of the children agreed to interact with a speech system in the future [18],[23-24]. Therefore, children's ability to interact using speech input needs to be investigated so that the speech integrated into the system contributes to performing multi-touch gestures successfully.
2.2. Speech and language development of the children
According to [25], preschool-age children are particularly sensitive to the use of speech and language. At this stage, they already know how to use simple rules of grammar, such as personal pronouns (I, you, he/she) and directions (outside, inside, up, down). Their vocabulary grows from about 250 words at age four to as many as 14,000 words at age six. With such vocabulary acquisition, they are able to construct sentences containing four or more words [26-27]. Some of them are able to share their experiences and activities at home and at school with other people. Their speech can be understood by the people around them, as they are able to produce most sounds correctly. Moreover, children's ability to distinguish the syllables of a word appears early in their development [28], especially at age
three [29]. Various engaging activities, such as word games, clapping out syllables and songs, are taught in kindergartens and preschools to create syllable awareness among children. This is because syllable awareness is part of phonological awareness, which is a key determinant of children's developing reading and spelling skills [28]. Phonological awareness is sensitivity to the sounds of language and is a difficult part of children's reading development. Hence, in this study, the number of syllables in the word commands used by children needs to be examined so that the speech commands integrated into the system fulfil their requirements.
2.3. Children’s cognitive consideration towards speech input
Shneiderman [16] stated that speech interaction between a human and a computer differs from human-to-human speech interaction because it is tied to a cognitive process: speech interaction between humans and computers involves presenting and transferring information. For preschool children, limited cognitive skills affect their interaction using speech input. According to [19], children tend to pause between words when they talk, because they require more time to formulate the words in their speech [30]. To give speech commands to a system, they need sufficient domain knowledge to produce the appropriate word. However, this is quite difficult for them, since they are slow in processing information and weak in determining the relevant information needed by the system [31]. Therefore, this study provides a set of appropriate words that children can use when they interact with the speech system. They are required to produce their own word commands first, so that the tendency in the number of syllables can be obtained directly from them; they are assisted with the word set only if they do not know a word.
2.4. The effectiveness of speech input on touch user interfaces
Previous studies have discussed the effectiveness of speech input. A study by [32] proposed the Speak-As-You-Swipe (SAYS) multimodal interface, which enables text entry using voice and gestures on a virtual keyboard on mobile devices. The integration of swipe gestures and voice input was an alternative for resolving slow text entry on the keyboard. The results of that study show that the accuracy of predicted words increased by 4% when using the SAYS interface. Apart from that, voice augmented manipulation (VAM) was proposed by [33]. The aim of that study was to augment user operations for scrolling, zooming and panning in a mobile environment. The technique was proposed to decrease repeated finger gestures during these three operations. Their findings show that the technique helps users to scroll, zoom and pan smoothly without repeating finger gestures. Next, the augmentation of speech input in a Tetris arcade game for learning purposes was studied by [34]. She emphasized the concept of a retrieval process that requires users to recall input used previously. The findings of her study indicate that using augmented speech to repeatedly attempt recall from memory can improve long-term retention through retrieval practice. Meanwhile, another study proposes voice-based control of a search interface for children aged eight to ten years [13-14]. Its results show that the combination of a voice-controlled interface and touch can enhance the usability of web search engines for children and, at the same time, help children who have problems with writing. Given these reported improvements in input-mode performance in interactive systems, the use of speech input as an alternative to problematic multi-touch gestures is expected to improve children's learning process.
3. Objectives of the Study
The aim of this study is to examine speech input as an alternative mode for solving the difficulty preschool children have with multi-touch gestures. To achieve this purpose, two objectives are to be attained:
a) To verify children's ability to interact using speech input with a prototype of multi-touch gestures.
b) To identify the number of syllables per word in children's speech input.
4. Methodology
Two phases are involved in examining speech as an alternative mode for multi-touch gesture interaction, namely a Wizard-of-Oz experiment and a post-interview session. Both phases are important to ensure that the data collection is informative and of good quality.
4.1. Wizard-of-Oz experiment
To achieve objectives A and B, a study in the form of a so-called "Wizard-of-Oz" experiment (WoZ) was designed to gather data on children's speech and interaction styles towards the speech-based multi-touch gestures prototype. In a WoZ study, a partially functional prototype simulates functionality that does not yet exist in the application interface [35-36] and is used to test the limited functionality of the prototype before final design development. The participant assumes that they are interacting with a fully functional application, but it is actually controlled by a human operator behind a curtain (called the Wizard) [9], [36-37] (Figure 1). In total, four facilitators, including the researcher, were involved in this study: one acted as the Wizard, the researcher stayed next to the participant to give additional instructions and help, and two other facilitators entertained the waiting participants in a nearby room.
Figure 1. Interaction between participant and the system
Figure 2. Speech-based multi-touch gestures system
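To make the Wizard's role concrete, the sketch below shows, in Python with tkinter, one minimal way a wizard-operated display could work: the wizard listens to the child's spoken command and presses a key, and the on-screen object is transformed so that the child perceives a speech-controlled system. This is an illustrative assumption, not the authors' prototype; the key bindings, step sizes and class names are all hypothetical.

```python
# Minimal Wizard-of-Oz sketch (hypothetical, not the study's implementation):
# the wizard hears the child's spoken command ("putar", "besar", "kecil") and
# presses a key; the display applies the corresponding transform so the child
# perceives a speech-controlled system.
import math
import tkinter as tk

STEP_ANGLE = 15    # degrees applied per "putar" (rotate) command -- assumed
STEP_SCALE = 1.1   # scale factor applied per "besar"/"kecil" command -- assumed

class WizardDisplay:
    def __init__(self, root):
        self.canvas = tk.Canvas(root, width=400, height=300, bg="white")
        self.canvas.pack()
        self.angle = 0.0   # current rotation of the colour image
        self.scale = 1.0   # current scale of the colour image
        self.item = self.canvas.create_polygon(self._points(), fill="orange")
        # Wizard key bindings: r = rotate, b = zoom in (besar), k = zoom out (kecil)
        root.bind("r", lambda e: self.apply("putar"))
        root.bind("b", lambda e: self.apply("besar"))
        root.bind("k", lambda e: self.apply("kecil"))

    def _points(self):
        # Rectangle centred at (200, 150), rotated and scaled by the current state.
        w, h = 80 * self.scale, 50 * self.scale
        a = math.radians(self.angle)
        pts = []
        for dx, dy in [(-w, -h), (w, -h), (w, h), (-w, h)]:
            pts += [200 + dx * math.cos(a) - dy * math.sin(a),
                    150 + dx * math.sin(a) + dy * math.cos(a)]
        return pts

    def apply(self, command):
        # Called by the wizard after hearing the child's speech command.
        if command == "putar":
            self.angle += STEP_ANGLE
        elif command == "besar":
            self.scale *= STEP_SCALE
        elif command == "kecil":
            self.scale /= STEP_SCALE
        self.canvas.coords(self.item, *self._points())

root = tk.Tk()
WizardDisplay(root)
root.mainloop()
```

In a setup like the one described below, the wizard's notebook would drive the participant-facing monitor, so the child sees only the transformed image and never the keyboard input.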
This study was carried out at the Multimedia Studio of the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, with nine preschool children. The children were aged four years (3 children), five years (3 children) and six years (3 children), from the Emaan Kindy Bangi Avenue kindergarten in Kajang, Selangor. All participants had prior experience of using gadgets (smartphones or tablets) and were able to speak, according to the selection made by their class teacher. The kindergarten's consent was obtained before conducting the study.
In this study, a checklist form was used as the instrument for data collection. The form is divided into three parts: Section A consists of 20 items (two items on the participant's demographics and 18 items related to task completeness and time to complete the task); Section B contains six items to identify the number of syllables per word in the children's speech; and Section C contains four items related to the children's feedback on the prototype system, for the post-interview session.
The devices used in this study were a Dell personal computer monitor (for display to the participant), a speaker, a Dell Inspiron 13 7000 series notebook (controlled by the wizard) and an HDMI cable connecting the monitor and the notebook. A speech-based multi-touch gestures prototype application was developed for this study (Figure 2). The purpose of the prototype was to examine how a participant interacts using speech input. A set of appropriate words was provided to help the participant give commands to the system. The language medium used in this study was Malay, both for the speech commands to the system and for the word sets provided to the participants. This ensured that the participants could speak naturally, as they do with friends or at home. The study took place in a quiet room isolated from the other participants, since it is important to create an environment that allows the participant to interact naturally, with full attention and free thinking towards the system [12],[14]. The participant sat on a chair facing the computer screen. Meanwhile, the wizard sat on a chair facing the notebook behind the
curtain, controlling the user interface of the system and capturing the participant's speech commands and reactions for each interaction between the participant and the system. The researcher, who stayed next to the participant, recorded each of the participant's interactions on the checklist form.
4.2. Post-interview
Each participant was interviewed by the researcher after the WoZ experiment. The purpose was to gain their feedback on the system, which is important as input for future system development. Each participant's feedback was recorded on the checklist form.
4.3. Procedure
The researcher gave instructions to the participants covering an introduction to the prototype system, how to use it, the absence of restrictions on the word commands they could give, and what counts as a successful task. Next, the participants had to follow the voice instructions from the system. The first step was a pre-interview session conducted by the system about the participants' demographics. In the next step, the system announced the task activity to be completed and previewed how to interact with the system. Using the task list provided, the participants were given 10 minutes to interact with the system using only speech input. They were free to give any word commands in order to apply multi-touch movements to the objects. There were six multi-touch gesture tasks to be completed on two objects. When a task was completed successfully, the system gave positive audiovisual feedback. If a participant did not know a word, the researcher gave a clue to the word command; if he or she was still stuck, the researcher gave a set of appropriate words for the participant to select from. After the experiment, the participants were interviewed for feedback on the speech system used and for recommendations to improve the system in the future. Video and audio recordings were made during the experiment for data analysis purposes.
4.3.1 Multi-touch gestures’ tasks
Two images are shown on screen: a colour image (left) and a target image (right). The colour image is the experimental object to be manipulated, while the target image is a reference image. The participants are required to apply a multi-touch movement to the object (left) using only speech commands. There are three multi-touch gestures, namely rotation, zoom-in and zoom-out, as shown in Figures 3-5. Each gesture has two different objects to be completed, making six tasks in all; a sketch of how such commands could drive the on-screen transforms is given after the task list below.
1) Rotation: The participants were required to give any rotation command (example: “Pu-tar”) to the colour image until the image reached the same orientation as the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is counted as successful when the colour image changes to grey.
2) Zoom-In: The participants were required to give any scale-up command (example: “Be-sar”) to the colour image until the image reached the same size as the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is counted as successful when the colour image changes to grey.
3) Zoom-Out: The participants were required to give any scale-down command (example: “Ke-cil”) to the colour image until the image reached the same size as the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is counted as successful when the colour image changes to grey.
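The task logic described above can be summarised as stepping the colour image's orientation or scale towards the target each time a command is recognised, and signalling success once the image is within the boundary tolerance. The sketch below illustrates that idea; the example command words are those suggested in the task descriptions, but the step sizes and tolerance values are assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the task logic (not the authors' code): each recognised
# spoken command nudges the colour image towards the target, the guide box turns
# blue once the image is within the boundary tolerance, and the task is then
# marked complete (image shown in grey).
from dataclasses import dataclass

STEP_ANGLE = 15.0   # degrees per "putar" (rotate) command -- assumed step size
STEP_SCALE = 0.1    # scale change per "besar"/"kecil" command -- assumed
TOL_ANGLE = 10.0    # boundary tolerance for rotation, in degrees -- assumed
TOL_SCALE = 0.05    # boundary tolerance for the zoom tasks -- assumed

@dataclass
class ImageState:
    angle: float = 0.0  # current rotation of the colour image
    scale: float = 1.0  # current scale of the colour image

def apply_command(state: ImageState, command: str) -> None:
    """Apply one Malay speech command, as used in the study, to the image state."""
    if command == "putar":                 # rotate
        state.angle = (state.angle + STEP_ANGLE) % 360.0
    elif command == "besar":               # zoom in
        state.scale += STEP_SCALE
    elif command == "kecil":               # zoom out
        state.scale = max(0.1, state.scale - STEP_SCALE)

def within_boundary(state: ImageState, target: ImageState, task: str) -> bool:
    """True when the colour image matches the target within the boundary tolerance
    for the given task ('rotation', 'zoom-in' or 'zoom-out')."""
    if task == "rotation":
        return abs(state.angle - target.angle) <= TOL_ANGLE
    return abs(state.scale - target.scale) <= TOL_SCALE

# Example rotation task: the target is 90 degrees; six "putar" commands reach it.
state, target = ImageState(), ImageState(angle=90.0)
for _ in range(6):
    apply_command(state, "putar")
box_colour = "blue" if within_boundary(state, target, "rotation") else "red"
task_complete = within_boundary(state, target, "rotation")   # image shown in grey
print(box_colour, task_complete)   # -> blue True
```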
Figure 3. Steps in the rotation task. a) Rotation commands are given to the colour image on the left screen, b) The colour image equals the position of the target image and reaches the blue line, c) The rotation task is completed successfully
Figure 4. Steps in the zoom-in task. a) Scale-up commands are given to the colour image, b) The colour image equals the size of the target image and reaches the blue line, c) The zoom-in task is completed successfully
Figure 5. Steps in the zoom-out task. a) Scale-down commands are given to the colour image, b) The colour image equals the size of the target image and reaches the blue line, c) The zoom-out task is completed successfully
5. Results and Analysis
Based on the WoZ experiment and post-interview session conducted, the objectives of the study were achieved, with the findings as follows:
5.1. Children's interaction ability using speech input
To achieve objective A, which is to verify children's ability to interact using speech input with the system, the participants were required to give commands, using their own words, to the colour image until it matched the position or size of the target image. Table 1 shows the children's speech interaction ability before and after being given the set of words.
Table 1. Children's Speech Interaction Ability Before and After Being Given the Set of Word Commands
                          Rotation               Zoom-in                Zoom-out
                          Object 1   Object 2    Object 1   Object 2    Object 1   Object 2
Use own word                  2          3           4          4           4          4
Given set of words            9          9           9          9           9          9
An initial analysis found that only two and three of the nine participants, respectively, could use their own words for objects 1 and 2 in the rotation task without assistance from the researcher. Meanwhile, four participants could give commands using their own words, without assistance from the researcher, for the zoom-in and zoom-out tasks on both objects. These findings show that fewer than 50% of the participants could use their own words; in fact, they took a long time to think of an appropriate word, and they asked the researcher for assistance with the words to be used. Yet, after the set of words was provided, all of the participants were able to complete all the tasks on their own within the given time. At the end of the WoZ experiment, each participant was interviewed to gain feedback about the prototype system using speech input. The results of the post-interview show a positive response from the participants to the prototype developed. All nine participants had fun and enjoyed their interaction with the system, and they agreed that interaction using speech is very easy. Eight out of nine participants agreed that they would interact with the application system if it were introduced in school learning; only one participant did not want to interact with the application, because of tiredness.
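As an illustration only (the paper does not describe its analysis tooling), counts such as those in Table 1 could be tallied from the checklist records with a few lines of Python; the record format below is hypothetical.

```python
# Hypothetical tally of checklist records into the Table 1 counts.
# Each record notes whether a participant completed a task object using their
# own word or only after being given the word set; the field layout is assumed.
from collections import Counter

records = [
    # (participant, task, object, used_own_word)
    ("P1", "rotation", 1, True), ("P1", "rotation", 2, False),
    ("P2", "zoom-in", 1, True),  ("P2", "zoom-out", 2, True),
    # ... remaining checklist rows ...
]

own_word_counts = Counter((task, obj) for _, task, obj, own in records if own)
for (task, obj), n in sorted(own_word_counts.items()):
    print(f"{task} object {obj}: {n} participant(s) used their own word")
```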
From Table 1, the number of participants who could use their own words for the rotation task is lower than for the zoom-in and zoom-out tasks. According to [13], children have weaknesses in selecting the appropriate word, and rotation gestures require a higher level of cognitive skill and more complex motor skills than the other multi-touch gestures [3]. Hence, most of the participants were not able to give commands using their own words and sought an adult's assistance to interact with the system when no words were given. This is because children need time to formulate a word, resulting in slow interaction with the system [16], [30]. Furthermore, children need practical training, since they learn information through several channels, namely the visual, auditory and kinesthetic channels [38]. This can be seen after the set of words was given: the participants' ability to interact with the system increased. From the analysis and discussion above, children's ability to interact with the multi-touch gestures system using speech was verified. The children's ability to interact using speech commands, together with the positive feedback on the prototype system, supports the requirement to develop the system in the future.
5.2. The number of syllables of speech commands
Based on Figure 6, the highest frequency was for word commands with two syllables, which were used by five to seven participants across the tasks. Moreover, not a single participant used a one-syllable word for the rotation task. These results show that children were comfortable with two-syllable words when giving commands to the system.
Children in this age group know how to differentiate the syllables of a word from the age of three [29]. In fact, syllabic understanding is part of the phonological awareness that should be mastered by children aged four to six years, as it determines their reading and spelling skills [28].
The reason it is important to obtain the number of syllables per word from children is the limited findings on speech commands to objects for performing touch gestures. Past studies [33], [39-40] applied only forms of non-speech vocalization (e.g., the pitch or tone of speech sounds) on touch-screen systems. This study, in contrast, requires speech in spoken-word form, rather than non-speech vocalization, to give commands to screen objects for performing multi-touch gestures. Therefore, the actual system development will use two-syllable word commands, based on the findings obtained.
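Because the design decision above hinges on the syllable count of each word command, one simple way to tabulate it is to count contiguous vowel groups, which is a reasonable approximation for Malay words such as "putar", "besar" and "kecil". The sketch below is illustrative only and was not the method used in the study; the extra example words ("pusing", "zum") are hypothetical commands.

```python
# Rough syllable counter for Malay word commands, approximating a syllable as a
# contiguous group of vowels (an assumption, not the study's procedure).
import re
from collections import Counter

VOWEL_GROUP = re.compile(r"[aeiou]+", re.IGNORECASE)

def count_syllables(word: str) -> int:
    """Approximate the number of syllables in a Malay word by counting vowel groups."""
    return len(VOWEL_GROUP.findall(word))

# Hypothetical word commands collected from the children.
commands = ["putar", "pusing", "besar", "kecil", "zum"]
frequency = Counter(count_syllables(w) for w in commands)
print(frequency)   # e.g. Counter({2: 4, 1: 1}) -> two-syllable words dominate
```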
Figure 6. Frequency of the number of syllables per word in the children's speech input
6. Conclusion
In conclusion, preschool-level children have the ability to interact with a multi-touch gestures system using speech input. The positive feedback from these children shows that they were satisfied with the speech input mode available in the system. The use of speech input as an alternative mode for multi-touch gestures appears to be beneficial in helping children overcome their cognitive skill limitations. The findings of this study are important for the further development of multimodal interaction using touch and speech modes, yet further research is necessary. Future work includes developing the actual system and involving more participants.
References
[1] Aziz NAA, Batmaz F, Stone R, Chung PWH. Selection of touch gestures for children’s applications: repeated experiment to increase reliability. International Journal of Advanced Computer Science and Applications. 2014; 5(4): 97-102.
[2] Ibharim LFM, Zaki NAA, Yatim MHM. Touch gesture interaction of preschool children towards games application using touch screen gadjet. Asia-Pacific Journal of Information Technology and Multimedia. 2015; 4(1): 47-58.
[3] Nacher V, Jaen J, Navarro E, Catala A, González P. Multi-touch gestures for pre-kindergarten
children. International Journal of Human-Computer Studies. 2015; 73: 37–51.
[4] Yu X, Zhang M, Xue Y, Zhu Z. An exploration of developing multi-touch virtual learning tools for young
children. 2010 2nd Int. Conference on Education Technology and Computer (ICETC). 2010; 3: 0-3.
[5] McKnight L, Cassidy B. Children's interaction with mobile touch-screen devices: experiences and
guidelines for design. International Journal of Mobile Human Computer Interaction. 2010; 2(2): 1-18.
[6] Hussain NH, Wook TSMT, Noor SFM, Mohamed H. Children’s interaction ability towards multi-touch gestures. International Journal on Advanced Science Engineering Information Technology. 2016; 6(6): 875-881.
[7] Vatavu R, Cramariuc G, Schipor DM. Touch interaction for children aged 3 to 6 years: experimental
findings and relationship to motor skills. International Journal of Human-Computer Studies. 2015; 74:
54–76.
[8] Ibharim LFM, Borhan N, Yatim MHM. A field study of understanding child’s knowledge, skills and interaction towards capacitive touch technology (iPad). 2013 8th International Conference on Information Technology in Asia - Smart Devices Trend: Technologising Future Lifestyle, Proceedings of CITA 2013. 2013: 6-10.
[9] Rogers Y, Sharp H, Preece J. Interaction Design: Beyond Human-Computer Interaction, Wiley. 2011.
[10] Harada S, Wobbrock JO, Landay JA. Beyond speech recognition: improving voice-driven access to
computers. Engineering. 2009: 3–4.
[11] Almeida N, Silva S, Teixeira A. Design and development of speech interaction: a methodology.
International Conference on Human-Computer Interaction. 2014; 370-381.
[12] Kiran P, Mohana HS, Vijaya PA. Human machine interface based on eye wink detection. International
Journal of Informatics and Communication Technology (IJ-ICT). 2013; 2(2): 116-23.
[13] Gossen T, Kotzyba M, Stober S, Andreas N. Voice-controlled search user interfaces for young users.
7th Annual Symposium on Human-Computer Interaction and Information Retrieval. 2013: 2-5.
[14] Kotzyba M, Siegert I, Gossen T, Wendemuth A, Nurnberger A. Exploratory voice-controlled search for
young users: challenges & potential benefits. Proceedings of 7th Annual Symposium on Human-
Computer Interaction and Information Retrieval. 2013.
[15] Lee S, Potamianos A, Narayanan S. Acoustics of children’s speech: developmental changes of
temporal and spectral parameters. The Journal of the Acoustical Society of America. 1999; 105(3):
1455–1468.
[16] Shneiderman B. The limits of speech recognition. Comm. ACM. 2000; 43: 63–65.
[17] Lovato S, Piper AM. Siri, is this you?: understanding young children's interactions with voice input systems. Proceedings of the 14th International Conference on Interaction Design and Children. 2015: 335-338.
[18] Tewari A, Canny J. What did spot hide? a question-answering game for preschool children. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI ’14. 2014: 1807-1816.
[19] Gerosa M, Giuliani D, Narayanan S, Potamianos A. A review of asr technologies for children’s speech.
Proceedings of the 2nd Workshop on Child Computer and Interaction WOCCI 09. 2009: 1–8.
[20] Farantouri V, Potamianos A, Narayanan S. Linguistic analysis of spontaneous children speech.
Proceedings of the Workshop on Child, Computer and Interaction. 2008.
[21] Yildirim S, Narayanan S, Byrd D, Khurana S. Acoustic analysis of preschool children’s speech.
Proceeding of 15th ICPhS. 2003: 949–952.
[22] Hamid BA, Izyan R, Abu A. Analisis akustik ruang vokal kanak-kanak Melayu [Acoustic analysis of the vowel space of Malay children]. Jurnal Bahasa. 2011; 11(1): 48-62.
[23] Fridin M. Storytelling by a kindergarten social assistive robot: a tool for constructive learning in
preschool education. Computers & Education. 2014; 70: 53–64.
[24] Kannetis T, Potamianos A. Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers. Proceedings of the 2009 International Conference on Multimodal Interfaces - ICMI-MLMI ’09. 2009: 39-46.
[25] Hamzah H, Samuel JN. Perkembangan Kanak-Kanak untuk Program Ijazah Sarjana Muda Perguruan [Child Development for the Bachelor of Teaching Programme]. Kumpulan Budiman Sdn. Bhd. Subang Jaya. 2009.
[26] Mclaughlin MR. Speech and Language Delay in Children. American Family Physician. 2011; 83:
1183–1188.
[27] American Academy of Pediatric Dentistry. Speech and Language Milestones. Pediatric dentistry.
2011; 33(6): 330.
[28] Yopp HK, Yopp RH. Phonological Awareness is Child's Play!. YC Young Children. 2009; 64(1): 1-9.
[29] Lanza JR, Flahive LK. Linguisystems Guide to Communication Milestones.
https://www.linguisystems.com/pdf/Milestonesguide.pdf.
[30] Green J, Nip I. Some Organization Principles in Early Speech Development. Speech Motor Control:
New Developments in Basic and Applied Research. 2010: 171-188.
[31] Gossen T, Hempel J, Nürnberger A. Find it if you can: usability case study of search engines for
young users. Personal and Ubiquitous Computing. 2013; 17(8): 1593–1603.
[32] Sim KC. Speak-As-You-Swipe (SAYS): a multimodal interface combining speech and gesture keyboard synchronously for continuous mobile text entry. ICMI ’12: Proceedings of the ACM International Conference on Multimodal Interaction. 2012: 555-560.
[33] Sakamoto D, Komatsu T, Igarashi T. Voice augmented manipulation: using paralinguistic information to manipulate mobile devices. Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI ’13. 2013: 69-78.
[34] Cai CJ. Adapting arcade games for learning. CHI ’13 Extended Abstracts on Human Factors in Computing Systems - CHI EA ’13. 2013: 2665-2670.
[35] White KF, Lutters WG. Behind the curtain: lessons learned from a wizard of oz field experiment. ACM
SIGGROUP Bulletin. 2003; 24(3): 129-135.
[36] Lazar J, Feng JH, Hochheiser H. Research Methods in Human-Computer Interaction. John Wiley & Sons. 2010.
[37] Preece J, Rogers Y, Sharp H, Benyon D, Holland S, Carey T. Human-Computer Interaction. Addison-Wesley Longman Ltd. 1994.
[38] Morgan H. Multimodal children’s e-books help young learners in reading. Early Childhood Education
Journal. 2013; 41(6): 477–483.
[39] Sporka AJ, Felzer T, Kurniawan SH, Poláček O, Haiduk P, MacKenzie IS. CHANTI: predictive text
entry using non-verbal vocal input. Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems-CHI '11. 2011: 2463-2472.
[40] Harada S, Wobbrock JO, Landay JA. Voicedraw: a hands-free voice-driven drawing application for
people with motor impairments. Proceedings of the 9th International ACM SIGACCESS Conference
on Computers and accessibility-Assets '07. 2007: 27-34.
Deep learning approach to DDoS attack with imbalanced data at the application...
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertainties
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensor
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networks
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imaging
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
 

Último

Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSUNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSrknatarajan
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdfankushspencer015
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 

Último (20)

Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
 
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSUNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
NFPA 5000 2024 standard .
NFPA 5000 2024 standard                                  .NFPA 5000 2024 standard                                  .
NFPA 5000 2024 standard .
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 

Speech Input as an Alternative Mode to Perform Multi-touch Gestures

Moreover, children tend to touch objects inaccurately, and their weak coordination of motor and visual skills prevents their gestures from reaching the target accurately [7]. Difficulty in performing multi-touch gestures can make learning applications ineffective and, in turn, affect learning outcomes. Therefore, this study proposes speech input as an alternative mode for performing multi-touch gestures successfully. Speech input minimizes children's reliance on manual finger control [10], which is limited by their still-developing motor skills. It also promotes natural interaction, since children use their own voice without any additional tools [11-12], and it is easy and quick to operate on a system [13-14].
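To make the idea of speech as an alternative input mode concrete, the following minimal sketch routes either a recognized spoken command or a conventional touch gesture to the same on-screen transformation. The Malay command words, handler names and step sizes are illustrative assumptions for this sketch, not details taken from the prototype described in this paper.

```python
# Minimal sketch: speech commands as an alternative route to multi-touch actions.
# Command words and step sizes below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ImageState:
    angle: float = 0.0   # rotation in degrees
    scale: float = 1.0   # zoom factor

# One shared set of actions, reachable from either input mode.
ACTIONS = {
    "rotate":   lambda s: setattr(s, "angle", (s.angle + 15.0) % 360.0),
    "zoom_in":  lambda s: setattr(s, "scale", s.scale * 1.1),
    "zoom_out": lambda s: setattr(s, "scale", s.scale / 1.1),
}

# Hypothetical spoken-word vocabulary mapped onto the same actions.
SPEECH_VOCABULARY = {
    "putar": "rotate",    # "rotate"
    "besar": "zoom_in",   # "bigger"
    "kecil": "zoom_out",  # "smaller"
}

def on_speech(word: str, state: ImageState) -> bool:
    """Apply the action tied to a recognized word; return False if unknown."""
    action = SPEECH_VOCABULARY.get(word.lower().strip())
    if action is None:
        return False
    ACTIONS[action](state)
    return True

def on_multitouch(gesture: str, state: ImageState) -> bool:
    """Apply the same action when a two-finger gesture is recognized instead."""
    if gesture not in ACTIONS:
        return False
    ACTIONS[gesture](state)
    return True

if __name__ == "__main__":
    img = ImageState()
    on_speech("putar", img)        # child says "putar" -> image rotates 15 degrees
    on_multitouch("zoom_in", img)  # the same effect could come from a pinch gesture
    print(img)
```

In the study reported here, the interpretation of the spoken word is performed by a human wizard rather than an automatic recognizer, but the end effect on the on-screen object follows the same idea of one shared set of actions.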
The next section describes several considerations related to speech input drawn from previous studies. Section 3 lists the objectives of this study, followed by Section 4, which presents the methods used to examine the appropriateness of speech input as an alternative. Section 5 presents the analysis and results. The final section summarizes the conclusion and potential future work.

2. Related Work
Speech is a basic, human-to-human mode of communication. Technological development has since introduced this mode into human-computer interaction, because speech promotes direct-manipulation interaction and requires minimal manual control [10]. Speech interaction can also be handled quickly and is convenient for users, especially children. Nevertheless, developing a speech input system for children is challenging, since children's speech characteristics differ from those of adults [15] and, indeed, vary from child to child. According to Shneiderman [16], designers can integrate speech input into a system effectively if they understand the speech characteristics and cognitive processes of the users involved. Thus, several criteria should be considered before speech is integrated into the system: children's ability to interact with the system using speech, children's language and speech development, children's cognitive constraints with respect to speech, and the effectiveness of speech input on touch user interfaces.

2.1. Children's interaction ability using speech input on the system
Previous research has examined children's interaction through speech input, including the use of speech commands by children with Apple's Siri on iOS [17], question-answering systems [18], web-search systems [13-14], spoken dialogue systems [19] and computer games [20]. Analyses from those studies show that young children (especially four- and five-year-olds) use polite words and give commands in descriptive sentences [13],[19]; for example, their speech commands often begin with polite phrases such as "Please..." or "Could you...". Several children also used high-pitched sounds, such as shouting, when giving commands, both out of excitement at interacting with a system that could respond and to make sure their commands were received [15],[17],[19]. Children often use a variety of words that carry the same meaning [13],[15],[21], reflecting their difficulty in selecting the most appropriate word. This changes as they gain experience; six-year-old children, for instance, are able to give short and specific commands [13]. Their speech development fluctuates, becomes more consistent with age and eventually approaches adult-like speech [15],[22]. Furthermore, studies [18] and [23-24] report children's feedback on speech commands: children said that they enjoyed and were excited by speech interaction [18],[23-24], and they expressed the intention to play more games and ask the system more questions [18], because speech interaction satisfies their fantasy and curiosity about the system.
They also want to explore the extent of the system's knowledge through their questions. Overall, most children agreed that they would interact with a speech system again in the future [18],[23-24]. Therefore, children's ability to interact using speech input needs to be investigated so that the speech integrated into the system contributes to performing multi-touch gestures successfully.

2.2. Speech and language development of the children
According to [25], preschool-age children are more sensitive to the use of speech and language. At this stage they already apply simple rules of grammar, such as personal pronouns (I, you, he/she) and directions (outside, inside, up, down). Their vocabulary grows from about 250 words at four years old to around 14,000 words at six years old, which allows them to construct sentences of four or more words [26-27]. Some of them can describe their experiences and activities at home and at school to other people, and their speech can be understood by those around them because they can produce most sounds correctly. Moreover, children's ability to segment words into syllables emerges early in their development [28], particularly around age three [29].
Kindergartens and preschools use a variety of engaging activities, such as word games, syllable clapping and songs, to build syllable awareness in children. Syllable awareness is part of phonological awareness, which is a key determinant of children's developing reading and spelling skills [28]. Phonological awareness is sensitivity to the sounds of language and is a difficult part of children's reading development. Hence, this study examines the number of syllables in the word commands used by children, so that the speech commands integrated into the system meet their needs.

2.3. Children's cognitive consideration towards speech input
Shneiderman [16] stated that speech interaction between human and computer differs from that between humans because it is tied to cognitive processes; human-computer speech interaction involves presenting and transferring information. For preschool children, limited cognitive skills affect how they interact using speech input. According to [19], children tend to pause between words when they talk, because they need more time to formulate the words in their speech [30]. To give speech commands to a system, they must have sufficient domain knowledge to produce an appropriate word, which is difficult for them since they process information slowly and are weak at identifying the information the system needs [31]. Therefore, this study provides a set of appropriate words that children can use when interacting with the speech system. Participants are first asked to produce their own word commands, so that the typical number of syllables can be obtained directly from them; they are assisted with the word set only if they do not know a suitable word.

2.4. The effectiveness of speech input on touch user interfaces
Previous studies have examined the effectiveness of speech input. A study in [32] proposed the Speak-As-You-Swipe (SAYS) multimodal interface, which enables text entry by combining voice with gestures on a virtual keyboard on mobile devices. The integration of swipe gestures and voice input was an alternative for resolving the slow text-entry process on the keyboard; the results show that word-prediction accuracy increased by 4% with the SAYS interface. In addition, voice augmented manipulation (VAM) was proposed in [33] to augment scrolling, zooming and panning in a mobile environment and to reduce repeated finger gestures during these three operations; the findings show that the technique helps users scroll, zoom and pan smoothly without repeating finger gestures. Next, [34] studied augmenting a Tetris arcade game with speech input for learning purposes, emphasizing a retrieval process that requires users to recall previously used input; the findings indicate that using augmented speech to repeatedly attempt recall from memory can improve long-term retention through retrieval practice. Meanwhile, another study proposes voice-based control of a search interface for children aged eight to ten [13-14].
Results from that study show that combining a voice-controlled interface with touch can enhance the usability of a web search engine for children and, at the same time, help children who have problems with writing. Given the performance improvements that alternative input modes have brought to interactive systems in previous research, the use of speech input as an alternative for problematic multi-touch gestures is expected to improve children's learning process.

3. Objectives of the Study
The aim of this study is to examine speech input as an alternative mode for addressing the difficulty preschool children have with multi-touch gestures. To achieve this aim, two objectives are set:
a) To verify children's ability to interact with a multi-touch gestures prototype using speech input.
b) To identify the number of syllables in a word of children's speech input.
4. Methodology
Two phases were involved in examining speech as an alternative mode for multi-touch gesture interaction: a Wizard-of-Oz experiment and a post-interview session. Both phases are important to ensure that the data collected are informative and of good quality.

4.1. Wizard-of-Oz experiment
To achieve objectives A and B, a study in the form of a Wizard-of-Oz experiment (WoZ) was designed to gather data on children's speech and interaction styles with a speech-based multi-touch gestures prototype. WoZ simulates a partially functional prototype whose functionality does not yet exist behind the application interface [35-36], and it is used to test limited prototype functionality before the final design is developed. The participant assumes they are interacting with a fully functional application, whereas it is actually controlled by a human operator behind a curtain, called the wizard [9],[36-37] (Figure 1). Four facilitators, including the researcher, were involved in this study: one acted as the wizard, the researcher stayed next to the participant to give additional instructions and help, and two other facilitators entertained the other participants in a nearby room.

Figure 1. Interaction between participant and the system
Figure 2. Speech-based multi-touch gestures system

The study was carried out at the Multimedia Studio of the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, with nine preschool children aged four years (3 children), five years (3 children) and six years (3 children) from the Emaan Kindy Bangi Avenue kindergarten in Kajang, Selangor. All participants had prior experience using gadgets (smartphones or tablets) and the ability to speak, based on selection by their class teacher. The kindergarten's consent was obtained before the study was conducted.

A checklist form was used as the data-collection instrument. The form is divided into three parts: Section A consists of 20 items (two on participant demographics and 18 on task completeness and time to complete each task); Section B contains six items to identify the number of syllables in a word of the children's speech; and Section C contains four items on the children's feedback about the prototype system, used in the post-interview session.

The devices used were a Dell monitor (for display to the participant), a speaker, a Dell Inspiron 13 7000 series notebook (controlled by the wizard) and an HDMI cable connecting the monitor and the notebook. A speech-based multi-touch gestures prototype application was developed for this study (Figure 2) to examine how a participant interacts using speech input. A set of appropriate words was provided to help participants give commands to the system. The language used throughout the study, for both the speech commands and the provided word sets, was Malay, to ensure that participants could speak as naturally as they do with friends or at home. The study took place in a quiet room isolated from the other participants; it is important to create an environment that allows the participant to interact naturally, attentively and with freedom of thought towards the system [12],[14].
The participant sat on a chair facing the computer screen, while the wizard sat behind the curtain facing the notebook, controlling the user interface of the system and capturing the participant's speech commands and reactions for each interaction. The researcher, seated next to the participant, recorded each interaction on the checklist form.

4.2. Post-interview
Each participant was interviewed by the researcher after the WoZ experiment to obtain their feedback on the system. This feedback is important as input for future system development, and each response was recorded on the checklist form.

4.3. Procedure
The researcher gave the participants verbal instructions introducing the prototype system, explaining how to use it, noting that there was no restriction on which word commands they could give, and describing what counts as a successfully completed task. The participants then followed voice instructions from the system. The first step was a pre-interview session by the system covering the participants' demographics. Next, the system explained the task activity to be completed and previewed how to interact with it. Using the task list provided, the participants were given 10 minutes to interact with the system using speech input only, and they were free to give any word command to perform multi-touch movements on the objects. There are six multi-touch gesture tasks to be completed across two objects. When a task was completed successfully, the system gave positive audiovisual feedback. If a participant did not know a suitable word, the researcher gave a clue; if he or she was still stuck, the researcher offered the set of appropriate words to choose from. After the experiment, the participants were interviewed about the speech system and their recommendations for improving it. Video and audio recordings were taken during the experiment for data-analysis purposes.

4.3.1. Multi-touch gestures' tasks
Two images appear on screen: a colour image (left) and a target image (right). The colour image represents the experimental object to be manipulated, while the target image is the reference. The participants are required to apply a multi-touch movement to the object on the left using only speech commands. There are three multi-touch gestures, rotation, zoom-in and zoom-out, as shown in Figures 3-5; each gesture has two different objects to complete, making six tasks in all. A sketch of the task mechanics follows the list below.
1) Rotation: The participants were required to give any rotation command (example: "Pu-tar") to the colour image until its orientation matched the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is judged successful when the colour image changes to grey.
2) Zoom-In: The participants were required to give any scale-up command (example: "Be-sar") to the colour image until its size matched the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is judged successful when the colour image changes to grey.
3) Zoom-Out: The participants were required to give any scale-down command (example: "Ke-cil") to the colour image until its size matched the target image. Guidance: the red box around the colour image changes to blue whenever the image reaches the specified boundary line. The gesture is judged successful when the colour image changes to grey.
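The paper does not give implementation details for the prototype's task logic, so the following is only a rough sketch, under assumed tolerances and step sizes, of how a wizard-driven task of this kind could track progress: each spoken command the wizard accepts nudges the colour image towards the target, the boundary cue switches from red to blue when the image is close, and the task is marked complete (image turned grey) once the match is within tolerance. All names and thresholds here are illustrative assumptions.

```python
# Rough sketch of one wizard-driven task (rotation or zoom); values are illustrative.
from dataclasses import dataclass

@dataclass
class GestureTask:
    kind: str                # "rotation", "zoom_in" or "zoom_out"
    target: float            # target angle in degrees, or target scale
    current: float = 0.0     # current angle or scale of the colour image
    tolerance: float = 5.0   # how close counts as "equal" to the target
    completed: bool = False

    def apply_command(self, step: float) -> None:
        """Called by the wizard each time an acceptable spoken command is heard."""
        if not self.completed:
            self.current += step
            if self.is_within_tolerance():
                self.completed = True  # the prototype would turn the image grey here

    def is_within_tolerance(self) -> bool:
        return abs(self.current - self.target) <= self.tolerance

    def boundary_colour(self) -> str:
        """Red box turns blue when the image approaches the boundary line."""
        near = abs(self.current - self.target) <= 2 * self.tolerance
        return "blue" if near else "red"

if __name__ == "__main__":
    # Rotation task: the child repeats "putar"; the wizard applies +15 degrees per command.
    task = GestureTask(kind="rotation", target=90.0)
    while not task.completed:
        task.apply_command(15.0)
        print(f"angle={task.current:>5.1f}  box={task.boundary_colour()}")
    print("Task completed: image turns grey")
```

In the actual study the "step" was effected by the wizard manipulating the interface directly rather than by automated speech recognition, which is exactly what the Wizard-of-Oz method is meant to simulate.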
Figure 3. Steps in the rotation task: a) rotation commands are given to the colour image on the left of the screen, b) the colour image matches the position of the target image and reaches the blue line, c) the rotation task is completed successfully
Figure 4. Steps in the zoom-in task: a) scale-up commands are given to the colour image, b) the colour image matches the size of the target image and reaches the blue line, c) the zoom-in task is completed successfully
Figure 5. Steps in the zoom-out task: a) scale-down commands are given to the colour image, b) the colour image matches the size of the target image and reaches the blue line, c) the zoom-out task is completed successfully

5. Results and Analysis
Based on the WoZ experiment and the post-interview session, the objectives of the study were addressed by the following findings.

5.1. Children's interaction ability using speech input
To achieve objective A, verifying children's ability to interact with the system using speech input, the participants were required to give commands in their own words to the colour image until it matched the position or size of the target image. Table 1 shows the children's speech interaction ability before and after the set of words was given.
Table 1. Children's speech interaction ability before and after the set of word commands was given (number of participants out of nine)

                        Rotation            Zoom-in             Zoom-out
                        Obj. 1   Obj. 2     Obj. 1   Obj. 2     Obj. 1   Obj. 2
Use own word              2        3          4        4          4        4
Given set of words        9        9          9        9          9        9

An initial analysis found that only two and three of the nine participants could use their own words, without assistance from the researcher, for objects 1 and 2 of the rotation task. Four participants could give commands in their own words without assistance for the zoom-in and zoom-out tasks on both objects. These findings show that fewer than 50% of the participants could use their own words; they took a long time to think of an appropriate word and asked the researcher for help with the words to use. Yet after the set of words was provided, all (100%) of the participants were able to complete every task on their own within the given time.

At the end of the WoZ experiment, each participant was interviewed for feedback about the speech-input prototype. The post-interview results show a positive response to the prototype: all nine participants had fun and enjoyed interacting with the system, and they agreed that interaction using speech is very easy. Eight of the nine participants agreed that they would interact with such an application if it were introduced for learning in school, while one participant declined because of tiredness.

From Table 1, fewer participants could use their own words for the rotation task than for the zoom-in and zoom-out tasks. According to [13], children are weak at selecting the appropriate word, and rotation gestures demand a higher level of cognitive skill and more complex motor skills than other multi-touch gestures [3]. Hence, most of the participants could not give commands in their own words and sought an adult's assistance when no words were given, because children need time to formulate a word, which slows their interaction with the system [16],[30]. Furthermore, children need practical training, since they learn information through several channels, namely visual, auditory and kinesthetic [38]; this can be seen after the set of words was given, when the participants' ability to interact with the system increased. From the analysis and discussion above, children's ability to interact with the multi-touch gestures system using speech was verified. These findings, together with the positive feedback on the prototype, support the requirement for the system to be developed further in the future.
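As a small worked check of the figures quoted above, the success rates per task follow directly from the Table 1 counts over the nine participants. This snippet is only a hypothetical helper, not part of the study's analysis procedure.

```python
# Success rates per task derived from Table 1 (n = 9 participants).
N = 9
own_word = {"rotation": (2, 3), "zoom_in": (4, 4), "zoom_out": (4, 4)}
given_set = {"rotation": (9, 9), "zoom_in": (9, 9), "zoom_out": (9, 9)}

for task, counts in own_word.items():
    for obj, count in enumerate(counts, start=1):
        with_set = given_set[task][obj - 1]
        print(f"{task} object {obj}: own word {count}/{N} = {count / N:.0%}, "
              f"given set {with_set}/{N} = {with_set / N:.0%}")
# rotation object 1: own word 2/9 = 22%, given set 9/9 = 100%
# ... every own-word rate stays below 50%, while the given word set reaches 100%.
```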
5.2. The number of syllables of speech commands

Figure 6. Frequency of syllables number in a word of speech input

Based on Figure 6, the most frequent pattern was word commands of two syllables, used by five to seven participants per task, and not a single participant used a one-syllable word for the rotation task. These results show that the children were comfortable giving two-syllable word commands to the system. Children in this age group have been able to distinguish the syllables of a word since the age of three [29]. In fact, syllabic understanding is part of the phonological awareness that children aged four to six should master, as it shapes their reading and spelling skills [28]. Collecting the number of syllables in children's word commands matters because findings on spoken commands for manipulating on-screen objects with touch gestures are limited: past studies [33],[39-40] applied only non-speech vocalization (e.g., the pitch or tone of speech sounds) to touch-screen systems, whereas this study requires speech in the form of spoken words to command on-screen objects when performing multi-touch gestures. Therefore, the actual system will be developed with two-syllable command words, based on the findings obtained.
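The paper does not describe how syllables were counted (the researcher tallied them on the checklist form), so the following is only a rough illustrative heuristic for Malay command words: count vowel nuclei, treating the common diphthongs as single nuclei. It is an assumption-laden approximation, not the study's method, and the longer command word in the example is hypothetical.

```python
# Rough heuristic for counting syllables in Malay command words (illustrative only).
# Malay syllables are built around vowel nuclei; "ai", "au" and "oi" are diphthongs.
import re

def count_syllables_malay(word: str) -> int:
    w = word.lower()
    diphthongs = len(re.findall(r"ai|au|oi", w))   # each diphthong is one nucleus
    vowels = len(re.findall(r"[aeiou]", w))        # every vowel letter found
    return max(1, vowels - diphthongs)

if __name__ == "__main__":
    for cmd in ["putar", "besar", "kecil", "kecilkan"]:  # "kecilkan" is a hypothetical longer command
        print(cmd, count_syllables_malay(cmd))
    # putar 2, besar 2, kecil 2, kecilkan 3
```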
6. Conclusion
In conclusion, preschool children have the ability to interact with a multi-touch gestures system using speech input. The positive feedback from these children shows that they were satisfied with the speech input mode available in the system. Using speech input as an alternative mode for multi-touch gestures appears beneficial in helping children overcome their cognitive skill limitations. The findings of this study are important for the further development of multimodal interaction using touch and speech modes, yet further research is necessary. Future work includes developing the actual system and involving more participants.

References
[1] Aziz NAA, Batmaz F, Stone R, Chung PWH. Selection of touch gestures for children's applications: repeated experiment to increase reliability. International Journal of Advanced Computer Science and Applications. 2014; 5(4): 97-102.
[2] Ibharim LFM, Zaki NAA, Yatim MHM. Touch gesture interaction of preschool children towards games application using touch screen gadget. Asia-Pacific Journal of Information Technology and Multimedia. 2015; 4(1): 47-58.
[3] Nacher V, Jaen J, Navarro E, Catala A, González P. Multi-touch gestures for pre-kindergarten children. International Journal of Human-Computer Studies. 2015; 73: 37-51.
[4] Yu X, Zhang M, Xue Y, Zhu Z. An exploration of developing multi-touch virtual learning tools for young children. 2010 2nd International Conference on Education Technology and Computer (ICETC). 2010; 3: 0-3.
[5] McKnight L, Cassidy B. Children's interaction with mobile touch-screen devices: experiences and guidelines for design. International Journal of Mobile Human Computer Interaction. 2010; 2(2): 1-18.
[6] Hussain NH, Wook TSMT, Noor SFM, Mohamed H. Children's interaction ability towards multi-touch gestures. International Journal on Advanced Science Engineering Information Technology. 2016; 6(6): 875-881.
[7] Vatavu R, Cramariuc G, Schipor DM. Touch interaction for children aged 3 to 6 years: experimental findings and relationship to motor skills. International Journal of Human-Computer Studies. 2015; 74: 54-76.
[8] Ibharim LFM, Borhan N, Yatim MHM. A field study of understanding child's knowledge, skills and interaction towards capacitive touch technology (iPad). 2013 8th International Conference on Information Technology in Asia - Smart Devices Trend: Technologising Future Lifestyle, Proceedings of CITA 2013. 2013: 6-10.
[9] Rogers Y, Sharp H, Preece J. Interaction Design: Beyond Human-Computer Interaction. Wiley. 2011.
[10] Harada S, Wobbrock JO, Landay JA. Beyond speech recognition: improving voice-driven access to computers. Engineering. 2009: 3-4.
[11] Almeida N, Silva S, Teixeira A. Design and development of speech interaction: a methodology. International Conference on Human-Computer Interaction. 2014: 370-381.
[12] Kiran P, Mohana HS, Vijaya PA. Human machine interface based on eye wink detection. International Journal of Informatics and Communication Technology (IJ-ICT). 2013; 2(2): 116-123.
[13] Gossen T, Kotzyba M, Stober S, Nürnberger A. Voice-controlled search user interfaces for young users. 7th Annual Symposium on Human-Computer Interaction and Information Retrieval. 2013: 2-5.
[14] Kotzyba M, Siegert I, Gossen T, Wendemuth A, Nürnberger A. Exploratory voice-controlled search for young users: challenges & potential benefits. Proceedings of the 7th Annual Symposium on Human-Computer Interaction and Information Retrieval. 2013.
[15] Lee S, Potamianos A, Narayanan S. Acoustics of children's speech: developmental changes of temporal and spectral parameters. The Journal of the Acoustical Society of America. 1999; 105(3): 1455-1468.
[16] Shneiderman B. The limits of speech recognition. Communications of the ACM. 2000; 43: 63-65.
[17] Lovato S, Piper AM. Siri, is this you?: understanding young children's interactions with voice input systems. Proceedings of the 14th International Conference on Interaction Design and Children. 2015: 335-338.
[18] Tewari A, Canny J. What did spot hide? a question-answering game for preschool children. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI '14. 2014: 1807-1816.
[19] Gerosa M, Giuliani D, Narayanan S, Potamianos A. A review of ASR technologies for children's speech. Proceedings of the 2nd Workshop on Child, Computer and Interaction - WOCCI '09. 2009: 1-8.
[20] Farantouri V, Potamianos A, Narayanan S. Linguistic analysis of spontaneous children speech. Proceedings of the Workshop on Child, Computer and Interaction. 2008.
[21] Yildirim S, Narayanan S, Byrd D, Khurana S. Acoustic analysis of preschool children's speech. Proceedings of the 15th ICPhS. 2003: 949-952.
[22] Hamid BA, Izyan R, Abu A. Analisis akustik ruang vokal kanak-kanak melayu (Acoustic analysis of the vowel space of Malay children). Jurnal Bahasa. 2011; 11(1): 48-62.
[23] Fridin M. Storytelling by a kindergarten social assistive robot: a tool for constructive learning in preschool education. Computers & Education. 2014; 70: 53-64.
[24] Kannetis T, Potamianos A. Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers. Proceedings of the 2009 International Conference on Multimodal Interfaces - ICMI-MLMI '09. 2009: 39-46.
[25] Hamzah H, Samuel JN. Perkembangan Kanak-Kanak untuk Program Ijazah Sarjana Muda Perguruan (Child Development for the Bachelor of Teaching Programme). Kumpulan Budiman Sdn. Bhd. Subang Jaya. 2009.
[26] Mclaughlin MR. Speech and language delay in children. American Family Physician. 2011; 83: 1183-1188.
[27] American Academy of Pediatric Dentistry. Speech and language milestones. Pediatric Dentistry. 2011; 33(6): 330.
[28] Yopp HK, Yopp RH. Phonological awareness is child's play!. YC Young Children. 2009; 64(1): 1-9.
[29] Lanza JR, Flahive LK. LinguiSystems Guide to Communication Milestones. https://www.linguisystems.com/pdf/Milestonesguide.pdf.
[30] Green J, Nip I. Some organization principles in early speech development. Speech Motor Control: New Developments in Basic and Applied Research. 2010: 171-188.
[31] Gossen T, Hempel J, Nürnberger A. Find it if you can: usability case study of search engines for young users. Personal and Ubiquitous Computing. 2013; 17(8): 1593-1603.
[32] Sim KC. Speak-As-You-Swipe (SAYS): a multimodal interface combining speech and gesture keyboard synchronously for continuous mobile text entry. Proceedings of the ACM International Conference on Multimodal Interaction - ICMI '12. 2012: 555-560.
[33] Sakamoto D, Komatsu T, Igarashi T. Voice augmented manipulation: using paralinguistic information to manipulate mobile devices. Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI '13. 2013: 69-78.
[34] Cai CJ. Adapting arcade games for learning. CHI '13 Extended Abstracts on Human Factors in Computing Systems - CHI EA '13. 2013: 2665-2670.
[35] White KF, Lutters WG. Behind the curtain: lessons learned from a wizard of oz field experiment. ACM SIGGROUP Bulletin. 2003; 24(3): 129-135.
[36] Lazar J, Feng JH, Hochheiser H. Research Methods in Human-Computer Interaction. John Wiley & Sons. 2010.
[37] Preece J, Rogers Y, Sharp H, Benyon D, Holland S, Carey T. Human-Computer Interaction. Addison-Wesley Longman Ltd. 1994.
[38] Morgan H. Multimodal children's e-books help young learners in reading. Early Childhood Education Journal. 2013; 41(6): 477-483.
[39] Sporka AJ, Felzer T, Kurniawan SH, Poláček O, Haiduk P, MacKenzie IS. CHANTI: predictive text entry using non-verbal vocal input. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '11. 2011: 2463-2472.
[40] Harada S, Wobbrock JO, Landay JA. VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments. Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility - Assets '07. 2007: 27-34.