Advanced Gaze Visualizations for Three-dimensional Virtual Environments

Sophie Stellmach*, Lennart Nacke**, Raimund Dachselt*

* User Interface & Software Engineering Group, Faculty of Computer Science, Otto-von-Guericke-Universität Magdeburg, D-39106 Magdeburg, Germany
** Game Systems and Interaction Research Laboratory, School of Computing, Blekinge Institute of Technology, SE-37225 Ronneby, Sweden

stellmach@acm.org, dachselt@acm.org, lennart.nacke@acm.org


Abstract

Gaze visualizations represent an effective way for gaining fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies for three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models of interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. We also discuss a prototype toolkit that implements our proposed techniques. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.

CR Categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Evaluation/Methodology; I.4.8 [Image Processing and Computer Vision]: Scene Analysis

Keywords: Gaze visualizations, eye tracking, eye movements, attentional maps, scan paths, virtual environments, three-dimensional

Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org.
ETRA 2010, Austin, TX, March 22 – 24, 2010.
© 2010 ACM 978-1-60558-994-7/10/0003 $10.00

1 Introduction

For investigating gaze behavior, large sets of numerical data have to be collected and evaluated, such as eye positions, calculated fixations and saccades, as well as total fixation times [Ramloll et al. 2004]. Consequently, gaze visualizations have helped in detecting, understanding, and interpreting essential relationships in such multivariate and multidimensional data. Superimposed techniques in particular have proved useful for conveying how a viewer's gaze behaved with respect to an underlying stimulus. Common two-dimensional (2D) gaze visualization techniques include scan paths, attentional maps, and areas of interest.

While traditional diagnostic eye tracking studies have focused on a person's gaze behavior while reading texts or observing images, new application areas have emerged with the advancement of technology. This includes the perception of digital content such as websites, user interfaces, digital games, and three-dimensional (3D) virtual environments (VEs). However, current gaze visualization techniques do not adequately support interactive visual analysis of 3D VEs. On the one hand, they are not well adapted to the challenges posed by 3D space (e.g., occlusion problems or varying viewpoints). On the other hand, they do not allow for quickly determining which parts of a 3D model were viewed the longest. This lack of suitable techniques currently results in a time-consuming evaluation of eye tracking studies in 3D VEs. Usually, video recordings of the displayed content have to be examined frame-by-frame, with gaze data represented as superimposed gaze plots. Thus, more suitable techniques are required for faster visual gaze analysis in various application domains (e.g., virtual interactive trainings [Duchowski et al. 2000], social networking environments, and computer-aided design). Such techniques could, for example, assist in quickly examining 3D product design features or people's attentional foci when confronted with a critical situation.

This paper makes a methodological and technological contribution to the investigation of visual attention in 3D VEs. To facilitate eye tracking studies in 3D VEs, we propose a set of adapted and novel gaze visualization techniques: superimposed 3D scan paths and 3D attentional maps, as well as a models of interest timeline depicting viewed models. We have also developed a gaze analysis tool for the evaluation of eye tracking studies in static 3D VEs. The techniques and the tool are described together with related work in the following section. In Section 3, we discuss a survey conducted among eye tracking and visualization experts to assess the usefulness of our proposed techniques.

2 Gaze Visualization Techniques for Three-dimensional Virtual Environments

Common gaze visualization techniques for evaluating eye tracking studies include scan paths and attentional maps, but these fall short when it comes to 3D VEs. For example, the gaze behavior of customers in a virtual 3D store could be evaluated with regard to product and advertisement placements. Potential research questions could ask in which order items were observed or how visual interest is generally distributed. For answering questions like these, we introduce a set of gaze visualization techniques adapted for static 3D VEs. These techniques include superimposed visualizations of 3D scan paths and 3D attentional maps, as well as a models of interest timeline depicting the order of fixated objects. We assume a 3D scene with several static models, which users can freely explore and for which sample gaze data was collected. Furthermore, we presume the absence of transparent phenomena (e.g., fog or glass). Our approach uses 3D gaze positions, which are calculated as intersections between gaze rays and viewed models (see Section 2.4).

2.1 Three-dimensional Scan Paths

A common approach for depicting eye tracking data is the scan path visualization superimposed on a stimulus representation. Scan paths show the order of eye movements by drawing connected lines (saccades) between subsequent gaze positions (fixations). Fixation and saccade plots imply data clustering by grouping infor-
mation into meaningful units (fixations). While fixations indicate aspects attracting an observer's attention, saccades provide information about how fixations are related to each other. Fixations are usually displayed as circles whose size varies with fixation duration, and saccades as straight lines.

Scan paths are frequently used for static (e.g., images and texts) and dynamic (e.g., videos) 2D stimuli. For other stimuli, superimposed fixation plots are usually applied to recorded videos of the presented content. This often implies a time-consuming frame-by-frame video data analysis. One solution proposed by Ramloll et al. [2004] is a fixation net for dynamic 3D non-stereoscopic content, for which gaze positions are mapped onto flattened objects. Duchowski et al. [2000] presented binocular scan paths in 3D VEs, for which 2D gaze positions and depths (gaze depth paths) are depicted.

In contrast to Duchowski et al., we propose a monocular scan path depicting 3D gaze positions as intersections of a gaze ray and a viewed model. Besides traditional spherical representations (see Figure 1a), we use conical fixation representations pointing at the corresponding gaze position (see Figure 1b), since a cone can integrate additional information about varying camera positions. Cones can thus represent gaze positions (apex), fixation durations (base size), viewing directions (orientation), and viewing distances (height) within one representation. The traditional saccade representation can cause problems in 3D, because saccade lines may cross through surfaces. A simple solution we propose is to maintain the traditional saccade representation while allowing the rendering options of 3D models to be adapted (e.g., wireframe rendering or hiding objects) for determining linked fixations.

Figure 1: Two alternative fixation representations, spheres (a) and cones (b), for 3D scan paths.

It is also important to visualize how the location and viewing direction of the virtual camera changed during observation. This may aid in finding out, for example, whether a scene was observed from diverse locations. We propose to visualize the camera path with traces pointing at the respective gaze positions. Figure 2 shows an example of a camera path in which the camera locations are depicted as red lines and viewing directions as straight blue lines. Displayed scan and camera paths can be filtered by means of the models of interest timeline to provide a better overview.

Figure 2: Two examples of camera paths (viewpoints and viewing directions) and fixations.

2.2 Models of Interest Timeline

A timeline visualization maps data against time and facilitates finding events before, after, or during a given time interval. Thus, it can be used to answer several questions, such as: Has object x been observed repeatedly? In what order and for how long were objects looked at? In this regard, the Object vs. Time View [Lessing and Linge 2002] is a series of timelines for depicting Areas of Interest (AOIs). Each AOI is assigned a time row, resulting in an AOI matrix. With an increasing number of AOIs and shorter fixation times, the table grows and may hinder a good overview.

Therefore, we propose a space-filling models of interest (MOI) timeline (see Figure 3) as a compact illustration that effectively utilizes the assigned screen space by omitting voids. We suggest using the term Model of Interest to describe distinct 3D objects. The MOI timeline gives an overview of a user's gaze distribution based on viewed models. Each model is labeled with a specific color, which can be manually adapted. By assigning the same color value to different objects, semantic groups can be defined, for example, for grouping similar-looking models, object hierarchies, or closely arranged items. A legend provides an overview of the assigned colors (see the left area of Figure 5).

Zooming and selection techniques may aid in making data visible that would otherwise be suppressed due to limited display sizes. Based on TimeZoom [Dachselt and Weiland 2006], the MOI timeline can be dragged to a desired position to offer horizontal scrolling, and it supports continuous zooming (e.g., via a zoom slider, see Figure 3). For this purpose, the viewing area is divided into a large detail area for zooming (see Figure 3) and a small overview area displaying the entire collection. Displayed scan and camera paths can be filtered with respect to a selected period: navigation markers appear when the path visualizations are employed (see Figure 3) and are hidden otherwise. The user can simply click and drag the markers to define intervals of interest within the timeline. While dragging the timeline, the user can play back the data of interest defined by this fixed interval. Finally, additional details for a timeline entry (e.g., object identifiers, start and finish times) can be displayed when hovering over it with the mouse cursor. Context menus, triggered by right-clicking on items, allow, for instance, changing an item's color.

2.3 Three-dimensional Attentional Maps

An attentional map (or heatmap) is an aggregated representation depicting areas of visual attention, primarily for static 2D stimuli. This is substantiated by the fact that an attentional map usually has the same dimensions (width and height) as the underlying stimulus [Wooding 2002]. Since fixations are merely accumulated, attentional maps do not provide information about gaze sequences, but they are suitable for indicating areas attracting visual attention over a certain period of time. Visualizing gaze data directly in the 3D VE enables an aggregated representation of longer observations (in contrast to the traditional frame-by-frame evaluation).

We propose 3D attentional maps as superimposed aggregated visualizations for virtual 3D stimuli: projected, object-based, and surface-based attentional representations (see Figure 4). A projected attentional map is a 2D representation of 3D gaze distributions for an arbitrary selected view (e.g., a top view as in Figure 4a). For an object-based attentional representation, one color is mapped to the surface of each model based on its received visual attention (see Figure 4b). This allows for quickly evaluating an object's visual attractiveness while maintaining information about the spatial relationship with other models (e.g., two adjacent models have received high visual attention). Surface-based attentional maps dis-
play aggregated fixation data as heatmaps directly on a model's surface using a vertex-based mapping (see Figure 4c). This provides detailed insights into gaze distributions across models' surfaces, allowing quick conclusions about which aspects of a model attracted visual interest.

Figure 3: An example of the models of interest timeline with selection markers for confining displayed scan and camera paths.

Figure 4: Advanced attentional maps for application in three-dimensional virtual environments: (a) projected (top view), (b) object-based, (c) surface-based.

A combination of these techniques offers different levels of detail for data investigation. While a projected heatmap may give an overview of the gaze distribution across a scene, an object-based attentional map can provide information about the models' visual attractiveness when zooming in. Finally, the surface-based technique allows for close examination of viewed models.

2.4 Implementation

For the implementation of the presented gaze visualization techniques, a virtual 3D scene was required, for which we used Microsoft's XNA Game Studio 3.0 (based on C#). Our system was confined to static 3D VEs without any transparent phenomena (e.g., smoke or semi-transparent textures). Users can freely explore the scene by moving their camera viewpoint via mouse and keyboard controls. In addition, an integration of the Tobii 1750 Eye Tracker with XNA allowed logging 3D gaze target positions on models' surfaces. For this purpose, a 3D collision ray is determined from the 2D screen-based gaze positions supplied by the eye tracker. The gaze ray is used to calculate and log its intersection with virtual objects at the precision level of a polygonal triangle on a model [Möller and Trumbore 2005]. The processed data were stored in log files for post-analysis.

The presented visualization techniques were implemented in a prototype gaze analysis tool: SVEETER (see Figure 5). It is based on XNA (for 3D scenes) and Windows Forms, in order to benefit from existing interface elements (e.g., buttons and menus). SVEETER offers a coherent framework for loading 3D scenes and corresponding gaze data logs, as well as for deploying the adapted gaze visualization techniques. Multiple views are integrated to look at a scene from different viewpoints. Context menus offer the possibility to apply certain options, such as adapted rendering options, to each view. The figures used throughout this paper to illustrate the different gaze visualization techniques were created with SVEETER.

Figure 5: A screenshot from SVEETER illustrating multiple views of a scene. The MOI timeline is shown in the lower area with its legend displayed to the upper left.

3 User Study

We conducted a user study with eye tracking and visualization experts to assess the usefulness of the presented techniques. The method and results are discussed in the following paragraphs.

Participants. Group 1 consisted of 20 eye tracking professionals and researchers, aged between 23 and 52 years (Mean [M] = 34.50). Participants in this group rated their general eye tracking knowledge¹ higher than average (M = 3.85, Standard Deviation [SD] = 0.96). Group 2 included 8 local data visualization and computer graphics experts, aged between 25 and 35 years (M = 28.25). This group rated their eye tracking knowledge below average (M = 1.25, SD = 1.09), but their expertise in computational visualization above average (M = 3.63, SD = 1.22).

Measures. The usefulness of the gaze visualization techniques was investigated in an online survey². Each technique was briefly described with screenshots. Respondents were asked to rate their agreement with statements such as "Cone visualizations are useful for representing fixations in virtual environments." The qualitative part of the survey collected comments about usefulness, improvements, and possible applications of the techniques.

¹ Rated on a Likert scale from 1 (poor) to 5 (excellent).
² The survey included 19 Likert scales, from 1 (do not agree at all) to 5 (extremely agree), and 6 qualitative open questions, using LimeSurvey (Version 1.80+).

Procedure and Design. Group 1 answered the questions based on brief textual and pictorial descriptions of the gaze visualization techniques provided in the online survey. For a better understanding of the new gaze visualizations, group 2 could use the SVEETER tool at our university. After welcoming each partici-
pant of group 2, a short introduction to visual gaze analysis was provided and the tool was briefly presented. Following this, sample gaze data for a 3D scene and a set of 9 predefined tasks were given to each individual. The aim was systematic acquaintance with the techniques, not an efficiency evaluation. Tasks included, for example, finding out whether particular parts of an object received high visual attention and whether observers changed their viewpoints. After completing the tasks and familiarizing themselves with the different techniques, the visualization experts were asked to fill out the online survey. On average, each session took about 45 minutes.

Results. Means and standard deviation values are omitted in this paper, since none of the subjective results showed significant differences between group 1 and group 2. In general, participants agreed that gaze visualization techniques facilitate eye tracking analysis. The detailed results are shown in Figure 6.

Figure 6: Agreement ratings from both groups with the corresponding standard deviations.

Scan paths were generally regarded as useful for studying gaze sequences. In this context, depicting camera paths was regarded as important as well. Thus, a combination of camera and scan paths was found useful by both groups. In contrast, cones were considered only moderately suitable for representing fixations. Instead, a simple combination of the spherical fixation representations with viewing directions and positions was preferred. While the motivation for individually depicting fixations and saccades was not evident to everybody, it was regarded as useful in the overall rating.

Besides limiting scan and camera paths temporally via the MOI timeline, additional filtering functionality was requested. Participants agreed that the MOI timeline helps to detect gaze patterns. Group 2 showed great interest in the MOI timeline when testing SVEETER. Both groups agreed that a zoomable user interface is convenient for the timeline. Filtering objects of interest in the timeline was rated beneficial. Suggested improvements for the timeline included substituting color identification with iconic images and improving suitability for scenes with many objects by grouping them dynamically or individually.

The SVEETER tool, combining scan paths, attentional maps, and the MOI timeline, was considered highly useful. Thus, interviewees could imagine using SVEETER for evaluating eye tracking studies in 3D VEs. Multiple views were rated practical, together with the ability to assign different visualizations to each view. We observed that users testing SVEETER (group 2) often employed a combination of different techniques. Both scan paths and surface-based maps were, for example, used to investigate observation patterns on a model. Participants frequently faded out objects to gain a better overview. In addition, we noticed that the MOI timeline was often used instead of scan paths.

4 Conclusion and Future Work

In this paper, we presented a set of advanced gaze visualization techniques for investigating visual attention in 3D VEs. Three-dimensional scan paths assist in examining sequences of fixations and saccades, while camera paths provide valuable information about how a user navigated through a scene and from which locations objects were observed. The models of interest timeline may help answer, for example, whether an object was viewed at a certain point in time or whether any cyclic viewing behavior existed. Finally, three types of attentional maps were discussed to examine how visual attention is distributed across a scene (projected heatmap), among 3D models (object-based attentional map), and across a model's surface (surface-based heatmap). A combination of these techniques was integrated in a toolkit for the visual analysis of gaze data (SVEETER). The survey results from eye tracking and visualization experts indicate the techniques' potential usefulness for various application areas.

Since visual gaze analysis for 3D VEs is still at an early stage, our techniques may serve as a solid foundation and provide a basis for further development. This includes representing data from multiple users as well as developing and testing alternative techniques.

References

Dachselt, R., and Weiland, M. 2006. TimeZoom: A flexible detail and context timeline. In CHI '06: Extended Abstracts on Human Factors in Computing Systems, ACM, 682–687.

Duchowski, A. T., Shivashankaraiah, V., Rawls, T., Gramopadhye, A. K., Melloy, B., and Kanki, B. 2000. Binocular eye tracking in virtual reality for inspection training. In ETRA '00, ACM, 89–96.

Lessing, S., and Linge, L. 2002. IICap: A new environment for eye tracking data analysis. Master's thesis, University of Lund, Sweden.

Möller, T., and Trumbore, B. 2005. Fast, minimum storage ray/triangle intersection. In SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, ACM.

Ramloll, R., Trepagnier, C., Sebrechts, M., and Beedasy, J. 2004. Gaze data visualization tools: Opportunities and challenges. In Eighth International Conference on Information Visualisation (July), 173–180.

Wooding, D. S. 2002. Fixation maps: Quantifying eye-movement traces. In ETRA '02, ACM, 31–36.

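As an illustration of the per-triangle gaze-ray intersection described in Section 2.4, the Möller-Trumbore ray/triangle test can be sketched as follows. This is a minimal, self-contained Python version for exposition only; the paper's system implements the test in C# on top of XNA, and all function and variable names below are our own, not the authors'.

```python
# Illustrative Möller-Trumbore ray/triangle intersection (pure Python).
# Returns the distance t along the ray to the hit point, or None on a miss.
# The 3D gaze position is then origin + t * direction.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect(origin, direction, v0, v1, v2, eps=1e-9):
    edge1 = sub(v1, v0)
    edge2 = sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:        # hit outside triangle (barycentric u)
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:    # hit outside triangle (barycentric v)
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None  # intersections behind the origin are misses

# A gaze ray straight down the z-axis hitting a unit triangle at z = 1:
t = intersect((0.25, 0.25, 0.0), (0.0, 0.0, 1.0),
              (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
print(t)  # -> 1.0
```

The returned distance t yields the 3D gaze position as origin + t · direction; a gaze logger in the spirit of Section 2.4 would run this test against the triangles of each candidate model and keep the hit with the smallest positive t.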
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environments

Advanced Gaze Visualizations for Three-dimensional Virtual Environments

Sophie Stellmach*, Lennart Nacke**, Raimund Dachselt*

*User Interface & Software Engineering Group, Faculty of Computer Science, Otto-von-Guericke-Universität Magdeburg, D-39106 Magdeburg, Germany
**Game Systems and Interaction Research Laboratory, School of Computing, Blekinge Institute of Technology, SE-37225 Ronneby, Sweden
stellmach@acm.org, dachselt@acm.org, lennart.nacke@acm.org

Abstract

Gaze visualizations represent an effective way of gaining fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies for three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models of interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. A prototype toolkit is also discussed which combines an implementation of our proposed techniques. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.

CR Categories: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Evaluation/Methodology; I.4.8 [Image Processing and Computer Vision]: Scene Analysis

Keywords: gaze visualizations, eye tracking, eye movements, attentional maps, scan paths, virtual environments, three-dimensional

Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org. ETRA 2010, Austin, TX, March 22–24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00

1 Introduction

For investigating gaze behavior, large sets of numerical data have to be collected and evaluated, such as eye positions, calculated fixations and saccades, as well as total fixation times [Ramloll et al. 2004]. Consequently, gaze visualizations have helped in detecting, understanding, and interpreting essential relationships in such multivariate and multidimensional data. Superimposed techniques in particular have proved useful for conveying how a viewer's gaze behaved with respect to an underlying stimulus. Common two-dimensional (2D) gaze visualization techniques include scan paths, attentional maps, and areas of interest.

While traditional diagnostic eye tracking studies have focused on a person's gaze behavior while reading texts or observing images, new application areas have emerged with the advancement of technology. These include the perception of digital content such as websites, user interfaces, digital games, and three-dimensional (3D) virtual environments (VEs). However, current gaze visualization techniques do not adequately support interactive visual analysis of 3D VEs. On the one hand, they are not well adapted to the challenges posed by 3D space (e.g., occlusion problems or varying viewpoints). On the other hand, they do not allow for quickly determining which parts of a 3D model were viewed the longest. This lack of suitable techniques currently results in a time-consuming evaluation of eye tracking studies in 3D VEs: usually, video recordings of the displayed content have to be examined frame by frame, with gaze data represented as superimposed gaze plots. Thus, more suitable techniques are required for faster visual gaze analysis in various application domains (e.g., virtual interactive trainings [Duchowski et al. 2000], social networking environments, and computer-aided design). Such techniques could, for example, assist in quickly examining 3D product design features or people's attentional foci when confronted with a critical situation.

This paper makes a methodological and technological contribution to the investigation of visual attention in 3D VEs. To facilitate eye tracking studies in 3D VEs, we propose a set of adapted and novel gaze visualization techniques: superimposed 3D scan paths and 3D attentional maps, as well as a models of interest timeline depicting viewed models. We have also developed a gaze analysis tool for the evaluation of eye tracking studies in static 3D VEs. The techniques and the tool are described together with related work in the following section. In Section 3, we discuss a survey conducted among eye tracking and visualization experts to assess the usefulness of our proposed techniques.

2 Gaze Visualization Techniques for Three-dimensional Virtual Environments

Common gaze visualization techniques for evaluating eye tracking studies include scan paths and attentional maps, but these fall short when it comes to 3D VEs. For example, the gaze behavior of customers in a virtual 3D store could be evaluated with regard to product and advertisement placements. Potential research questions could ask in which order items were observed or how visual interest is generally distributed. For answering questions like these, we introduce a set of gaze visualization techniques adapted for static 3D VEs. These techniques include superimposed visualizations of 3D scan paths and 3D attentional maps, as well as a models of interest timeline depicting the order of fixated objects. For all of them, we assume a 3D scene with several static models, which users can freely explore and for which sample gaze data was collected. Furthermore, we presume the absence of transparent phenomena (e.g., fog or glass). Our approach uses 3D gaze positions calculated as intersections between gaze rays and viewed models (see Section 2.4).

2.1 Three-dimensional Scan Paths

A common approach for depicting eye tracking data is the scan path visualization superimposed on a stimulus representation. Scan paths show the order of eye movements by drawing connected lines (saccades) between subsequent gaze positions (fixations). Fixation and saccade plots imply data clustering by grouping information into meaningful units (fixations).
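The paper does not spell out how raw samples are grouped into fixations; one common approach is a dispersion-threshold (I-DT) procedure. The following Python sketch is purely illustrative (the `(t, x, y)` sample format and the threshold values are our assumptions, and SVEETER itself is implemented in C#/XNA):

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group chronological (t, x, y) gaze samples into fixations using a
    simple dispersion-threshold (I-DT) scheme.
    Returns a list of (t_start, t_end, centroid_x, centroid_y) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the samples stay within max_dispersion
        # (measured as horizontal spread + vertical spread, in pixels).
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            window = samples[i:j + 1]
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1  # no fixation here; advance by one sample
    return fixations
```

The resulting centroids become the fixation markers (circles, spheres, or cones), and consecutive centroids are connected by straight lines to depict saccades.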
While fixations indicate aspects attracting an observer's attention, saccades provide information about how fixations are related to each other. Fixations are usually displayed as circles varying in size depending on the fixation duration, and saccades as straight lines.

Scan paths are frequently used for static (e.g., images and texts) and dynamic (e.g., videos) 2D stimuli. For other stimuli, superimposed fixation plots are usually applied to recorded videos of the presented content. This often implies a time-consuming frame-by-frame video data analysis. One solution, proposed by Ramloll et al. [2004], is a fixation net for dynamic 3D non-stereoscopic content, for which gaze positions are mapped onto flattened objects. Duchowski et al. [2000] presented binocular scan paths in 3D VEs, depicting 2D gaze positions together with gaze depths (gaze depth paths).

In contrast to Duchowski et al., we propose a monocular scan path depicting 3D gaze positions as intersections of a gaze ray with a viewed model. Besides traditional spherical representations (see Figure 1a), we use conical fixation representations pointing at the corresponding gaze position (see Figure 1b), since these can integrate additional information about varying camera positions. A cone can thus represent the gaze position (apex), fixation duration (base size), viewing direction (orientation), and viewing distance (height) within one representation. The traditional saccade representation can cause problems in 3D, because saccade lines may cross through surfaces. A simple solution we propose is to maintain the traditional saccade representation while allowing the rendering options of 3D models to be adapted (e.g., wireframe rendering or hiding objects) for tracing linked fixations.

Figure 1: Two alternative fixation representations for 3D scan paths: spheres (a) and cones (b).

It is also important to visualize how the location and viewing direction of the virtual camera changed during observation. This may aid in finding out, for example, whether a scene was observed from diverse locations. We propose visualizing the camera path with traces pointing at the respective gaze positions. Figure 2 shows an example of a camera path in which the camera locations are depicted as red lines and the viewing directions as straight blue lines. Displayed scan and camera paths can be filtered by means of the models of interest timeline to provide a better overview.

Figure 2: Two examples of camera paths (viewpoints and viewing directions) and fixations.

2.2 Models of Interest Timeline

A timeline visualization maps data against time and facilitates finding events before, after, or during a given time interval. Thus, it can be used to answer questions such as: Has object x been observed repeatedly? In what order and for how long were objects looked at? In this regard, the Object vs. Time View [Lessing and Linge 2002] is a series of timelines for depicting areas of interest (AOIs). Each AOI is assigned a time row, resulting in an AOI matrix. With an increasing number of AOIs and shorter fixation times, the table grows and may hinder a good overview.

Therefore, we propose a space-filling models of interest (MOI) timeline (see Figure 3) as a compact illustration that effectively utilizes the assigned screen space by omitting voids. We suggest using the term model of interest to describe distinct 3D objects. The MOI timeline gives an overview of a user's gaze distribution based on viewed models. Each model is labeled with a specific color, which can be manually adapted. By assigning the same color to different objects, semantic groups can be defined, for example, to group similar-looking models, object hierarchies, or closely arranged items. A legend provides an overview of the assigned colors (see the left area of Figure 5).

Zooming and selection techniques may aid in making data visible that would otherwise be suppressed due to limited display sizes. Based on TimeZoom [Dachselt and Weiland 2006], the MOI timeline can be dragged to a desired position for horizontal scrolling and supports continuous zooming (e.g., via a zoom slider, see Figure 3). For this purpose, the viewing area is divided into a large detail area for zooming (see Figure 3) and a small overview area displaying the entire collection. Displayed scan and camera paths can be filtered with respect to a selected period: navigation markers appear when the path visualizations are employed (see Figure 3) and are hidden otherwise. The user can simply click and drag the markers to define intervals of interest within the timeline. While dragging the timeline, the user can play back the data of interest defined by this fixed interval. Finally, additional details for a timeline entry (e.g., object identifiers, start and finish times) can be displayed by hovering over it with the mouse cursor. Context menus are triggered by right-clicking on items, for instance, to change their color.

2.3 Three-dimensional Attentional Maps

An attentional map (or heatmap) is an aggregated representation depicting areas of visual attention, primarily for static 2D stimuli. This is substantiated by the fact that an attentional map usually has the same dimensions (width and height) as the underlying stimulus [Wooding 2002]. Since fixations are merely accumulated, attentional maps do not provide information about gaze sequences, but they are suitable for indicating areas that attracted visual attention over a certain period of time. Visualizing gaze data directly in the 3D VE enables an aggregated representation of longer observations (in contrast to the traditional frame-by-frame evaluation).

We propose 3D attentional maps as superimposed aggregated visualizations for virtual 3D stimuli: projected, object-based, and surface-based attentional representations (see Figure 4). A projected attentional map is a 2D representation of 3D gaze distributions for an arbitrary selected view (e.g., a top view as in Figure 4a). For an object-based attentional representation, one color is mapped to the surface of each model based on its received visual attention (see Figure 4b). This allows for quickly evaluating an object's visual attractiveness while maintaining information about its spatial relationship with other models (e.g., two adjacent models that both received high visual attention). Surface-based attentional maps display aggregated fixation data as heatmaps directly on a model's surface using a vertex-based mapping (see Figure 4c).
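The vertex-based mapping just mentioned can be approximated as follows. This sketch is our own illustration rather than code from the paper's tool: it assumes fixation durations are accumulated at mesh vertices near each 3D gaze position with a linear distance falloff (the falloff and the `radius` parameter are assumptions), then normalized for color mapping:

```python
import math

def vertex_heat(vertices, fixations, radius=0.5):
    """Accumulate fixation durations onto mesh vertices near each 3D gaze
    position and normalize to [0, 1] for heatmap coloring.
    vertices:  list of (x, y, z) tuples
    fixations: list of ((x, y, z) gaze position, duration) tuples"""
    heat = [0.0] * len(vertices)
    for gaze, duration in fixations:
        for i, vertex in enumerate(vertices):
            d = math.dist(gaze, vertex)
            if d < radius:
                # Weight falls off linearly with distance from the gaze point.
                heat[i] += duration * (1.0 - d / radius)
    peak = max(heat)
    return [h / peak for h in heat] if peak > 0 else heat
```

The normalized per-vertex values can then be mapped onto a color ramp (e.g., blue to red) during rendering, yielding a surface-based heatmap like the one in Figure 4c.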
Figure 3: An example of the models of interest timeline with selection markers for confining the displayed scan and camera paths.

Figure 4: Advanced attentional maps for application in three-dimensional virtual environments: (a) projected (top view), (b) object-based, (c) surface-based.

This provides detailed insights into gaze distributions across models' surfaces, allowing quick conclusions about which aspects of a model attracted visual interest.

A combination of these techniques offers different levels of detail for data investigation. While a projected heatmap may give an overview of the gaze distribution across a scene, an object-based attentional map can provide information about the models' visual attractiveness when zooming in. Finally, the surface-based technique allows for close examinations of viewed models.

2.4 Implementation

For the implementation of the presented gaze visualization techniques, a virtual 3D scene was required, for which we used Microsoft's XNA Game Studio 3.0 (based on C#). Our system was confined to static 3D VEs without any transparent phenomena (e.g., smoke or semi-transparent textures). Users can freely explore the scene by moving their camera viewpoint via mouse and keyboard controls. In addition, an integration of the Tobii 1750 eye tracker with XNA allowed for logging 3D gaze target positions on models' surfaces. For this purpose, a 3D collision ray is determined from the 2D screen-based gaze positions supplied by the eye tracker. The gaze ray is used to calculate and log its intersection with virtual objects at the precision level of a single polygonal triangle of a model [Möller and Trumbore 2005]. The processed data are stored in log files for post-analysis.

The presented visualization techniques were implemented in a prototype of a gaze analysis software tool: SVEETER (see Figure 5). It is based on XNA (for 3D scenes) and Windows Forms to benefit from existing interface elements (e.g., buttons and menus). SVEETER offers a coherent framework for loading 3D scenes and corresponding gaze data logs, as well as for deploying the adapted gaze visualization techniques. Multiple views are integrated to look at a scene from different viewpoints. Context menus offer the possibility to apply certain options to each view, such as adapted rendering options. The figures used throughout this paper to illustrate the different gaze visualization techniques were created with SVEETER.

Figure 5: A screenshot from SVEETER illustrating multiple views of a scene. The MOI timeline is shown in the lower area, with its legend displayed to the upper left.

3 User Study

We conducted a user study with eye tracking and visualization experts to assess the usefulness of the presented techniques. The method and results are discussed in the following paragraphs.

Participants. Group 1 consisted of 20 eye tracking professionals and researchers, aged between 23 and 52 years (mean [M] = 34.50). Participants in this group rated their general eye tracking knowledge¹ higher than average (M = 3.85, standard deviation [SD] = 0.96). Group 2 included 8 local data visualization and computer graphics experts, aged between 25 and 35 years (M = 28.25). This group rated their eye tracking knowledge below average (M = 1.25, SD = 1.09), but their expertise in computational visualization above average (M = 3.63, SD = 1.22).

Measures. The usefulness of the gaze visualization techniques was investigated in an online survey². Each technique was briefly described with screenshots. Respondents were asked to rate their agreement with statements such as "Cone visualizations are useful for representing fixations in virtual environments." The qualitative part of the survey collected comments about the usefulness, possible improvements, and potential applications of the techniques.

Procedure and Design. Group 1 answered the questions based on brief textual and pictorial descriptions of the gaze visualization techniques provided in the online survey. For a better understanding of the new gaze visualizations, group 2 could use the SVEETER tool at our university.

¹ Rated on a Likert scale from 1 (poor) to 5 (excellent).
² The survey included 19 Likert scales, from 1 (do not agree at all) to 5 (extremely agree), and 6 qualitative open questions, using LimeSurvey (version 1.80+).
After welcoming each participant of group 2, a short introduction to visual gaze analysis was provided and the tool was briefly presented. Following this, sample gaze data for a 3D scene and a set of 9 predefined tasks were given to each individual. The aim was systematic acquaintance with the techniques, not an efficiency evaluation. Tasks included, for example, finding out whether particular parts of an object received high visual attention and whether observers changed their viewpoints. After completing the tasks and familiarizing themselves with the different techniques, the visualization experts were asked to fill out the online survey. On average, each session took about 45 minutes.

Results. Means and standard deviation values are omitted in this paper, since none of the subjective results showed significant differences between group 1 and group 2. In general, participants agreed that gaze visualization techniques facilitate eye tracking analysis. The detailed results are shown in Figure 6.

Figure 6: Agreement ratings from both groups with the corresponding standard deviations.

Scan paths were generally considered useful for studying gaze sequences. In this context, depicting camera paths was regarded as important as well; thus, a combination of camera and scan paths was found useful by both groups. In contrast, cones were considered only moderately suitable for representing fixations. Instead, a simple combination of spherical fixation representations with viewing directions and positions was preferred. While the motivation for individually depicting fixations and saccades was not evident to everybody, it was regarded as useful in the overall rating.

Besides limiting scan and camera paths temporally via the MOI timeline, additional filtering functionality was requested. Participants agreed that the MOI timeline helps to detect gaze patterns. Group 2 showed great interest in the MOI timeline when testing SVEETER. Both groups agreed that a zoomable user interface is convenient for the timeline. Filtering objects of interest in the timeline was rated beneficial. Suggested improvements for the timeline included substituting color identification with iconic images and improving its suitability for scenes with many objects by grouping them dynamically or individually.

The SVEETER tool, combining scan paths, attentional maps, and the MOI timeline, was considered highly useful. Thus, interviewees could imagine using SVEETER for evaluating eye tracking studies in 3D VEs. Multiple views were rated practical, together with the ability to assign different visualizations to each view. We observed that users testing SVEETER (group 2) often employed a combination of different techniques. Both scan paths and surface-based maps were, for example, used to investigate observation patterns on a model. Participants frequently faded out objects to gain a better overview. In addition, we noticed that the MOI timeline was often used instead of scan paths.

4 Conclusion and Future Work

In this paper, we presented a set of advanced gaze visualization techniques for investigating visual attention in 3D VEs. Three-dimensional scan paths assist in examining sequences of fixations and saccades, while camera paths provide valuable information about how a user navigated through a scene and from which locations objects were observed. The models of interest timeline may help to answer, for example, whether an object was viewed at a certain point in time or whether any cyclic viewing behavior existed. Finally, three types of attentional maps were discussed for examining how visual attention is distributed across a scene (projected heatmap), among 3D models (object-based attentional map), and across a model's surface (surface-based heatmap). A combination of these techniques was integrated in a toolkit for visual analysis of gaze data (SVEETER). The survey results from eye tracking and visualization experts indicate its potential usefulness for various application areas.

Since visual gaze analysis for 3D VEs is still at an early stage, our techniques may serve as a solid foundation and provide a basis for further development. This includes representing data from multiple users as well as developing and testing alternative techniques.

References

DACHSELT, R., AND WEILAND, M. 2006. TimeZoom: a flexible detail and context timeline. In CHI '06: Extended Abstracts on Human Factors in Computing Systems, ACM, 682–687.

DUCHOWSKI, A. T., SHIVASHANKARAIAH, V., RAWLS, T., GRAMOPADHYE, A. K., MELLOY, B., AND KANKI, B. 2000. Binocular eye tracking in virtual reality for inspection training. In ETRA '00, ACM, 89–96.

LESSING, S., AND LINGE, L. 2002. IICap: A new environment for eye tracking data analysis. Master's thesis, University of Lund, Sweden.

MÖLLER, T., AND TRUMBORE, B. 2005. Fast, minimum storage ray/triangle intersection. In SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, ACM.

RAMLOLL, R., TREPAGNIER, C., SEBRECHTS, M., AND BEEDASY, J. 2004. Gaze data visualization tools: opportunities and challenges. In Proceedings of the Eighth International Conference on Information Visualisation (July), 173–180.

WOODING, D. S. 2002. Fixation maps: quantifying eye-movement traces. In ETRA '02, ACM, 31–36.