Gaze visualizations represent an effective way to gain fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies in three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models-of-interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. We also discuss a prototype toolkit that combines an implementation of our proposed techniques. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.
Covers deep learning for non-grid 3D data such as point clouds and meshes, as well as inherently graph-based data. Examples of inherently graph-based data include brain connectivity analysis, scientific article citation networks, and (social) network analysis.
Alternative download link:
https://www.dropbox.com/s/2o3cofcd6d6e2qt/geometricGraph_deepLearning.pdf?dl=0
With a focus on hardware-centric deep learning, and end-to-end deep learning pipelines for diagnosis including imaging optimization
Alternative download link:
https://www.dropbox.com/s/bmdg2vzp6k9p9pe/portable_medicalDiagnostics_embeddedComputing.pdf?dl=0
Covers image restoration techniques such as denoising, deblurring, and super-resolution for 3D images and models.
From classical computer vision techniques to contemporary deep learning based processing for both ordered and unordered point clouds, depth maps and meshes.
An Experimental Study into Objective Quality Assessment of Watermarked Images (CSCJournals)
In this paper, we study the quality assessment of watermarked and attacked images using extensive experiments and related analysis. The process of watermarking usually leads to loss of visual quality, and it is therefore crucial to estimate the extent of quality degradation and its perceived impact. To this end, we have analyzed the performance of four image quality assessment (IQA) metrics, namely the Structural Similarity Index (SSIM), the Singular Value Decomposition Metric (M-SVD), the Image Quality Score (IQS), and PSNR, on watermarked and attacked images. The watermarked images are obtained by using three different schemes, viz., (1) DCT-based random number sequence watermarking, (2) DWT-based random number sequence watermarking, and (3) RBF Neural Network based watermarking. The signed images are attacked by using five different image processing operations. We observe that the metrics behave identically across all three watermarking schemes. An important conclusion of our study is that PSNR is not a suitable metric for IQA, as it does not correlate well with the human visual system's (HVS) perception. It is also found that M-SVD scatters significantly after embedding the watermark and after attacks, as compared to SSIM and IQS; it is therefore a less effective quality assessment metric for watermarked and attacked images. In contrast to PSNR and M-SVD, SSIM and IQS exhibit more stable and consistent performance. Their comparison further reveals that, except for the case of counterclockwise rotation, IQS scatters less for the other four attacks used in this work. It is concluded that IQS is comparatively more suitable for quality assessment of signed and attacked images.
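The abstract's central claim, that equal PSNR can hide very different perceptual quality, can be illustrated with a minimal pure-Python sketch. The global SSIM below is a simplification (the full metric averages this statistic over local windows), and the sample signals are made-up 1D "images":

```python
from statistics import mean
from math import log10

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher = closer to the reference)."""
    mse = mean((a - b) ** 2 for a, b in zip(ref, img))
    return float("inf") if mse == 0 else 10 * log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=255.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; practical SSIM averages local windows."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = mean(ref), mean(img)
    vx = mean((a - mx) ** 2 for a in ref)
    vy = mean((b - my) ** 2 for b in img)
    cov = mean((a - mx) * (b - my) for a, b in zip(ref, img))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Two distortions with identical MSE, hence identical PSNR:
ref = [100, 110, 120, 130, 140, 150, 160, 170]
shifted = [v + 10 for v in ref]                                  # brightness shift
noisy = [v + (10 if i % 2 else -10) for i, v in enumerate(ref)]  # alternating noise

print(psnr(ref, shifted), psnr(ref, noisy))          # equal: same MSE
print(ssim_global(ref, shifted), ssim_global(ref, noisy))  # SSIM separates them
```

SSIM rates the structure-preserving brightness shift higher than the structure-destroying noise even though PSNR cannot tell them apart, which mirrors the paper's finding about PSNR and the HVS.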
Shallow introduction for Deep Learning Retinal Image Analysis (PetteriTeikariPhD)
Overview of retinal imaging techniques such as fundus photography, optical coherence tomography (OCT) along with future upgrades such as multispectral imaging, OCT angiography, adaptive optics imaging and polarization-sensitive OCT. This is followed by an overview of deep learning image analysis methods suitable to be used with retinal imaging techniques.
Alternative download link: https://www.dropbox.com/s/n01w02cjaf68vbo/retina_deepLearning_pipeline.pdf?dl=0
Detection and Tracking of Objects: A Detailed Study (IJEACS)
Detecting and tracking objects are among the most widespread and challenging tasks that a surveillance system must perform to determine meaningful events and activities, and to automatically interpret and retrieve video content. An object can be a queue of people, a human, a head, or a face. The goal of this article is to survey detection and tracking methods, classify them into different categories, and identify new trends. We introduce the main trends, convey the fundamental ideas behind each method, and show their limitations, with the aim of enabling more effective video analytics.
ADVANCED FACE RECOGNITION FOR CONTROLLING CRIME USING PCA (IAEME Publication)
Face recognition has been a rapidly developing, challenging, and fascinating area with respect to real-world applications. The task of face recognition has been actively researched in recent years. With data and information gathered in abundance, there is an urgent need for high security. This paper gives a state-of-the-art review of significant human face recognition research.
Multimodal RGB-D+RF-based sensing for human movement analysis (PetteriTeikariPhD)
Combining RGB-D based computer vision with commodity Wifi for pose estimation and human movement analysis for action recognition.
Think of applications especially in healthcare settings, where WiFi access points already exist and adding USB WiFi dongles to a Raspberry Pi (or dedicated chips) is a very easy way to create "operational awareness" of all your patients.
Alternative download link:
https://www.dropbox.com/s/awkqqfhibesjcb9/multimodal_remote_MovementSensing.pdf?dl=0
A Review of BSS Based Digital Image Watermarking and Extraction Methods (IOSR Journals)
The field of signal processing has witnessed the strong emergence of a new technique, Blind Signal Processing (BSP), which rests on a sound theoretical foundation. An offshoot of BSP is known as Blind Source Separation (BSS). These digital signal processing techniques have wide and varied potential applications. The term blind indicates that both the source signals and the mixing procedure are unknown. One of the more interesting applications of BSS is in the field of image data security/authentication, where digital watermarking is proposed. Watermarking is a promising technique to help protect data security and intellectual property rights. The plethora of digital image watermarking methods is surveyed and discussed here with their features and limitations. The literature survey is thus presented in two major categories: digital image watermarking methods, and BSS-based techniques for digital image watermarking and extraction.
ENCRYPTION BASED WATERMARKING TECHNIQUE FOR SECURITY OF MEDICAL IMAGE (ijcsit)
This paper proposes an encryption-based image watermarking scheme for medical images using a customized quantization of wavelet coefficients and a cryptosystem based on a chaotic cipher of the Singular Value Decomposition (SVD). In order to strengthen the robustness of our algorithm and provide extra security, an improved SVD-CHAOS embedding and extraction procedure is used to scramble the watermark logo in the preprocessing step of the proposed method. In the watermark embedding process, an R-level discrete wavelet transform is applied to the host image. The high-frequency wavelet coefficients are selected to carry the scrambled watermarks by using adaptive quantization low bit modulation (LBM). The proposed image watermarking method withstands a variety of attacks and correctly extracts the hidden watermark without significant degradation in image quality, as shown when the Peak Signal to Noise Ratio (PSNR) and Normalized Correlation (NC) performance of the proposed algorithm is compared with that of other related techniques.
Deep Learning for Biomedical Unstructured Time Series (PetteriTeikariPhD)
1D convolutional neural networks (CNNs) for time series analysis, with inspiration from beyond the biomedical field. A short introduction to the various steps involved in time series analysis, including outlier detection, imputation, denoising, segmentation, classification, and forecasting.
Available also from:
https://www.dropbox.com/s/cql2jhrt5mdyxne/timeSeries_deepLearning.pdf?dl=0
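As a minimal illustration of the core operation behind such 1D-CNN models, a "valid" 1D convolution (strictly, cross-correlation, as deep learning frameworks implement it) followed by a ReLU can be sketched in pure Python; the series and kernel values are made-up examples:

```python
def conv1d_valid(signal, kernel, bias=0.0):
    """1D cross-correlation ('convolution' in deep learning usage), 'valid' padding."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k)) + bias
        for i in range(len(signal) - k + 1)
    ]

def relu(xs):
    """Elementwise rectified linear unit."""
    return [max(0.0, x) for x in xs]

# An edge-detector-like kernel responds to upward steps in the series.
series = [0, 0, 0, 1, 1, 1, 0, 0]
kernel = [-1.0, 0.0, 1.0]
feature_map = relu(conv1d_valid(series, kernel))
print(feature_map)  # → [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

A trained 1D CNN stacks many such learned kernels, so different feature maps fire on different local temporal patterns (steps, spikes, oscillations) in the input series.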
Adversarial Multi-Scale Features Learning for Person Re-Identification (ijtsrd)
Person re-identification (Re-ID) is the task of matching a target person across different cameras; it has drawn extensive attention in computer vision and has become an essential component of video surveillance systems. Re-ID can be considered a problem of image retrieval. Existing person re-identification methods depend mostly on single-scale appearance information. In this work, a deep model with Multi-scale Feature Representation Learning (MFRL) using Convolutional Neural Networks (CNN) and a Random Batch Feature Mask (RBFM) is proposed for Re-ID. The RBFM is inspired by the DropBlock and Batch Drop Block (BDB) dropout-based approaches. Great challenges remain in the Re-ID task. First, in different scenarios, the appearance of the same pedestrian changes dramatically because of frequent body misalignment, various background clutters, large variations in camera views, and occlusion. Second, in a public space, different pedestrians may wear the same or similar clothes, so the distinctions between different pedestrian images are subtle. These issues make Re-ID a huge challenge. The proposed methods are only performed in the training phase and discarded in the testing phase, thus enhancing the effectiveness of the model. Our model achieves the state of the art on the popular benchmark datasets, including Market-1501, DukeMTMC-reID, and CUHK03. Besides, we conduct a set of ablation experiments to verify the effectiveness of the proposed methods.
Mrs. D. Radhika | D. Harini | N. Kirujha | Dr. M. Duraipandiyan | M. Kavya, "Adversarial Multi-Scale Features Learning for Person Re-Identification", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42562.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42562/adversarial-multiscale-features-learning-for-person-reidentification/mrs-d-radhika
Rosengrant Gaze Scribing In Physics Problem Solving (Kalle)
Eye-tracking has been widely used for research purposes in fields such as linguistics and marketing. However, there are many possibilities of how eye-trackers could be used in other disciplines like physics. A part of physics education research deals with the differences between novices and experts, specifically how each group solves problems. Though there has been a great deal of research about these differences, there has been no research that focuses on noticing exactly where experts and novices look while solving the problems. Thus, to complement the past research, I have created a new technique called gaze scribing. Subjects wear a head-mounted eye-tracker while solving electrical circuit problems on a graphics monitor. I monitor the scan patterns of the subjects and combine them with videotapes of their work while solving the problems. This new technique has yielded new information and elaborated on previous studies.
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTERING AND QUERY SCHEME... (Editor IJMTER)
Web mining techniques are used to analyze web page contents and usage details. Human facial images are shared on the internet and tagged with additional information. Auto face annotation techniques are used to annotate facial images automatically. Annotations are used in online photo search and management. Classification techniques are used to assign the facial annotation. Supervised or semi-supervised machine learning techniques are used to train the classification models. Facial images with labels are used in the training process; noisy and incomplete labels are referred to as weak labels. Search-based face annotation (SBFA) is performed by mining weakly labeled facial images available on the World Wide Web (WWW). An unsupervised label refinement (ULR) approach is used for refining the labels of web facial images with machine learning techniques. The ULR scheme enhances label quality using a graph-based, low-rank learning approach. The training phase is designed with facial image collection, facial feature extraction, feature indexing, and label refinement learning steps. Similar face retrieval and voting-based face annotation tasks are carried out in the testing phase. A Clustering-Based Approximation (CBA) algorithm is applied to improve scalability. A bisecting K-means clustering based algorithm (BCBA) and a divisive clustering based algorithm (DCBA) are used to group the facial images. A multi-step gradient algorithm is used for the label refinement process. The web face annotation scheme is enhanced to improve label quality with low refinement overhead. A noise reduction method is integrated with the label refinement process, and a duplicate name removal process is integrated with the system. The indexing scheme is enhanced with weight values for the labels. Social contextual information is used to manage query facial image relevancy issues.
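The retrieve-and-vote step at the heart of search-based face annotation can be sketched as follows; the feature vectors, names, and Euclidean similarity measure here are hypothetical placeholders, not the paper's actual features:

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def annotate_by_voting(query_vec, labeled_db, k=5):
    """Search-based face annotation: retrieve the k labeled faces most
    similar to the query and return the majority label among them."""
    neighbors = sorted(labeled_db, key=lambda item: dist(query_vec, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2D "facial feature" vectors with (possibly weak) name labels.
db = [
    ((0.1, 0.2), "alice"), ((0.2, 0.1), "alice"), ((0.15, 0.25), "alice"),
    ((0.9, 0.8), "bob"), ((0.8, 0.9), "bob"),
]
print(annotate_by_voting((0.12, 0.18), db, k=3))  # → alice
```

Label refinement (ULR) then improves the quality of the `labeled_db` entries themselves, which is what makes this simple voting step reliable despite weak web labels.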
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks (Kalle)
Analysis of recordings made by a wearable eye tracker is complicated by video stream synchronization, pupil coordinate mapping, eye movement analysis, and tracking of dynamic Areas Of Interest (AOIs) within the scene. In this paper a semi-automatic system is developed to help automate these processes. Synchronization is accomplished via side-by-side video playback control. A deformable eye template and calibration dot marker allow reliable initialization via simple drag and drop, as well as a user-friendly way to correct the algorithm when it fails. Specifically, drift may be corrected by nudging the detected pupil center to the appropriate coordinates. In a case study, the impact of surrogate nature views on physiological health and perceived well-being is examined via analysis of gaze over images of nature. A match-moving methodology was developed to track AOIs for this particular application but is applicable toward similar future studies.
In this report, Argus, a tool for generating visualizations for eye tracking data, is presented. There are numerous ways to visually present eye tracking data: heatmaps, scanpaths, gaze stripes, eye clouds, and AOI transition diagrams, to name a few. On top of that, there are multiple ways to interact with these visualizations, such as selecting users, stimuli, and fixation points to compare these features between the different visualizations. All of the aforementioned visualizations and interaction techniques are implemented in this tool. This report describes these visualizations and interactions, including their advantages and disadvantages and how they are used in understanding eye tracking data. Furthermore, the report also looks at the structure of the dataset, how the tool runs on a server, how data is stored, and the design philosophy of the website. Finally, the tool is previewed by means of an application example, and its performance and limitations are discussed.
Computers help us handle and process huge amounts of data. Most of the time this data is so dense that it is almost impossible to understand just by looking at the raw numbers. Some of the data can be analyzed by computers, but often there must be a real, thinking person who interprets the data, draws conclusions from it, and makes decisions. Scientific visualization is about converting numbers into a representation of reality, something more graphic, so that a human being can understand and/or communicate it.
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T... (Kalle)
Laboratory eyetrackers, constrained to a fixed display and static (or accurately tracked) observer, facilitate automated analysis of fixation data. Development of wearable eyetrackers has extended environments and tasks that can be studied at the expense of automated analysis. Wearable eyetrackers provide 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) surrounding individual fixation points. The synthesis of POR into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking.
We describe a system that segments POR videos into fixations and allows users to train a database-driven, object-recognition system. A correctly trained library results in a very accurate and semi-automated translation of raw POR data into a sequence of objects, regions or materials.
Leveraging Eye-gaze and Time-series Features to Predict User Interests and Bu... (Nelson J. S. Silva)
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye-tracker; mouse selections are also considered. The system provides an overlay visualization with recommended patterns, and an eye-history graph that supports users in the data exploration process. We conducted an experiment with 5 tasks in which 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features, and discusses the precision/recall of our model in comparison to final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps (Kalle)
Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions on eye tracking results. It is motivated here that visual span is an essential component of visualizations of eye-tracking data and an algorithm is proposed to allow the analyst to set the visual span as a parameter prior to generation of a heat map.
Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.
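A minimal sketch of the idea, assuming the visual span is modeled as the standard deviation of a Gaussian dropoff around each fixation (the grid size, fixations, and durations are made-up examples, and the real algorithm in the paper may differ):

```python
from math import exp

def heatmap(fixations, width, height, visual_span):
    """Accumulate duration-weighted Gaussian kernels around each fixation.
    `visual_span` acts as the Gaussian sigma, in the same units as x/y."""
    grid = [[0.0] * width for _ in range(height)]
    for (fx, fy, duration) in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += duration * exp(-d2 / (2 * visual_span ** 2))
    # Normalize to [0, 1] so values map onto a color gradient / transparency.
    peak = max(max(row) for row in grid)
    return [[v / peak for v in row] for row in grid] if peak > 0 else grid

fixs = [(2, 2, 300), (7, 5, 150)]   # (x, y, duration in ms)
narrow = heatmap(fixs, 10, 8, visual_span=1.0)
wide = heatmap(fixs, 10, 8, visual_span=3.0)
# A wider visual span spreads attention credit over a larger neighborhood,
# which is exactly why the analyst should choose it before generating the map.
```

The normalized values can drive both the color gradient and the transparency mentioned in the abstract, and thresholding them at fixed levels yields the optional contour lines.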
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co... (Kalle)
Relevance feedback (RF) mechanisms are widely adopted in Content-Based Image Retrieval (CBIR) systems to improve image retrieval performance. However, there exist some intrinsic problems: (1) the semantic gap between high-level concepts and low-level features and (2) the subjectivity of human perception of visual contents. The primary focus of this paper is to evaluate the possibility of inferring the relevance of images based on eye movement data. In total, 882 images from 101 categories are viewed by 10 subjects to test the usefulness of implicit RF, where the relevance of each image is known beforehand. A set of measures based on fixations are thoroughly evaluated, including fixation duration, fixation count, and the number of revisits. Finally, the paper proposes a decision tree to predict the user's input during the image searching tasks. The prediction precision of the decision tree is over 87%, which sheds light on a promising integration of natural eye movement into CBIR systems in the future.
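A hand-written stand-in for such a decision tree over the three fixation-based measures might look like the following; the thresholds and the rule structure are invented for illustration, since the paper's actual tree is learned from data:

```python
def predict_relevant(fixation_duration_ms, fixation_count, revisits):
    """Toy decision tree over fixation duration, fixation count, and revisits.
    Thresholds are illustrative placeholders, not values from the study."""
    if fixation_duration_ms > 800:
        return True               # a long dwell strongly suggests relevance
    if fixation_count >= 3 and revisits >= 1:
        return True               # repeated inspection also suggests relevance
    return False

print(predict_relevant(1200, 1, 0))  # True
print(predict_relevant(300, 4, 2))   # True
print(predict_relevant(200, 1, 0))   # False
```

In an implicit-RF CBIR loop, predictions like these would replace explicit relevant/irrelevant clicks as the feedback signal for the next retrieval round.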
Yamamoto: Development of Eye-Tracking Pen Display Based on Stereo Bright Pupil...
Intuitive user interfaces for PCs and PDAs, such as pen displays and touch panels, have become widely used in recent years. In this study, we have developed an eye-tracking pen display based on the stereo bright pupil technique. First, the bright pupil camera was developed by examining the arrangement of cameras and LEDs for the pen display. Next, a gaze estimation method was proposed for the stereo bright pupil camera, which enables one-point calibration. Then, the prototype of the eye-tracking pen display was developed. The accuracy of the system was approximately 0.7° on average, which is sufficient for human interaction support. We also developed an eye-tracking tabletop as an application of the proposed stereo bright pupil technique.
Wastlund: What You See Is Where You Go: Testing a Gaze-Driven Power Wheelchair ...
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
Vinnikov: Contingency Evaluation of Gaze-Contingent Displays for Real-Time Vis...
The visual field is the area of space that can be seen when an observer fixates a given point. Many visual capabilities vary with position in the visual field and many diseases result in changes in the visual field. With current technology, it is possible to build very complex real-time visual field simulations that employ gaze-contingent displays. Nevertheless, there are still no established techniques to evaluate such systems. We have developed a method to evaluate a system’s contingency by employing visual blind spot localization as well as foveal fixation. During the experiment, gaze-contingent and static conditions were compared. There was a strong correlation between predicted results and gaze-contingent trials. This evaluation method can also be used with patient populations and for the evaluation of gaze-contingent display systems, when there is need to evaluate a visual field outside of the foveal region.
Urbina: Pies with EYEs: The Limits of Hierarchical Pie Menus in Gaze Control
Pie menus offer several features which are advantageous especially for gaze control. Although the optimal numbers of slices per pie and of depth layers have already been established for manual control, these values may differ in gaze control due to differences in spatial accuracy and cognitive processing. Therefore, we investigated the layout limits for hierarchical pie menus in gaze control. Our user study indicates that providing six slices in multiple depth layers guarantees fast and accurate selections. Moreover, we compared two different methods of selecting a slice. Novices performed well with both, but selecting via selection borders produced better performance for experts than the standard dwell-time selection.
Urbina: Alternatives to Single Character Entry and Dwell Time Selection on Eye...
Eye typing could provide motor-disabled people a reliable method of communication, given that the text entry speed of current interfaces can be increased to allow for fluent communication. There are two reasons for the relatively slow text entry: dwell-time selection requires waiting a certain time, and single-character entry limits the maximum entry speed. We adopted a typing interface based on hierarchical pie menus, pEYEwrite [Urbina and Huckauf 2007], and included bigram text entry with one single pie iteration. Therefore, we introduced three different bigram building strategies. Moreover, we combined dwell-time selection with selection by borders, providing an alternative selection method and extra functionality. In a longitudinal study we compared participants' performance during character-by-character text entry with bigram entry and with text entry with bigrams derived by word prediction. Data showed large advantages of the new entry methods over single-character text entry in speed and accuracy. Participants preferred selecting by borders, which allowed them faster selections than the dwell-time method.
Tien: Measuring Situation Awareness of Surgeons in Laparoscopic Training
The study of surgeons’ eye movements is an innovative way of assessing skill and situation awareness, in that a comparison of eye movement strategies between expert surgeons and novices may show differences that can be used in training. Our preliminary study compared eye movements of 4 experts and 4 novices performing a simulated gall bladder removal task on a dummy patient with an audible heartbeat and simulated vital signs displayed on a secondary monitor. We used a head-mounted Locarna PT-Mini eyetracker to record fixation locations during the operation. The results showed that novices concentrated so hard on the surgical display that they were hardly able to look at the patient’s vital signs, even when the heart rate audibly changed during the procedure. In comparison, experts glanced occasionally at the vitals monitor, thus being able to observe the patient's condition.
Takemura: Estimating 3D Point of Regard and Visualizing Gaze Trajectories Und...
The portability of an eye tracking system encourages us to develop a technique for estimating the 3D point-of-regard. Unlike conventional methods, which estimate the position in the 2D image coordinates of the mounted camera, such a technique can represent richer gaze information for a person moving in a larger area. In this paper, we propose a method for estimating the 3D point-of-regard and a visualization technique for gaze trajectories under natural head movements for a head-mounted device. We employ a visual SLAM technique to estimate head configuration and extract environmental information. Even in cases where the head moves dynamically, the proposed method can obtain the 3D point-of-regard. Additionally, gaze trajectories are appropriately overlaid on the scene camera image.
Stevenson: Eye Tracking with the Adaptive Optics Scanning Laser Ophthalmoscope
Recent advances in high magnification retinal imaging have allowed for visualization of individual retinal photoreceptors, but these systems also suffer from distortions due to fixational eye motion. Algorithms developed to remove these distortions have the added benefit of providing arc second level resolution of the eye movements that produce them. The system also allows for visualization of targets on the retina, allowing for absolute retinal position measures to the level of individual cones. This paper will describe the process used to remove the eye movement artifacts and present analysis of their spectral characteristics. We find a roughly 1/f amplitude spectrum similar to that reported by Findlay (1971) with no evidence for a distinct tremor component.
Skovsgaard: Small Target Selection with Gaze Alone
Accessing the smallest targets in mainstream interfaces using gaze alone is difficult, but interface tools that effectively increase the size of selectable objects can help. In this paper, we propose a conceptual framework to organize existing tools and guide the development of new tools. We designed a discrete zoom tool and conducted a proof-of-concept experiment to test the potential of the framework and the tool. Our tool was as fast as and more accurate than the currently available two-step magnification tool. Our framework shows potential to guide the design, development, and testing of zoom tools to facilitate the accessibility of mainstream interfaces for gaze users.
San Agustin: Evaluation of a Low-Cost Open-Source Gaze Tracker
This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user’s eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements.
Qvarfordt: Understanding the Benefits of Gaze-Enhanced Visual Search
In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we either segment the image, mask previously seen areas of the image, or combine the two techniques, and repeat the search. We compare the different methods proposed for the second search phase by evaluating the inspection method using true positive and false negative rates, and subjective workload. Results show that gaze-blocked configurations reduced the subjective workload, and that gaze-blocking without segmentation showed the largest increase in true positive identifications and the largest decrease in false negative identifications of previously unseen objects.
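The gaze-blocking idea of masking previously seen areas in the second phase can be sketched as a coverage grid. The cell size and one-cell halo are illustrative parameters, not the paper's:

```python
def seen_mask(gaze_points, width, height, cell=50, radius=1):
    """Mark grid cells covered by gaze during the free-viewing phase.
    The cell size and the one-cell halo around each sample are
    illustrative choices, not the paper's parameters."""
    cols, rows = width // cell, height // cell
    mask = [[False] * cols for _ in range(rows)]
    for (x, y) in gaze_points:
        cx, cy = int(x) // cell, int(y) // cell
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                nx, ny = cx + dx, cy + dy
                if 0 <= nx < cols and 0 <= ny < rows:
                    mask[ny][nx] = True
    return mask  # True cells would be blocked out in the second phase
```

Rendering would then overlay an opaque block on every marked cell before the repeated search.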
Prats: Interpretation of Geometric Shapes: An Eye Movement Study
This paper describes a study that seeks to explore the correlation between eye movements and the interpretation of geometric shapes. This study is intended to inform the development of an eye tracking interface for computational tools to support and enhance the natural interaction required in creative design. A common criticism of computational design tools is that they do not enable manipulation of designed shapes according to all perceived features. Instead the manipulations afforded are limited by formal structures of shapes. This research examines the potential for eye movement data to be used to recognise and make available for manipulation the perceived features in shapes. The objective of this study was to analyse eye movement data with the intention of recognising moments in which an interpretation of shape is made. Results suggest that fixation duration and saccade amplitude prove to be consistent indicators of shape interpretation.
Porta: ceCursor: A Contextual Eye Cursor for General Pointing in Windows Envir...
Eye gaze interaction for disabled people is often dealt with by designing ad-hoc interfaces, in which the big size of their elements compensates for both the inaccuracy of eye trackers and the instability of the human eye. Unless solutions for reliable eye cursor control are employed, gaze pointing in ordinary graphical operating environments is a very difficult task. In this paper we present an eye-driven cursor for MS Windows which behaves differently according to the “context”. When the user’s gaze is perceived within the desktop or a folder, the cursor can be discretely shifted from one icon to another. Within an application window or where there are no icons, on the contrary, the cursor can be continuously and precisely moved. Shifts in the four directions (up, down, left, right) occur through dedicated buttons. To increase user awareness of the currently pointed spot on the screen while continuously moving the cursor, a replica of the spot is provided within the active direction button, resulting in improved pointing performance.
Park: Quantification of Aesthetic Viewing Using Eye Tracking Technology: The In...
The purpose of this study is to explore how the viewers’ previous training is related to their aesthetic viewing in various interactions with the form and the context, in relation to apparel design. Berlyne’s two types of exploratory behavior, diversive and specific, provided a theoretical framework to this study. Twenty female subjects (mean age=21, SD=1.089) participated. Twenty model images, posed by a male and a female model, were shown on an eye-tracker screen for 10 seconds each. The findings of this study verified Berlyne’s concepts of visual exploration. One of the different findings from Berlyne’s theory was that the untrained viewers’ visual attention tended to be more significantly focused on peripheral areas of visual interest, compared to the trained viewers, while there was no significant difference on the central, foremost areas of visual interest between the two groups. The overall aesthetic viewing patterns were also identified.
Palinko: Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simu...
We report on the results of a study in which pairs of subjects were involved in spoken dialogues and one of the subjects also operated a simulated vehicle. We estimated the driver’s cognitive load based on pupil size measurements from a remote eye tracker. We compared the cognitive load estimates based on the physiological pupillometric data and driving performance data. The physiological and performance measures show high correspondence suggesting that remote eye tracking might provide reliable driver cognitive load estimation, especially in simulators. We also introduced a new pupillometric cognitive load measure that shows promise in tracking cognitive load changes on time scales of several seconds.
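A generic pupillometric load proxy in this spirit is the percent change of pupil diameter relative to a baseline window (a common measure, not necessarily the paper's new one):

```python
def pupil_load_series(diameters, baseline_n=10):
    """Percent change of pupil diameter relative to the mean of an
    initial baseline window; a generic pupillometric load proxy,
    not necessarily the measure introduced in the paper."""
    baseline = sum(diameters[:baseline_n]) / baseline_n
    return [100.0 * (d - baseline) / baseline for d in diameters]
```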
Nakayama: Estimation of Viewers' Response for Contextual Understanding of Tasks...
To estimate viewers' contextual understanding, features of their eye movements while viewing question statements in response to definition statements, together with features of correct and incorrect responses, were extracted and compared. Twelve directional features of eye movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. A procedure for estimating the response was developed with Support Vector Machines, using these features. The estimation performance and accuracy were assessed across combinations of features. The number of definition statements, which needed to be memorized to answer the question statements during the experiment, affected the estimation accuracy. These results provide evidence that features of eye movements during reading statements can be used as an index of contextual understanding.
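One plausible reading of the twelve directional features is a 12-bin histogram of eye-movement directions in 30-degree sectors; the paper's exact feature definition may differ:

```python
import math

def direction_features(gaze_path, bins=12):
    """Bin the direction of successive eye-movement vectors into twelve
    30-degree sectors. This is one plausible reading of the 'twelve
    directional features'; the paper's definition may differ."""
    hist = [0] * bins
    for (x0, y0), (x1, y1) in zip(gaze_path, gaze_path[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue  # no movement, no direction to bin
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi / bins)) % bins] += 1
    return hist
```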
Nagamatsu: User-Calibration-Free Gaze Tracking with Estimation of the Horizont...
This paper presents a user-calibration-free method for estimating the point of gaze (POG) on a display accurately, with estimation of the horizontal angles between the visual and optical axes of both eyes. By using one pair of cameras and two light sources, the optical axis of the eye can be estimated. This estimation is carried out by using a spherical model of the cornea. The point of intersection of the optical axis of the eye with the display is termed the POA. By detecting the POAs of both eyes, the POG is approximately estimated as the midpoint of the line joining the POAs of both eyes on the basis of the binocular eye model; therefore, we can estimate the horizontal angles between the visual and optical axes of both eyes without requiring user calibration. We have developed a prototype system based on this method using a 19-inch display with two pairs of stereo cameras. We evaluated the system experimentally with 20 subjects who were at a distance of 600 mm from the display. The result shows that the average root-mean-square error (RMSE) of the POG measurement in the display screen coordinate system is 16.55 mm (equivalent to less than 1.58°).
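The binocular estimate described above, intersecting each eye's optical axis with the display and taking the midpoint of the two POAs, can be sketched as follows (the coordinate frame, with the display in the plane z = 0, and the axis representation are illustrative):

```python
def intersect_display(eye, axis):
    """Intersect a ray (eye position, optical-axis direction) with the
    display plane z = 0. Assumes the axis points toward the display
    (nonzero z-component); units are illustrative (e.g., mm)."""
    ex, ey, ez = eye
    ax, ay, az = axis
    t = -ez / az                       # ray parameter at the display plane
    return (ex + t * ax, ey + t * ay)  # the POA in display coordinates

def estimate_pog(left_eye, left_axis, right_eye, right_axis):
    """Approximate the point of gaze (POG) as the midpoint of the two
    per-eye points of intersection (POAs), as described in the abstract."""
    lx, ly = intersect_display(left_eye, left_axis)
    rx, ry = intersect_display(right_eye, right_axis)
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)
```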
Nagamatsu: Gaze Estimation Method Based on an Aspherical Model of the Cornea S...
A novel gaze estimation method based on a novel aspherical model of the cornea is proposed in this paper. The model is a surface of revolution about the optical axis of the eye. The calculation method is explained on the basis of the model. A prototype system for estimating the point of gaze (POG) has been developed using this method. The proposed method has been found to be more accurate than the gaze estimation method based on a spherical model of the cornea.
mation into meaningful units (fixations). While fixations indicate aspects attracting an observer's attention, saccades provide information about how fixations are related to each other. Thereby, fixations are usually displayed as circles varying in size depending on the fixation duration, and saccades as straight lines.

Scan paths are frequently used for static (e.g., images and texts) and dynamic (e.g., videos) 2D stimuli. For other stimuli, superimposed fixation plots are usually applied to recorded videos of the presented content. This often implies a time-consuming frame-by-frame video data analysis. One solution proposed by Ramloll et al. [2004] is a fixation net for dynamic 3D non-stereoscopic content, for which gaze positions are mapped onto flattened objects. Representing binocular scan paths in 3D VRs has been presented by Duchowski et al. [2000], for which 2D gaze positions and depths (gaze depth paths) are depicted.

Figure 1: Two alternative fixation representations, spheres (a) and cones (b), are presented for 3D scan paths.

In contrast to Duchowski et al., we propose a monocular scan path depicting 3D gaze positions as intersections of a gaze ray and a viewed model. Besides traditional spherical representations (see Figure 1a), we used conical fixation representations pointing at the corresponding gaze position (see Figure 1b), since they may integrate additional information about varying camera positions. Thus, a cone could represent the gaze position (apex), fixation duration (cone's base), viewing direction (cone's orientation), and viewing distance (cone's height) within one representation. The traditional saccade representation can cause problems in 3D, because saccade lines may cross through surfaces. A simple solution we propose is to maintain the traditional saccade representation with the possibility of adapting the rendering options of 3D models (e.g., wireframe models or hiding objects) for determining linked fixations.

Figure 2: Two examples of camera paths (viewpoints and viewing directions) and fixations.

It is also important to visualize how the locations and viewing directions of the virtual camera have changed during observation. This may aid in finding out, for example, whether a scene was observed from diverse locations. We propose to visualize the camera path with traces pointing at the respective gaze positions. Figure 2 shows an example of a camera path in which the camera locations are depicted as red lines and the viewing directions as straight blue lines. Displayed scan and camera paths can be filtered by means of the models of interest timeline to provide a better overview.

2.2 Models of Interest Timeline

A timeline visualization maps data against time and facilitates finding events before, after, or during a given time interval. Thus, it can be used to answer several questions, such as: Has object x been observed repetitively? In what order and for how long are objects looked at? In this regard, the Object vs. Time View [Lessing and Linge 2002] is a series of timelines for depicting Areas of Interest (AOIs). Each AOI is assigned a time row, resulting in an AOI matrix. With an increasing number of AOIs and shorter fixation times, the table size increases and may hinder a good overview.

Therefore, we propose a space-filling models of interest (MOI) timeline (see Figure 3) for a compact illustration that effectively utilizes the assigned screen space by omitting voids. We suggest using the term Model of Interest to describe distinct 3D objects. The MOI timeline gives an overview of a user's gaze distribution based on viewed models. Each model is labeled with a specific color, which can be manually adapted. By assigning the same value to different objects, semantic groups can be defined, for example, for grouping similar-looking models, object hierarchies, or closely arranged items. A legend provides an overview of the assigned colors (see the left area of Figure 5).

Zooming and selection techniques may aid in making data visible which would otherwise be suppressed due to limited display sizes. Based on TimeZoom [Dachselt and Weiland 2006], the MOI timeline can be dragged to a desired position to offer horizontal scrolling, and it supports continuous zooming (e.g., via a zoom slider, see Figure 3). For this purpose, the viewing area is divided into a large detail area for zooming (see Figure 3) and a small overview area displaying the entire collection. Displayed scan and camera paths can be filtered with respect to a selected period. For this purpose, navigation markers appear if the path visualizations are employed (see Figure 3); otherwise the markers are hidden. The user can simply click and drag the markers to define intervals of interest within the timeline. While dragging the timeline, the user can play back data of interest as defined by this fixed interval. Finally, additional details for a timeline entry (e.g., object identifiers, start and finish times) can be displayed when hovering over it with the mouse cursor. Context menus are triggered by right-clicking on items, for instance, for changing an item's color.

2.3 Three-dimensional Attentional Maps

An attentional map (or heatmap) is an aggregated representation depicting areas of visual attention for primarily static 2D stimuli. This is substantiated by the fact that an attentional map usually has the same dimensions (width and height) as the underlying stimulus [Wooding 2002]. Since fixations are merely accumulated, attentional maps do not provide information about gaze sequences, but are suitable for indicating areas attracting visual attention over a certain period of time. Visualizing gaze data directly in the 3D VE enables an aggregated representation of longer observations (in contrast to the traditional frame-by-frame evaluation).

We propose 3D attentional maps as superimposed aggregated visualizations for virtual 3D stimuli: projected, object-based, and surface-based attentional representations (see Figure 4). A projected attentional map is a 2D representation of 3D gaze distributions for selected arbitrary views (e.g., a top view as in Figure 4a). For an object-based attentional representation, one color is mapped to the surface of each model based on its received visual attention (see Figure 4b). This allows for quickly evaluating an object's visual attractiveness while maintaining information about the spatial relationships with other models (e.g., two adjacent models have received high visual attention).
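The object-based mapping just described can be sketched by accumulating fixation durations per model and mapping each model's normalized share of attention to a color. The gaze-log format and the blue-to-red ramp are illustrative, not SVEETER's actual encoding:

```python
def object_attention_colors(fixations, cold=(0, 0, 255), hot=(255, 0, 0)):
    """Accumulate fixation durations per model and map each model's
    normalized attention to a blue-to-red color. The (model_id, duration)
    log format and the color ramp are illustrative assumptions."""
    totals = {}
    for model_id, duration in fixations:
        totals[model_id] = totals.get(model_id, 0.0) + duration
    peak = max(totals.values())
    colors = {}
    for model_id, total in totals.items():
        w = total / peak  # 0 = least viewed, 1 = most viewed
        colors[model_id] = tuple(
            round(c0 + w * (c1 - c0)) for c0, c1 in zip(cold, hot)
        )
    return colors
```

Semantic groups, as introduced for the MOI timeline, could share one accumulator by mapping several model identifiers to a single key.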
Surface-based attentional maps display aggregated fixation data as heatmaps directly on a model's surface using a vertex-based mapping (see Figure 4c). This provides detailed insights into gaze distributions across models' surfaces, allowing quick conclusions about which model aspects attracted visual interest.

Figure 3: An example of the models of interest timeline with selection markers for confining displayed scan and camera paths.

Figure 4: Advanced attentional maps for the application in three-dimensional virtual environments: (a) projected (top view), (b) object-based, and (c) surface-based.

A combination of these techniques offers different levels of detail for data investigation. While a projected heatmap may give an overview of the gaze distribution across a scene, an object-based attentional map can provide information about the models' visual attractiveness when zooming in. Finally, the surface-based technique allows for close examinations of viewed models.

2.4 Implementation

For the implementation of the presented gaze visualization techniques, a virtual 3D scene was required, for which we used Microsoft's XNA Game Studio 3.0 (based on C#). Our system was confined to static 3D VEs without any transparent phenomena (e.g., smoke or semi-transparent textures). Users can freely explore the scene by moving their camera viewpoints via mouse and keyboard controls. In addition, an integration of the Tobii 1750 Eye Tracker and XNA allowed for logging 3D gaze target positions on models' surfaces. For this purpose, a 3D collision ray needed to be determined based on the 2D screen-based gaze positions supplied by the eye tracker. The gaze ray was used to calculate and log its intersection with virtual objects at the precision level of a polygonal triangle on a model [Möller and Trumbore 2005]. The processed data were stored in log files for post-analysis.

The presented visualization techniques were implemented in a prototype of a gaze analysis software tool: SVEETER (see Figure 5). It is based on XNA (for 3D scenes) and Windows Forms to benefit from existing interface elements (e.g., buttons and menus). SVEETER offers a coherent framework for loading 3D scenes and corresponding gaze data logs, as well as deploying the adapted gaze visualization techniques. Multiple views are integrated to look at a scene from different viewpoints. Context menus offer the possibility to apply certain options to each view, such as adapted rendering options. The figures used throughout this paper to illustrate the different gaze visualization techniques were created with SVEETER.

Figure 5: A screenshot from SVEETER illustrating multiple views of a scene. The MOI timeline is shown in the lower area with its legend displayed to the upper left.

3 User Study

We have conducted a user study with eye tracking and visualization experts to assess the usefulness of the presented techniques. The method and results are discussed in the following paragraphs.

Participants. Group 1 consisted of 20 eye tracking professionals and researchers, aged between 23 and 52 years (Mean [M] = 34.50). Participants in this group rated their general eye tracking knowledge¹ higher than average (M = 3.85, Standard Deviation [SD] = 0.96). Group 2 included 8 local data visualization and computer graphics experts, aged between 25 and 35 years (M = 28.25). This group rated their eye tracking knowledge below average (M = 1.25, SD = 1.09), but their expertise in computational visualization above average (M = 3.63, SD = 1.22).

Measures. The usefulness of the gaze visualization techniques was investigated in an online survey². Each technique was briefly described with screenshots. Respondents were asked to rate their agreement with statements such as "Cone visualizations are useful for representing fixations in virtual environments." The qualitative part of the survey collected comments about usefulness, improvements, and possible applications of the techniques.

Procedure and Design. Group 1 had to answer the questions based on brief textual and pictorial descriptions of the gaze visualization techniques provided in the online survey. For a better understanding of the new gaze visualizations, group 2 could use the SVEETER tool at our university.

¹ Rated on a Likert scale from 1 (poor) to 5 (excellent).
² The survey included 19 Likert scales, from 1 (do not agree at all) to 5 (extremely agree), and 6 qualitative open questions using LimeSurvey (Version 1.80+).
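The polygon-level gaze mapping in Section 2.4 relies on a ray/triangle test in the style of Möller and Trumbore [2005]; a minimal sketch with vectors as plain tuples (no XNA types):

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection. Returns the distance t
    along the gaze ray to the hit point, or None if the ray misses."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None          # ray parallel to the triangle plane
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None          # outside first barycentric bound
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None          # outside second barycentric bound
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None  # hit must lie in front of the eye
```

Testing every triangle of every model per gaze sample is linear in scene size; a spatial index would usually be added, which is beyond this sketch.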
Figure 6: Agreement ratings from both groups with the corresponding standard deviations.

After welcoming each participant of group 2, a short introduction to visual gaze analysis was provided and the tool was briefly presented. Following this, sample gaze data for a 3D scene and a set of 9 predefined tasks were given to each individual. Thereby, the aim was systematic acquaintance with the techniques, not an efficiency evaluation. Tasks included, for example, finding out whether particular parts of an object received high visual attention and whether observers changed their viewpoints. After completing the tasks and familiarizing themselves with the different techniques, the visualization experts were asked to fill out the online survey. On average, each session took about 45 minutes.

Results. Means and standard deviation values are omitted in this paper, since none of the subjective results showed significant differences between group 1 and group 2. In general, participants agreed that gaze visualization techniques facilitate eye tracking analysis. The detailed results are shown in Figure 6.

Scan paths are generally useful for studying gaze sequences. In this context, depicting camera paths is regarded as important as well. Thus, a combination of camera and scan paths was found useful by both groups. In contrast, cones were considered only moderately suitable for representing fixations. Instead, a simple combination of the spherical fixation representations with viewing directions and positions was preferred. While the motivation for individually depicting fixations and saccades was not evident to everybody, it was regarded as useful in the overall rating.

Besides limiting scan and camera paths temporally via the MOI timeline, additional filtering functionality was requested. Participants agreed that the MOI timeline helps to detect gaze patterns. Group 2 showed great interest in the MOI timeline when testing SVEETER. Both groups agreed that a zoomable user interface is convenient for the timeline. Filtering objects of interest in the timeline was rated beneficial. Suggested improvements for the timeline included substituting color identification with iconic images and improving suitability for scenes with many objects by grouping them dynamically or individually.

The SVEETER tool, combining scan paths, attentional maps, and the MOI timeline, was considered highly useful. Thus, interviewees could imagine using SVEETER for evaluating eye tracking studies in 3D VEs. Multiple views were rated as practical, together with the ability to assign different visualizations to each view. We observed that a combination of different techniques was often employed by users testing SVEETER (group 2). Both scan paths and surface-based maps were, for example, used to investigate observation patterns on a model. Participants frequently faded out objects to gain a better overview. In addition, we noticed that the MOI timeline was often used instead of scan paths.

4 Conclusion and Future Work

In this paper, we presented a set of advanced gaze visualization techniques for investigating visual attention in 3D VEs. Three-dimensional scan paths assist in examining sequences of fixations and saccades. Thereby, camera paths provide valuable information about how a user navigated through a scene and from which locations objects have been observed. The models of interest timeline may help answer, for example, whether an object was viewed at a certain point in time or whether any cyclic viewing behavior existed. Finally, three types of attentional maps were discussed to examine how visual attention is distributed across a scene (projected heatmap), among 3D models (object-based attentional map), and across a model's surface (surface-based heatmap). A combination of these techniques was integrated in a toolkit for visual analysis of gaze data (SVEETER). The survey results from eye tracking and visualization experts indicate its potential usefulness for various application areas.

Since visual gaze analysis for 3D VEs is still in an early stage, our techniques may serve as a solid foundation and provide a basis for further development. This includes representing data from multiple users as well as developing and testing alternative techniques.

References

DACHSELT, R., AND WEILAND, M. 2006. TimeZoom: a flexible detail and context timeline. In CHI '06: Extended Abstracts on Human Factors in Computing Systems, ACM, 682–687.

DUCHOWSKI, A. T., SHIVASHANKARAIAH, V., RAWLS, T., GRAMOPADHYE, A. K., MELLOY, B., AND KANKI, B. 2000. Binocular eye tracking in virtual reality for inspection training. In ETRA '00, ACM, 89–96.

LESSING, S., AND LINGE, L. 2002. IICap: A new environment for eye tracking data analysis. Master's thesis, University of Lund, Sweden.

MÖLLER, T., AND TRUMBORE, B. 2005. Fast, minimum storage ray/triangle intersection. In SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, ACM.

RAMLOLL, R., TREPAGNIER, C., SEBRECHTS, M., AND BEEDASY, J. 2004. Gaze data visualization tools: opportunities and challenges. In Eighth International Conference on Information Visualisation (July), 173–180.

WOODING, D. S. 2002. Fixation maps: quantifying eye-movement traces. In ETRA '02, ACM, 31–36.