Estimating 3D Point-of-regard and Visualizing Gaze Trajectories under Natural Head Movements

Kentaro Takemura (e-mail: kenta-ta@is.naist.jp), Yuji Kohashi, Tsuyoshi Suenaga, Jun Takamatsu, Tsukasa Ogasawara
Graduate School of Information Science, Nara Institute of Science and Technology, Japan


Abstract

The portability of an eye tracking system encourages us to develop a technique for estimating the 3D point-of-regard. Unlike conventional methods, which estimate the position in the 2D image coordinates of the mounted camera, such a technique can represent richer gaze information for a person moving through a larger area. In this paper, we propose a method for estimating the 3D point-of-regard, together with a visualization technique for gaze trajectories under natural head movements, using a head-mounted device. We employ a visual SLAM technique to estimate the head pose and extract environmental information. Even when the head moves dynamically, the proposed method can obtain the 3D point-of-regard. Additionally, gaze trajectories are appropriately overlaid on the scene camera image.

CR Categories: H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction styles

Keywords: Eye Tracking, 3D Point-of-regard, Visual SLAM

1 Introduction

Recently, there have been active developments in portable eye tracking systems; Mobile Eye (Applied Science Laboratories) and EMR-9 (NAC) are examples. Researchers are interested in the gaze measurement of a person moving dynamically. When a head-mounted eye tracking system is used, the point-of-regard (POR) is generally measured as a point on the image plane. It is therefore difficult to analyze gaze trajectories quantitatively due to head movements. If we employ an extra sensor such as a motion capture system or a magnetic sensor to estimate the 3D POR, the measuring range is limited.

Some methods for estimating the 3D POR have been proposed to solve these problems. A head-mounted stereo camera pair has been proposed [Sumi et al. 2004], but the system configuration differs substantially from a normal eye tracking system. A method using binocular view lines was also proposed [Mitsugami et al. 2003], but the 3D POR was estimated relative to the observer's head, and when the observer's position changes dynamically it is difficult to record the 3D POR in world coordinates. A novel method using interest points has been proposed [Munn and Pelz 2008], and the estimation of the 3D POR in world coordinates was achieved. However, for analyzing gaze trajectories, a visualization method that lets us intuitively recognize the observer's 3D POR is still needed.

In this paper, we propose a method to estimate the 3D POR in real-time and a visualization of gaze trajectories using augmented reality techniques for intuitive understanding. The rest of this paper is structured as follows: Section 2 describes the system configuration, and Section 3 explains the method to estimate the 3D POR. In Section 4, we show experimental results, and Section 5 concludes this paper.

2 Eye Tracking System for Estimating 3D Point-of-regard

The purpose of this research is to estimate the 3D POR of an observer who moves freely. In general, head pose (position and orientation) is estimated by an extra sensor such as a motion capture system or a magnetic sensor, but the measuring range is limited: the measuring range of a magnetic sensor is about 5 m, and many cameras are needed to cover a wide area with a motion capture system. To overcome these limitations, we propose to estimate the head pose using a scene camera installed in the eye tracking system.

ViewPoint (Arrington Research) is used as the head-mounted eye tracking system. Its scene camera is intended only to show the POR on an image, so it is a low-resolution camera. For this research, the scene camera was replaced with a FireFly MV (Point Grey Research), a high-resolution camera, as shown in Figure 1.

Figure 1: Eye tracking system
2.1 Estimation of Head Pose by Visual SLAM

Our system has two software modules for estimating the 3D POR: one for estimating the head pose and one for estimating the gaze direction in head coordinates. First, we describe the software for estimating the head pose.

Simultaneous localization and mapping (SLAM), a technique for estimating robot pose in unknown environments, has been developed in robotics research. A mobile robot with a Laser Range Finder (LRF) can move in unknown environments using SLAM [Smith and Cheeseman 1986]. SLAM using a camera is called Visual SLAM, and with it the 6-DOF camera pose can be estimated. In the case of eye tracking, it is necessary to estimate the head pose in an unknown environment, thus Visual SLAM can be utilized. Several Visual SLAM techniques have been proposed; MonoSLAM [Davison et al. 2007] and PTAM [Klein and Murray 2007] are representative examples. In particular, PTAM achieves robust estimation by utilizing bundle adjustment, so we applied it to our eye tracking system. In PTAM, each interest point, extracted using the FAST corner detector [Rosten and Drummond 2006], is mapped to 3D coordinates, and the camera pose is estimated at the same time. In the case of a head-mounted eye tracking system, the scene camera moves along with the observer's head, thus computing the extrinsic parameters of the camera is equivalent to estimating the head pose.
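To make this equivalence concrete, the following minimal sketch (our own notation, not PTAM code) turns a world-to-camera extrinsic estimate into a head pose: invert the rigid transform and compose it with a fixed mounting offset. The offset T_head_to_cam is a hypothetical calibration input; it is the identity when the camera frame is taken as the head frame.

```python
import numpy as np

def head_pose_in_world(R_wc, t_wc, T_head_to_cam=np.eye(4)):
    """Head pose in world coordinates from visual SLAM extrinsics.

    R_wc (3x3), t_wc (3,): extrinsics mapping world points into the
    camera frame, x_cam = R_wc @ x_world + t_wc.  T_head_to_cam maps
    head-frame points into the camera frame (a fixed mounting offset
    from a one-off calibration; identity if camera frame == head frame).
    """
    T_wc = np.eye(4)                       # world -> camera transform
    T_wc[:3, :3] = R_wc
    T_wc[:3, 3] = np.asarray(t_wc).ravel()
    # Inverting the extrinsics gives the camera pose in the world frame;
    # chaining the mounting offset gives the head pose in the world frame.
    return np.linalg.inv(T_wc) @ T_head_to_cam
```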

2.2 Software for Eye Tracking

We now describe the software for eye tracking. OpenEyes [Li et al. 2006] is substituted for the API supplied with ViewPoint. The two software modules could be merged through the use of open source software. To estimate the gaze direction with OpenEyes, it is necessary to detect the center of the pupil and the corneal reflection. An infrared LED is installed in ViewPoint, so OpenEyes can work with ViewPoint. Figure 2 shows the result of using ViewPoint with OpenEyes: the left image is the original image captured by the eye camera, and the right image shows the result of detecting the pupil and the Purkinje image.

Figure 2: Pupil and Purkinje image illuminated by near-infrared light
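OpenEyes itself implements a more elaborate detection algorithm; purely as an illustration of the two features being detected, the hedged OpenCV sketch below locates the dark pupil and the bright corneal reflection by dual thresholding. The threshold values are arbitrary assumptions of this sketch, not OpenEyes parameters.

```python
import cv2

def detect_pupil_and_purkinje(eye_gray, dark_thresh=40, bright_thresh=220):
    """Rough pupil / corneal-reflection localization in an IR eye image.

    Returns (pupil_center, purkinje_center) as (x, y) tuples, or None
    for a feature whose blob cannot be found.
    """
    def largest_blob_center(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    # Pupil: dark region under near-infrared illumination.
    _, dark = cv2.threshold(eye_gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    # Purkinje image: small bright specular reflection on the cornea.
    _, bright = cv2.threshold(eye_gray, bright_thresh, 255, cv2.THRESH_BINARY)
    return largest_blob_center(dark), largest_blob_center(bright)
```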
                                                                              pute the values of u, v using Eq.3.
3 Estimation and Visualization of 3D Point-of-regard

In previous research, the 3D POR has been estimated using SLAM [Munn and Pelz 2008]. However, that 3D POR is computed using two key frames to search along an epipolar line. In our proposed method, the 3D POR can be estimated in real-time without two key frames. The following details our approach.

We assume that the 3D POR is located on a plane formed by three interest points detected by the FAST corner detector, and use this plane to project the POR from 2D to 3D coordinates. The procedure for estimating the 3D POR consists of the following three steps:

1. Apply Delaunay triangulation to the interest points in Visual SLAM.
2. Determine the triangle which includes the 2D POR estimated by OpenEyes.
3. Compute the 3D POR using the triangle position in world coordinates.

3.1 Estimation of 3D Point-of-Regard

First, a 2D POR estimated by OpenEyes has to be associated with interest points extracted by PTAM. Therefore, triangles are built up from the interest points by Delaunay triangulation, as shown in Figure 3. Next, the triangle which includes the 2D POR is found to determine the plane on which the 3D POR lies. Every point on a triangle's plane can be represented as a linear combination of its three vertices (Eq. 1):

P = (1 - u - v)P_1 + uP_2 + vP_3    (1)

Figure 3: Delaunay triangulation using interest points

P and {P_1, P_2, P_3} are the POR and the vertices of the triangle in the plane coordinate system, respectively, and u and v are the ratios of the triangle areas (P_1, P, P_3) and (P_1, P_2, P), respectively. When the three vertices and the 2D POR are given as in Eq. 2, it is easy to compute u and v using Eq. 3:

\begin{bmatrix} x_t \\ y_t \end{bmatrix} = (1 - u - v) \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + u \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} + v \begin{bmatrix} x_3 \\ y_3 \end{bmatrix}    (2)

\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{bmatrix}^{-1} \begin{bmatrix} x_t - x_1 \\ y_t - y_1 \end{bmatrix}    (3)

u and v are defined in the plane coordinate system, so they stay constant under the conversion to 3D coordinates (Eq. 4). The 3D POR can then be estimated as shown in Figure 4; in PTAM, interest points are mapped to 3D coordinates, so each vertex has a 3D position.

\begin{bmatrix} X_t \\ Y_t \\ Z_t \end{bmatrix} = (1 - u - v) \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} + u \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} + v \begin{bmatrix} X_3 \\ Y_3 \\ Z_3 \end{bmatrix}    (4)

Figure 4: Estimation of 3D point-of-regard

If the center of the image is defined as the projected point of the head direction, the 3D projected point of the head direction can also be estimated. Figure 5 shows the results of our approach: Figure 5(a) shows the interest points extracted from the scene camera image, Delaunay triangulation is carried out in Figure 5(b), and the triangle which includes the 2D POR is determined in Figure 5(c). With these procedures the 3D POR can be estimated in real-time.

Figure 5: Delaunay triangulation using interest points. a) Interest points are extracted by the FAST feature detector, b) Delaunay triangulation is built on the interest points, c) the triangle which includes the 2D POR is determined.
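As a concrete illustration of the Eq. 1-4 pipeline, the sketch below (ours, not the actual implementation; scipy's Delaunay routines and the array layout are assumptions of this illustration) performs the three steps: triangulate the interest points, find the triangle containing the 2D POR, solve Eq. 3 for (u, v), and apply Eq. 4 with the vertices' 3D positions.

```python
import numpy as np
from scipy.spatial import Delaunay

def estimate_3d_por(por_2d, pts_2d, pts_3d):
    """Map a 2D POR to 3D via the triangle of interest points containing it.

    pts_2d: (N, 2) interest points in the scene image; pts_3d: (N, 3)
    their 3D positions from visual SLAM; por_2d: (2,) the 2D POR.
    Returns the 3D POR, or None if the POR lies outside every triangle.
    """
    pts_2d = np.asarray(pts_2d, dtype=float)
    pts_3d = np.asarray(pts_3d, dtype=float)
    por_2d = np.asarray(por_2d, dtype=float)

    tri = Delaunay(pts_2d)                              # step 1
    simplex = int(tri.find_simplex(por_2d[None, :])[0]) # step 2
    if simplex < 0:
        return None                                     # POR outside the mesh
    i1, i2, i3 = tri.simplices[simplex]
    (x1, y1), (x2, y2), (x3, y3) = pts_2d[i1], pts_2d[i2], pts_2d[i3]

    # Eq. 3: solve the 2x2 linear system for the area ratios u, v.
    A = np.array([[x2 - x1, x3 - x1],
                  [y2 - y1, y3 - y1]])
    u, v = np.linalg.solve(A, por_2d - [x1, y1])

    # Eq. 4: the same (u, v) combine the vertices' 3D coordinates (step 3).
    return (1 - u - v) * pts_3d[i1] + u * pts_3d[i2] + v * pts_3d[i3]
```

Because the Delaunay triangles partition the image, the containing triangle, and hence the local plane used for the projection, is unique.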


3.2 Gaze Trajectories using Augmented Reality Techniques

Some visualization techniques for gaze trajectories have been proposed [Wooding 2002]. Most visualization techniques are applied to a specific plane such as a display or a picture. However, when the observer moves freely, gaze trajectories cannot be recorded on a specific plane, making them difficult to analyze. A method for generating a fixation map suitable for time-varying situations has been proposed [Takemura et al. 2003], but the map cannot be properly created when the observer moves. Accordingly, we propose a method which overlays the 3D POR on the scene camera image using augmented reality techniques. The camera pose can be estimated in every frame using Visual SLAM, so the 3D positions of the interest points and the 3D POR can be backprojected onto the scene camera image. Therefore, even when the scene camera moves dynamically, gaze trajectories are kept on the scene camera image through their 3D positions, as sketched below.
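A minimal sketch of this overlay step, assuming a standard pinhole model: OpenCV's projectPoints stands in for whatever projection the tracker uses internally, and the intrinsics K and distortion coefficients are calibration inputs introduced for illustration.

```python
import cv2
import numpy as np

def overlay_gaze_trajectory(frame, pors_3d, rvec, tvec, K, dist=None):
    """Backproject stored 3D PORs into the current scene camera frame.

    rvec, tvec: current world-to-camera pose from visual SLAM, given as
    a Rodrigues rotation vector and a translation vector.  K: 3x3 camera
    intrinsics.  Drawing the trajectory from its 3D positions keeps it
    registered to the environment while the head (camera) moves.
    """
    if dist is None:
        dist = np.zeros(5)
    pts = np.asarray(pors_3d, dtype=np.float64).reshape(-1, 1, 3)
    img_pts, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    img_pts = img_pts.reshape(-1, 2)
    # Connect consecutive PORs into a polyline on the current frame.
    for p, q in zip(img_pts[:-1], img_pts[1:]):
        cv2.line(frame,
                 tuple(int(c) for c in p),
                 tuple(int(c) for c in q),
                 (0, 0, 255), 2)
    return frame
```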

4 Experiment of Estimating 3D POR

4.1 Experimental Condition

An experiment of estimating the 3D POR was conducted to confirm the feasibility of our proposed method. Figure 6 illustrates the experimental environment; gaze trajectories were measured while the observer looked at several posters. Additionally, the observer indicated his own POR with a laser pointer so that the 2D POR could be confirmed, as shown in Figure 7. To calculate the 2D POR in the scene image, a calibration was performed before the measurement: the observer is required to look at nine points on a calibration board, and therefore cannot move during calibration.

Figure 6: Experimental environment

Figure 7: Experimental setup

4.2 Experimental Results

We were able to estimate the 3D PORs. Figure 8 shows the estimated 3D PORs and head positions. The 3D POR values coincide with the actual locations of the four posters in Figure 8, and we confirmed that the 3D PORs could be estimated appropriately. As shown in Figure 9, the triangles which include the 2D POR and the projected point of the head direction could be computed in real-time. Additionally, gaze trajectories were backprojected onto the scene image along with the head pose using the interest points. Even when the observer was free to move dynamically, the gaze trajectories were kept on the scene camera image, as shown in Figure 10; thus we could follow the observer's gaze trajectories intuitively. Although we do not have ground truth for the 3D PORs, the overlaid 3D PORs show focus areas such as titles and figures on the posters, and the gaze trajectories are correctly kept on the image even when the pose of the scene camera changes. We confirmed the feasibility of our proposed method through this experiment.

Figure 8: Estimated 3D point-of-regard and head position
Figure 9: Snapshots of determining the triangles which include the 2D POR by Delaunay triangulation. In each snapshot, one triangle contains the projected head direction and another contains the 2D POR.

Figure 10: Snapshots of gaze trajectories visualized using augmented reality techniques. When the head-mounted eye tracking system moves without restriction, gaze trajectories are drawn on the environment appropriately. White arrows show corresponding points on the gaze trajectories between two images.


4.3 Discussion

Our proposed method can estimate 3D PORs in real-time, and gaze trajectories can be overlaid on the scene camera image using augmented reality techniques. However, our method has a disadvantage: it cannot recover the scale of the world coordinates, because the Visual SLAM we employ is a bearing-only SLAM. This problem must be solved before the accuracy of the estimated 3D PORs can be evaluated. If a ground-truth marker at a known position were used, we could compare the 3D POR against it; however, this would constrain the initial position, so we believe the problem should be solved by another approach. Once it is solved, we could compare the gaze trajectories of different people.
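To make the scale problem concrete, here is the marker-based fix just described (and argued against) as a sketch: given two mapped points whose true separation d_true is known, for example from a hypothetical hand measurement, a single ratio rescales the whole map and every estimated 3D POR.

```python
import numpy as np

def rescale_map(points_3d, p_a, p_b, d_true):
    """Fix the unknown scale of a bearing-only SLAM map.

    p_a, p_b: mapped 3D positions of two markers whose true separation
    d_true is known.  All map points, and any estimated 3D PORs, are
    multiplied by the same scale factor.
    """
    scale = d_true / np.linalg.norm(np.asarray(p_a) - np.asarray(p_b))
    return np.asarray(points_3d) * scale
```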
5 Conclusion

In this research, we proposed a method to estimate the 3D POR even when the observer moves freely. The head pose is estimated using Visual SLAM, and the 2D POR is associated with interest points to estimate the 3D POR. Our method assumes that the 3D POR lies on a plane formed by interest points; as a result, the 3D POR can be estimated in real-time. In addition to the method for estimating 3D PORs, a visualization technique for gaze trajectories using augmented reality techniques was realized. Therefore, even when the scene camera moves dynamically, the gaze trajectories stay registered to the environment. We confirmed the feasibility of our proposed method experimentally.

For future work, we need to solve the scale of the world coordinates. We also believe that it is not necessary to build a map from scratch every time, so localization will be achieved using a previously obtained map.

Acknowledgements

This research was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B) (No. 20700117).

References

DAVISON, A., REID, I., MOLTON, N., AND STASSE, O. 2007. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 6, 1052–1067.

KLEIN, G., AND MURRAY, D. 2007. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 1–10.

LI, D., BABCOCK, J., AND PARKHURST, D. 2006. openEyes: a low-cost head-mounted eye-tracking solution. In Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, 95–100.

MITSUGAMI, I., UKITA, N., AND KIDODE, M. 2003. Estimation of 3D gazed position using view lines. In Proceedings of the 12th International Conference on Image Analysis and Processing, 466–471.

MUNN, S., AND PELZ, J. 2008. 3D point-of-regard, position and head orientation from a portable monocular video-based eye tracker. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, 181–188.

ROSTEN, E., AND DRUMMOND, T. 2006. Machine learning for high-speed corner detection. In Proceedings of the 9th European Conference on Computer Vision, 430–443.

SMITH, R., AND CHEESEMAN, P. 1986. On the representation and estimation of spatial uncertainty. The International Journal of Robotics Research 5, 4, 56–68.

SUMI, K., SUGIMOTO, A., MATSUYAMA, T., TODA, M., AND TSUKIZAWA, S. 2004. Active wearable vision sensor: recognition of human activities and environments. In Proceedings of the IEEE International Conference on Informatics Research for Development of Knowledge Society Infrastructure, 15–22.

TAKEMURA, K., IDO, J., MATSUMOTO, Y., AND OGASAWARA, T. 2003. Drive monitoring system based on non-contact measurement system of driver's focus of visual attention. In Proceedings of the 2003 IEEE Intelligent Vehicles Symposium, 581–586.

WOODING, D. 2002. Fixation maps: quantifying eye-movement traces. In Proceedings of the 2002 Symposium on Eye Tracking Research & Applications, 31–36.

Moshnyaga The Use Of Eye Tracking For Pc Energy Management
 

Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Under Natural Head Movements

tion capture system or a magnetic sensor to estimate the 3D POR, the measuring range would be limited.

Several methods for estimating the 3D POR have been proposed to solve these problems. A head-mounted stereo camera pair has been proposed [Sumi et al. 2004], but its configuration differs substantially from that of a normal eye tracking system. A method using binocular view lines was also proposed [Mitsugami et al. 2003], but the 3D POR was estimated as a position relative to the observer's head; when the observer's position changes dynamically, it is difficult to record the 3D POR in world coordinates. A novel method using interest points [Munn and Pelz 2008] achieved estimation of the 3D POR in world coordinates.

entation) is estimated by an extra sensor such as a motion capture system or a magnetic sensor, but the measuring range is then limited: a magnetic sensor covers only about 5 m, and a motion capture system needs many cameras to cover a wide area. To overcome these limitations, we propose to estimate the head pose using a scene camera installed in the eye tracking system.

ViewPoint (Arrington Research) is used as the head-mounted eye tracking system. Its scene camera is intended only for checking the visibility of the POR on the image, so it is a low-resolution camera; for this research, it was replaced with a high-resolution FireFly MV (Point Grey Research) camera, as shown in Figure 1.

2.1 Estimation of Head Pose by Visual SLAM

Our system has two software modules for estimating the 3D POR: one for estimating the head pose and one for estimating the gaze direction in head coordinates. First, we describe the software for estimating the head pose.

Simultaneous localization and mapping (SLAM), a technique for estimating a robot's pose in unknown environments, has been developed in robotics research; a mobile robot with a Laser Range Finder (LRF) can move in unknown environments using SLAM [Smith and Cheeseman 1986]. SLAM using a camera is called Visual SLAM, and with it the 6-DOF camera pose can be estimated. In the case of eye tracking, the head pose must be estimated in an unknown environment, so Visual SLAM can be applied. Among the Visual SLAM techniques that have been proposed, MonoSLAM [Davison et al. 2007] and PTAM [Klein and Murray 2007] are representative examples; PTAM in particular achieves robust estimation through bundle adjustment, so we applied it to our eye tracking system. In PTAM, each interest point, extracted with the FAST corner detector [Rosten and Drummond 2006], is mapped to 3D coordinates, and the camera pose is estimated from these points. With a head-mounted eye tracking system, the scene camera moves along with the observer's head, so computing the extrinsic parameters of the camera is equivalent to estimating the head pose.
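Since the extrinsic parameters already encode the head pose, the remaining conversion is a single rigid-transform inversion. A minimal sketch in Python with NumPy (the function and variable names are our own for illustration, not PTAM's API):

    import numpy as np

    def head_pose_from_extrinsics(R_wc, t_wc):
        # SLAM-style extrinsics map world points into the camera frame:
        #   x_cam = R_wc @ x_world + t_wc
        # Inverting this rigid transform yields the head (camera) pose
        # expressed in world coordinates.
        R_head = R_wc.T           # head orientation in the world frame
        p_head = -R_wc.T @ t_wc   # head position (camera center) in the world frame
        return R_head, p_head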
Figure 2: Pupil and Purkinje image illuminated by near-infrared light (left: original eye camera image; right: detection result).

2.2 Software for Eye Tracking

We now describe the software for eye tracking. OpenEyes [Li et al. 2006] is substituted for the API supplied with ViewPoint, and the two software modules could be merged through the use of open source software. To estimate the gaze direction with OpenEyes, the centers of the pupil and of the corneal reflection must be obtained; since an infrared LED is installed in ViewPoint, OpenEyes can work on its images. Figure 2 shows the result of running OpenEyes on ViewPoint: the left image is the original image captured by the eye camera, and the right image shows the detected pupil and Purkinje image.
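OpenEyes itself implements a more robust pipeline (Starburst); as a rough illustration of why the near-infrared setup makes the detection tractable, the following much-simplified sketch finds a dark pupil and a bright corneal reflection by thresholding. The threshold values and helper names are assumptions, not the OpenEyes implementation:

    import cv2
    import numpy as np

    def detect_pupil_and_glint(eye_gray):
        # Pupil: under near-infrared illumination the pupil appears as the
        # darkest large blob (dark-pupil method). The threshold 40 is an
        # assumed value and would need tuning per camera.
        _, mask = cv2.threshold(eye_gray, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)      # largest dark region
        (px, py), _ = cv2.minEnclosingCircle(pupil)

        # Corneal reflection (Purkinje image): the brightest spot produced
        # by the IR LED; blurring suppresses single-pixel noise first.
        _, _, _, (gx, gy) = cv2.minMaxLoc(cv2.GaussianBlur(eye_gray, (5, 5), 0))

        # The pupil-center-to-glint vector is the usual input to the
        # nine-point calibration that maps it to a 2D POR.
        return (px, py), (gx, gy)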
3 Estimation and Visualization of 3D Point-of-regard

In previous research, the 3D POR has been estimated using SLAM [Munn and Pelz 2008]; however, that method computes the 3D POR from two key frames by searching along an epipolar line. In our proposed method, the 3D POR can be estimated in real-time without two key frames. The following details our approach.

To project the POR from 2D to 3D coordinates, we assume that the 3D POR lies on a plane spanned by three interest points detected by the FAST corner detector. The procedure for estimating the 3D POR consists of three steps (a code sketch of these steps is given at the end of Section 3.1):

1. Apply Delaunay triangulation to the interest points tracked by Visual SLAM.
2. Determine the triangle which includes the 2D POR estimated by OpenEyes.
3. Compute the 3D POR using the triangle's position in world coordinates.

Figure 3: Delaunay triangulation using interest points.
Figure 4: Estimation of 3D point-of-regard.

3.1 Estimation of 3D Point-of-Regard

First, the 2D POR estimated by OpenEyes has to be associated with the interest points extracted by PTAM. Therefore, triangles are built on the interest points by Delaunay triangulation, as shown in Figure 3. Next, the triangle which includes the 2D POR is found to determine the plane on which the 3D POR lies. Any point on a triangle's plane can be represented as a linear combination of its three vertices (Eq. 1):

P = (1 - u - v)P_1 + uP_2 + vP_3 \quad (1)

where P and {P_1, P_2, P_3} are the POR and the triangle vertices in the plane coordinate system, and u and v are the area ratios of the triangles (P_1, P, P_3) and (P_1, P_2, P), respectively. When the three vertices and the 2D POR are given as in Eq. 2, the values of u and v are easily computed by Eq. 3:

\begin{bmatrix} x_t \\ y_t \end{bmatrix} = (1 - u - v)\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + u\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} + v\begin{bmatrix} x_3 \\ y_3 \end{bmatrix} \quad (2)

\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{bmatrix}^{-1} \begin{bmatrix} x_t - x_1 \\ y_t - y_1 \end{bmatrix} \quad (3)

Since u and v are defined in the plane coordinate system, they can be kept constant for the conversion to 3D coordinates (Eq. 4), and the 3D POR is obtained as shown in Figure 4. In PTAM, the interest points are mapped to 3D, so each vertex has known 3D coordinates:

\begin{bmatrix} X_t \\ Y_t \\ Z_t \end{bmatrix} = (1 - u - v)\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} + u\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} + v\begin{bmatrix} X_3 \\ Y_3 \\ Z_3 \end{bmatrix} \quad (4)

If the center of the image is defined as the projected point of the head direction, the 3D projected point of the head direction can be estimated in the same way. Figure 5 shows the intermediate results of our approach: Figure 5(a) shows the interest points extracted from the scene camera image; Delaunay triangulation is then carried out as shown in Figure 5(b); finally, the triangle which includes the 2D POR is determined, as in Figure 5(c). With these procedures the 3D POR can be estimated in real-time.
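As a concrete illustration of Eqs. (1)-(4), the sketch below implements the three steps with off-the-shelf tools (SciPy's Delaunay triangulation plus a 2x2 linear solve). The variable names pts2d/pts3d, for the interest points in image coordinates and in the SLAM map, are assumptions for the sketch:

    import numpy as np
    from scipy.spatial import Delaunay

    def estimate_3d_por(por2d, pts2d, pts3d):
        """por2d: (2,) 2D POR in image coords; pts2d: (N, 2) interest
        points in image coords; pts3d: (N, 3) the same points in the map."""
        tri = Delaunay(pts2d)                  # step 1: triangulation
        s = tri.find_simplex(por2d)            # step 2: containing triangle
        if s == -1:
            return None                        # POR outside the triangulated hull
        i1, i2, i3 = tri.simplices[s]

        # Step 3, Eq. (3): solve the 2x2 system for the weights (u, v).
        A = np.column_stack((pts2d[i2] - pts2d[i1], pts2d[i3] - pts2d[i1]))
        u, v = np.linalg.solve(A, por2d - pts2d[i1])

        # Eq. (4): the same weights interpolate the triangle's 3D vertices.
        return (1 - u - v) * pts3d[i1] + u * pts3d[i2] + v * pts3d[i3]

The weights transfer from 2D to 3D because barycentric coordinates are invariant under the affine map between the image triangle and its planar counterpart in space, which is exactly the planarity assumption stated above.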
Figure 5: Delaunay triangulation using interest points. a) Interest points extracted by the FAST feature detector; b) Delaunay triangulation based on the interest points; c) determination of the triangle which includes the 2D POR.

3.2 Gaze Trajectories using Augmented Reality Techniques

Several visualization techniques for gaze trajectories have been proposed [Wooding 2002], but most of them apply to a specific plane such as a display or a picture. When the observer moves freely, gaze trajectories cannot be recorded on a specific plane, which makes them difficult to analyze. A method for generating a fixation map suited to time-varying situations has been proposed [Takemura et al. 2003], but such a map cannot be created properly while the observer moves. Accordingly, we propose a method which overlays the 3D POR on the scene camera image using augmented reality techniques. Since the camera pose is estimated in every frame by Visual SLAM, the 3D positions of the interest points and of the 3D POR can be backprojected onto the scene camera image. Therefore, even when the scene camera moves dynamically, the gaze trajectories stay anchored to the scene through their 3D positions.
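The backprojection itself is ordinary pinhole projection with the current SLAM pose. A minimal sketch using OpenCV, assuming the pose (rvec/tvec) and the calibrated intrinsics (K, dist) as inputs; this is an illustration, not the authors' implementation:

    import cv2
    import numpy as np

    def overlay_gaze_trajectory(frame, por3d_history, rvec, tvec, K, dist):
        # rvec/tvec: current world-to-camera pose from Visual SLAM
        # (Rodrigues rotation vector and translation); K/dist: scene
        # camera matrix and distortion coefficients.
        pts = np.asarray(por3d_history, dtype=np.float64).reshape(-1, 1, 3)
        proj, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
        proj = proj.reshape(-1, 2)

        # Connect successive PORs so the trajectory stays pinned to the
        # environment even while the camera moves.
        for p, q in zip(proj[:-1], proj[1:]):
            cv2.line(frame, tuple(map(int, p)), tuple(map(int, q)),
                     (0, 0, 255), 2)
        for p in proj:
            cv2.circle(frame, tuple(map(int, p)), 4, (0, 255, 0), -1)
        return frame

Because the trajectory is stored in 3D and reprojected per frame, no frame-to-frame image registration is needed for the overlay.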
4 Experiment of Estimating 3D POR

4.1 Experimental Condition

An experiment was conducted to confirm the feasibility of the proposed 3D POR estimation. Figure 6 illustrates the experimental environment: gaze trajectories are measured while the observer looks at several posters. Additionally, the observer indicated his own POR with a laser pointer so that the 2D POR could be verified, as shown in Figure 7. To map the 2D POR into the scene image, calibration was performed before the measurement; the observer is required to look at nine points on a calibration board and cannot move during calibration.

Figure 6: Experimental environment.
Figure 7: Experimental setup.
Figure 8: Estimated 3D point-of-regard and head position.

4.2 Experimental Results

We were able to estimate the 3D PORs. Figure 8 shows the estimated 3D PORs and head positions; the 3D POR values coincide with the actual locations of the four posters, confirming that the 3D PORs were estimated appropriately. As shown in Figure 9, the triangles which include the 2D POR and the projected point of the head direction were computed in real-time. Additionally, the gaze trajectories were backprojected onto the scene image along with the head pose using the interest points. Even when the observer was free to move dynamically, the gaze trajectories stayed on the scene camera image, as shown in Figure 10, so the observer's gaze trajectories could be followed intuitively. Although we do not have ground truth for the 3D PORs, the overlaid 3D PORs concentrate on salient areas such as titles and figures on the posters, and the gaze trajectories remain correctly registered to the image even as the scene camera pose changes. These results confirm the feasibility of the proposed method.

Figure 9: Snapshots of determining the triangles which include the 2D POR by Delaunay triangulation.
Figure 10: Snapshots of gaze trajectories visualized with augmented reality techniques. Even while the head-mounted eye tracking system moved without restriction, the gaze trajectories were drawn on the environment appropriately; white arrows show corresponding points on the gaze trajectories between two images.

4.3 Discussion

The proposed method estimates 3D PORs in real-time, and gaze trajectories can be overlaid on the scene camera image using augmented reality techniques. However, the method has a disadvantage: it cannot recover the scale of the world coordinates, because the Visual SLAM we employ is a bearing-only SLAM. This problem must be solved before the accuracy of the estimated 3D PORs can be evaluated. If a ground-truth marker with a known position were used, the 3D POR could be compared against it; however, this would constrain the initial position, so we believe the problem should be solved by another approach. Once it is solved, gaze trajectories of different people could be compared.
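One common workaround for this scale ambiguity in monocular SLAM, offered here as a suggestion rather than as the authors' solution, is to normalize the map once using a single hand-measured distance between two mapped points (the names below are hypothetical):

    import numpy as np

    def metric_scale(map_pt_a, map_pt_b, true_distance_m):
        """Scale factor from SLAM map units to meters, given two mapped
        points whose real-world separation was measured by hand."""
        d = np.linalg.norm(np.asarray(map_pt_b) - np.asarray(map_pt_a))
        return true_distance_m / d

    # Usage: apply the factor once to every mapped point and 3D POR, e.g.
    # s = metric_scale(map_pts[i], map_pts[j], 0.84)  # assumed 0.84 m apart
    # por3d_metric = s * por3d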
5 Conclusion

In this research, we proposed a method to estimate the 3D POR even when the observer moves freely. The head pose is estimated using Visual SLAM, and the 2D POR is associated with interest points to estimate the 3D POR under the assumption that the 3D POR lies on a plane spanned by interest points. As a result, the 3D POR can be estimated in real-time. In addition to the estimation method, a visualization technique for gaze trajectories using augmented reality techniques was realized, so that even when the scene camera moves dynamically, the gaze trajectories remain anchored to the environment. We confirmed the feasibility of the proposed method experimentally.

For future work, we need to solve the scale of the world coordinates. We also believe that it is not necessary to build a map from scratch every time; localization will be achieved using a previously obtained map.

Acknowledgements

This research was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B) (No. 20700117).

References

DAVISON, A., REID, I., MOLTON, N., AND STASSE, O. 2007. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 6, 1052-1067.

KLEIN, G., AND MURRAY, D. 2007. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 1-10.

LI, D., BABCOCK, J., AND PARKHURST, D. 2006. openEyes: a low-cost head-mounted eye-tracking solution. In Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, 95-100.

MITSUGAMI, I., UKITA, N., AND KIDODE, M. 2003. Estimation of 3D gazed position using view lines. In Proceedings of the 12th International Conference on Image Analysis and Processing, 466-471.

MUNN, S., AND PELZ, J. 2008. 3D point-of-regard, position and head orientation from a portable monocular video-based eye tracker. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, 181-188.

ROSTEN, E., AND DRUMMOND, T. 2006. Machine learning for high-speed corner detection. In Proceedings of the 9th European Conference on Computer Vision, 430-443.

SMITH, R., AND CHEESEMAN, P. 1986. On the representation and estimation of spatial uncertainty. The International Journal of Robotics Research 5, 4, 56-68.

SUMI, K., SUGIMOTO, A., MATSUYAMA, T., TODA, M., AND TSUKIZAWA, S. 2004. Active wearable vision sensor: recognition of human activities and environments. In Proceedings of the IEEE International Conference on Informatics Research for Development of Knowledge Society Infrastructure, 15-22.

TAKEMURA, K., IDO, J., MATSUMOTO, Y., AND OGASAWARA, T. 2003. Drive monitoring system based on non-contact measurement system of driver's focus of visual attention. In Proceedings of the 2003 IEEE Intelligent Vehicles Symposium, 581-586.

WOODING, D. 2002. Fixation maps: quantifying eye-movement traces. In Proceedings of the 2002 Symposium on Eye Tracking Research & Applications, 31-36.