7. Think aloud: Concurrent vs. Retrospective
Concurrent: may slow the user down; data not representative; not usually combined with eye-tracking (Johansen & Hansen, 2006)
Retrospective: the user omits or forgets data
8. Key references
Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E. (2009). Everyone knows what is interesting: Salient locations which should be fixated. Journal of Vision, 9(11), 1-22.
Pernice, K., & Nielsen, J. (2009). Eyetracking Methodology: 65 Guidelines for How to Conduct and Evaluate Usability Studies Using Eyetracking.
Johansen, S. A., & Hansen, J. P. (2006). Do we need eye trackers to tell where people look? In CHI '06 Extended Abstracts on Human Factors in Computing Systems (pp. 923-928). Montréal, Québec, Canada.
19. Why we need a better deliverable than the heat-map. Kara Pernice & Jakob Nielsen (2009)
20. Why we need a better deliverable than the heat-map (continued). Kara Pernice & Jakob Nielsen (2009)
21. Why we need a better deliverable than the heat-map: Recommendations
30 + 9 = 39 users required for a heat-map with representative data (85%)
24% extra users (9 users) required to account for eye-tracking data loss
Better to watch live eye-tracking and listen to the user thinking aloud
Good for slow-motion gaze replay later
Heat-maps can still be used, but only as illustration, not as primary data
Test with a small number of users (6), and test more frequently
Kara Pernice & Jakob Nielsen (2009)
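The recruitment arithmetic above can be sketched as a one-line formula. This is a minimal illustration, assuming the slide's figures (30 users for a representative heat-map, roughly 24% of recordings lost to tracking failures); the function name is hypothetical, not from Pernice & Nielsen.

```python
def users_needed(base_users: int, loss_rate: float) -> int:
    """Users to recruit so that about base_users yield usable eye-tracking
    data, given that a fraction loss_rate of recordings is unusable."""
    return round(base_users / (1 - loss_rate))

# Slide's figures: 30 usable users, ~24% data loss (assumed interpretation)
print(users_needed(30, 0.24))  # → 39, i.e. 9 extra users
```

With the slide's numbers this reproduces the 30 + 9 = 39 total; any other loss rate plugs into the same formula.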
22. Recommendations
“The way to happiness (or at least a high ROI) is to conserve your budget and invest most of it in discount usability methods. Test a small number of users in each study and rely on qualitative analysis and your own insight instead of chasing overly expensive quantitative data. The money you save can be spent on running many more studies. The two most fruitful things to test are your competitors’ sites and more versions of your own site. Use iterative design to try out a bigger range of design possibilities, polishing the usability as you go, instead of blowing your entire budget on one big study.”
Kara Pernice & Jakob Nielsen (2009)
23. Where did you look? Do we need eye-trackers? Johansen, S. A., & Hansen, J. P. (2006)
24. Where did you look? Do we need eye-trackers?
10 users, 17 Web designers, 8 web pages
Self-reported gaze pattern (by users) vs. predicted gaze pattern (by Web designers)
Users could reliably remember 70% of the web elements they had actually seen
Web designers could predict only 46% of the elements typically seen (squint test)
No difference between simple and complex web pages in the number of remembered items
Users were not good at remembering the Area of Interest (AOI) sequence
Memory differed between the logo and other web elements
Johansen, S. A., & Hansen, J. P. (2006)
25. Comments
Users repeated the eye movements; Web designers used paper
Users might have thought: “Better look at things I could recall”
N-gram analysis; Levenshtein distance 16 (SD = 12.8)
Johansen, S. A., & Hansen, J. P. (2006)
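The Levenshtein distance mentioned above measures how many insertions, deletions, and substitutions separate the self-reported AOI sequence from the actually fixated one. A minimal sketch, with hypothetical AOI labels (the element names are illustrative, not from the study):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insert/delete/substitute cost 1)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical example: actual gaze AOI sequence vs. self-reported sequence
actual   = ["logo", "menu", "headline", "image", "price"]
reported = ["logo", "headline", "price"]
print(levenshtein(actual, reported))  # → 2 (two AOIs omitted from the report)
```

A larger distance means the user's recollection diverged more from the recorded scan path; 16 with SD 12.8 indicates substantial, highly variable divergence.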
26. What to ask
What are the most interesting points?
Where would you look to do this?
Where did you look?
Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E. (2009); Johansen, S. A., & Hansen, J. P. (2006)
27. Scan-path: Saliency model and goal dependence
Foulsham, T., & Underwood, G. (2008). What can saliency models predict about eye movements?
D. Noton & L. Stark (1971). Scan-path theory: top-down recapitulation
39. Variant: Task-Interest point test: Blurred (Mark five probable points where the price could be located) [slide shows five numbered point markers on a blurred page]
40. Variant: Task-Interest point test: Unblurred (Mark five points where you would look to get the price) [slide shows five numbered point markers on the unblurred page]
41. Interest-point-based usability tests: Recommendations based on hypotheses
Sequence: blurred (squint) tests before unblurred tests; retrospective
Tasks: generic exploratory interest-point test before the task-centric test
Questions: “Look freely”, “Where did you look?”, “Where would you look to do this?”, “What are the most interesting points?”
Key report: based on the interest-point plot (qualitative, formative, 5 users) rather than the heat-map (quantitative, summative, 30 users)
Implementation: static web app, dynamic URL, provision for Areas of Interest (AOIs)
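The AOI provision mentioned in the implementation bullet amounts to hit-testing each marked interest point against rectangular regions. A minimal sketch, assuming rectangular AOIs and pixel coordinates; the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """Rectangular Area of Interest in page pixel coordinates."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def classify_points(points, aois):
    """Map each marked interest point to the first AOI containing it (or None)."""
    return [next((a.name for a in aois if a.contains(px, py)), None)
            for px, py in points]

# Hypothetical layout: a 'logo' AOI and a 'price' AOI on an 800x600 page
aois = [AOI("logo", 0, 0, 120, 60), AOI("price", 600, 400, 150, 50)]
points = [(650, 420), (30, 20), (400, 300)]
print(classify_points(points, aois))  # → ['price', 'logo', None]
```

Counting the `None` entries gives the interest points that fell outside every defined AOI, which is useful when aggregating plots across the 5 users.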
42. Statistical comparison (between subjects)
Eye-tracking (ET) vs. interest-point plot (IP)
ET vs. interest-point map (IM)
IP vs. IM (number of users)
Squinted vs. unsquinted
Exploratory vs. task-centric
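Each of the planned between-subjects comparisons above can be run as a two-sample test on a per-user measure. A minimal stdlib sketch of Welch's t statistic (which does not assume equal variances); the data are invented for illustration, and a full test would also need the Welch-Satterthwaite degrees of freedom to obtain a p-value:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(group_a, group_b):
    """Welch's t statistic for a between-subjects comparison, e.g. per-user
    hit counts on the target AOI under squinted vs. unsquinted conditions."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

# Hypothetical per-user counts of interest points landing on the target AOI
squinted   = [3, 4, 2, 5, 4, 3]
unsquinted = [5, 6, 4, 6, 5, 7]
print(round(welch_t(squinted, unsquinted), 2))  # → -3.3
```

With the small samples the recommendations call for (5-6 users per condition), such tests will be underpowered; the qualitative interest-point plot remains the primary deliverable, with the statistic as a supporting check.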