Eye Tracking Within the Packaging Design Workflow: Interaction with Physical and Virtual Shelves

Chip Tonkin, Clemson University, tonkin@clemson.edu
Andrew D. Ouzts, Clemson University, aouzts@g.clemson.edu
Andrew T. Duchowski, Clemson University, duchowski@clemson.edu




ABSTRACT

Measuring consumers' overt visual attention through eye tracking is a useful method of assessing a package design's impact on likely buyer purchase patterns. To preserve ecological validity, subjects should remain immersed in a shopping context throughout the entire study. Immersion can be achieved through proper priming, environmental cues, and visual stimuli. While a complete physical store offers the most realistic environment, the use of projectors to create a virtual environment is desirable for efficiency, cost, and flexibility reasons. Results are presented from a study comparing consumers' visual behavior in the presence of either virtual or physical shelving, assessed through eye movement performance and process metrics and through subjective impressions. Analysis suggests a difference in visual search performance between environments even though the perceived difference is negligible.

Categories and Subject Descriptors

J.4 [Computer Applications]: Social and Behavioral Sciences

1. INTRODUCTION

The shift in consumer behavior over the past two decades is forcing dramatic changes in the way products are designed, packaged, and marketed. Presently 70% of consumer purchase choices are made at the shelf, 85% are made without even picking up a competitive item, and 90% are made after looking at just the front face of the packaging [4]. Package design therefore clearly plays a significant role in determining the success of a product, but as Clement points out, currently accepted design methodology understates this impact and does not include any objective method of assessing the product's visual impact on buying decisions. Eye tracking is a means of quantifying an observer's overt visual attention, and can be used to evaluate and compare visual search patterns of individuals in a variety of situations. This approach is readily applicable to the measurement of consumer behaviour in a retail environment when searching for a product on a store shelf. Companies such as Kraft Foods, PepsiCo, and Unilever regularly employ this technology in the development of new packaging and retail strategies; however, their methodologies and results are not available to the public [17]. For this approach to be useful to, and widely adopted by, industry, it needs to be an integral part of the design process, incorporated as a feedback loop attached to the creative step in the workflow (as opposed to a research study performed post facto). The goal is analogous to the performance simulations already used to evaluate package designs for shelf life, sustainability, and shipping capability; it follows that one would also want to accurately predict a design's consumer response.

The challenge is developing the workflow, equipment parameters, and environment that provide a realistic approximation to the retail shopping experience. The setup must be usable for a wide variety of products, remain cost effective, and provide meaningful results. Most eye tracking equipment and the bulk of published research suggest use of an all-in-one monitor system. This is sufficient for testing web usability and responses to printed ads and promotions, because the test stimuli readily fit the screens and the seating position of the subjects is not unnatural. However, it is not a realistic shopping simulation.

Alternatives to the all-in-one desktop system include self-standing eye tracking modules that can be used in front of physical samples or a projector screen. These have been used for retail studies, but the requirement of standing perfectly still in front of the shelf or projection screen makes this setup less than ideal for simulating the retail shopping experience. To better simulate the shopping experience, consumers should be allowed to "wander up and down the aisles" as they would in reality.

Obviously the most realistic consumer experience study ought to be performed in an actual retail store, and in some instances this may be possible, but the logistics of regularly rearranging a store to meet experimental conditions, and of controlling many other real-world variables, make this prospect infeasible for the majority of typical controlled studies.
Figure 1: Architectural renderings of the CUshop consumer experience laboratory (to be completed June
2011). The winning design was developed as part of the Spring 2011 Creative Inquiry class led by A. Hurley.


The CUshop consumer experience lab (depicted in Figure 1) will simulate browsing freedom within a realistic environment. The lab is designed as a fully self-contained environment with sliding glass doors, re-configurable shelving, a refrigerated section, and appropriate signage and window treatments to create a realistic consumer shelf simulation. While it is understood that the physical environment is ideal, the lab will also contain equipment to run studies within projected virtual environments, because projection offers cheaper and faster setup while preserving a high level of stimulus control. The purpose of this study is to evaluate consumers' visual behavior and subjective impressions as they perform a product search task when encountering either a virtual projected or a physical shelf unit.

The use of projectors to simulate an otherwise expensive or difficult environment is not unusual. Flight and driving simulators have been used successfully for training and research purposes, and while they are not able to achieve complete physical or photo-realism, they have served as viable predictors of behavior in various situations [15]. This paper compares and contrasts the impact of a virtual store shelf on consumer behavior to gauge the level of realism afforded. Although patterns of information acquisition when viewing an image are expected to be similar to those when viewing the real environment, the general level of performance in object memory tests has been shown to be better in the latter [8]. In this paper, visual search is examined in both the real and the virtual setting to test for performance differences, if any.

2. BACKGROUND

Russo and LeClerc describe the consumer's product selection process in three stages [13]. The first, dubbed "Orientation", relies on the ability to evaluate overall patterns, colors, and shapes in the scene. Once an interesting area is identified, the consumer transitions to "Evaluation", in which the focus is on a small number of items. At this stage information is likely processed much more intently and in a serial fashion [4]. The last phase of the process defined by Russo and LeClerc is the "Verification" stage. This is the point at which the consumer verifies that the product meets their needs, makes pricing comparisons, and garners assurance that it was the right choice. Currently, the consumer's rationale for making product selection decisions is studied using standard interview techniques, focus groups, and observation, but being able to objectively determine when, how long, and what attracts attention could give much more precise and actionable information than the softer, subjective responses typically garnered from a focus group [21].

Assuming recorded visual attention follows the fovea [7], eye tracking can be a valuable tool for assessing consumer attention in shopping environments [17]. In cases where there is clear brand recognition, attention-catching packaging (imagery, colors, etc.) has less impact, but when brand is not a major consideration, packaging that captures attention is key in the buying decision [16, 12]. Buying decisions are based on a combination of brand recognition and what Chandon et al. coined "visual equity" [2]. This term refers to the incremental consideration given to items that attract a buyer's attention: while a consumer enters with a certain amount of "memory equity" related to their needs and understanding of brand value, this can be changed at the point of decision by what catches their attention. Eye tracking is a good tool for measuring this effect, as Johansen and Hansen discovered during webpage navigation, noting that individual recollection of what attracted attention, and of the order in which things were seen, was not nearly as accurate as the recorded eye movements themselves [6].

Because shoppers may not remember what they saw, or perhaps because they may not be willing to honestly divulge the reasons for their decisions, the practice of measuring "brand recall" as typically done in marketing studies is largely meaningless. According to Chandon et al., brand recall is overwhelmingly driven by brand familiarity and, oddly, eye fixations on products within a given market segment can enhance brand recall for the target product whether it is present in the study or not [2]. They found that major brands tended to inhibit the recollection of minor brands, while conversely the viewing of minor brands tended to enhance the recall of major brands.

Young makes the point that the most important factor in achieving applicable results is that the consumer must be kept in a shopping context [22]. He stated that "when a shopper is removed from this context, she often leaves behind the shopping mindset and, instead, takes on an art director's aesthetic mentality." He compares this to a "beauty contest" in which the most aesthetically pleasing package tends to win, even though this is not typically the attribute that actually decides purchase decisions at the shelf. This lack of realism has been a significant problem (noted or otherwise) in practically all of the consumer shelf studies thus far.

Russo and LeClerc noted that the mean decision time in their experiment was well above the industry norm (30 seconds vs. 12 seconds) and gave several likely experimental-setup reasons for their subjects' slower behavior [13]. In reviewing these findings along with those of other experiments, Clement found that they presented serious validity problems because they came from laboratory experiments that poorly simulated real-world conditions [4]. Subjects were sitting in chairs looking at pictures of packages or viewing relatively small projected images that were not accurate for size or visual angle.
Figure 2: Physical and virtual viewing dimensions (4.0 × 1.25 m display area at an elevation of 0.75 m, viewed from 2.5 m) with example participant searching for target item.


Even if subjects are shown an accurate picture with objects subtending the same visual angle as a real-life product, performance on spatial tasks improves significantly as the image becomes larger [14]. This supports the notion that peripheral vision is used to improve search capability. When searching, the consumer visually perceives the environment (i.e., the store shelf) in toto and in parallel, orienting selective attention [4]. This can be compared to one's ability to hear and "feel" the overall surrounding sounds or to selectively listen to a particular voice in a crowd, but not do both at the same time [3]. The size of the visual field is therefore likely to influence visual search, e.g., a small field is likely to restrict parafoveal preview benefit. Construction of a visually realistic shopping environment is thus likely to matter to any study of shoppers' visual behavior.

Factors such as visual realism and visual field size need to be considered in the CUshop design and methodology. To achieve ecological validity, we believe the laboratory must look and feel like a shopping environment, the eye tracking equipment must be unobtrusive and flexible, and the task must be structured and primed in such a way that the subject carries the mindset of a consumer throughout the length of the study.

To our knowledge, all consumer-related eye tracking studies thus far have been restricted to environments where the visual display was projected on a screen. Studies most similar to ours include those of Lundberg, Whitney et al., and Chandon et al. [9, 18, 3]. Lundberg proposed development of the Packaging Media Lab, in which an eye tracker would be used while a shopper viewed a shelf of products projected on a screen. The lab was eventually designed by The Packaging Arena, Ltd., and built within the Bergvik shopping centre in Karlstad, Sweden. Whitney et al. constructed the Balance NAVE Automatic Virtual Environment, a wide-field-of-view projection-based system consisting of three back-projected screens. Their purpose was not to test shopping decisions per se; rather, it was to test the effect of navigating the VR grocery environment on participants with and without vestibular dysfunction (no eye tracker was used). Chandon et al. used an eye tracker while participants looked at planograms (shown on a single 4′×5′ screen, 80″ away from the viewer) to test the influence of the number and position of shelf facings.

In the present study, we measure the differences in visual behavior between a virtual environment and the physical counterpart from which it is derived.

3. METHODOLOGY

The effect of physical or virtual environment was measured on performance (visual search), process (eye movements), and subjective measures (i.e., the feeling of presence within each environment, and preference). The main task was search for a target item, with the main experimental factor being environment type.

Stimulus. Two shelving environments were created for the experiment. The physical shelf was a 3.6 m (141″) aisle made with a Gondola 0.6 m (23″) base system, constituting a 2 m (78″) tall shelving system with four 0.4 m (16″) deep upper shelves (this was used store shelving removed from a major US retailer). The shelf was populated with real physical cereal boxes, with two fabricated cereal brands used as search targets.

The virtual environment was a snapshot of the physical shelf projected on a wall. The image was captured by a Canon EOS Rebel T1i 500D camera mounted on a tripod approximating the eye level of an average-height US adult (1.7 m (67″) [11]). The image was then corrected for geometric distortion caused by the lens, cropped, resampled to pixel dimensions of 2560×800, and displayed across two Epson BrightLink 450Wi projectors, chosen for their brightness and short throw distance, which eliminated shadow interference when standing in front of the display.

In both physical and virtual presentations of the cereal shelf, care was taken to present the participant with the same apparent view. In both instances the environment measured 4.0×1.25 m (160″×49″) at an elevation of 0.75 m (30″) off the ground, as sketched in Figure 2. In both physical and virtual search tasks, participants stood centered at a distance of 2.5 m (98″) from either display.

The stimuli (see Figure 3) used as search targets were cereal boxes made especially for this study to preclude familiarity with the products; the artificial designs ensured that no box could have been known a priori to any of the participants. Each box measured 22×28 cm (8.5″×11″) and matched the dimensions of a box on the projector wall. Figure 4 shows one of the physical cereal boxes matching the dimensions of its projected counterpart.

Yellow and black price tags, visible in Figure 4, were also created specifically for this study and displayed below every distinct cereal box. Tobii's infra-red (IR) markers were placed atop the darker portions of the price tags in an effort to blend their appearance.
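As a quick arithmetic check of the matched viewing geometry described above, the short R sketch below computes the visual angles subtended by the 4.0 × 1.25 m display and by a single 22 × 28 cm box at the 2.5 m viewing distance, along with the approximate pixel footprint of one box in the 2560×800 projected image. The only assumption beyond the figures given in this section is that the projected image spans the full display area.

    # Viewing geometry from the Stimulus description above (all lengths in metres).
    view_dist <- 2.5     # viewing distance
    disp_w    <- 4.0     # display width
    disp_h    <- 1.25    # display height
    box_w     <- 0.22    # cereal box width
    box_h     <- 0.28    # cereal box height

    # Visual angle (degrees) subtended by an object of size s at distance d.
    visual_angle <- function(s, d) 2 * atan(s / (2 * d)) * 180 / pi

    visual_angle(disp_w, view_dist)   # ~77.3 deg: full shelf width
    visual_angle(disp_h, view_dist)   # ~28.1 deg: shelf height
    visual_angle(box_w,  view_dist)   # ~5.0 deg: width of one box

    # Approximate pixel footprint of one box, assuming the 2560x800 image
    # spans the full 4.0 x 1.25 m display area (640 px per metre either way).
    c(width_px = box_w * 2560 / disp_w, height_px = box_h * 800 / disp_h)   # ~141 x 179 px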
Figure 3: Artificial cereal boxes designed and constructed specifically for the experiment.

Figure 4: Physical cereal box held against its counterpart projected in the virtual environment.


Apparatus. Eye movements were captured using Tobii Glasses, a head-mounted eye tracking system resembling a pair of glasses (see Figure 5(a)). The tracker is monocular (right eye only), sampling at 30 Hz with a 56° × 40° recording visual angle. The Tobii Glasses were used in conjunction with two other pieces of hardware: the Recording Assistant and IR markers. The Recording Assistant is a small device (4.7″×3.1″×1.1″) that attaches to the glasses and is used both to calibrate the eye tracker and to store recorded eye movement and video data on a mini-SD card. IR markers (see Figure 5(b)) are used to delineate an Area of Analysis (AOA), a plane determined by the placement of four or more IR markers, similar in concept to the Area/Region of Interest (A/ROI) commonly used in eye tracking research to delineate sections of the stimulus within which filtered eye movements, i.e., fixations, are counted. The difference between an AOA and an AOI is that an AOA exists in physical space and is required for data aggregation when the glasses are used. An IR marker serves this function only when attached to an IR marker holder; otherwise, it operates in calibration mode and emits a visible (green) light for calibration.

Figure 5: (a) Tobii Glasses, Recording Assistant, and (b) IR marker. Courtesy of Tobii Technology.
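Since fixations are ultimately counted within an A/ROI defined on the AOA plane, the minimal R sketch below shows how the two metrics introduced later in Section 3.1 (time to first fixation on the target, and the number of fixations prior to it) can be derived from a list of fixations ordered in time. The data frame, its columns (start_ms, x, y in AOA coordinates), and the AOI rectangle are hypothetical stand-ins, not the actual Tobii Studio export format.

    # Hypothetical fixation list: one row per fixation, in temporal order,
    # with onset time (ms) and position in AOA coordinates.
    fixations <- data.frame(
      start_ms = c(0, 180, 420, 700, 950),
      x        = c(0.10, 0.48, 0.55, 0.81, 0.83),
      y        = c(0.50, 0.52, 0.40, 0.22, 0.20)
    )

    # Hypothetical target AOI rectangle on the AOA plane.
    aoi <- c(xmin = 0.78, xmax = 0.90, ymin = 0.15, ymax = 0.30)

    in_aoi <- with(fixations,
                   x >= aoi["xmin"] & x <= aoi["xmax"] &
                   y >= aoi["ymin"] & y <= aoi["ymax"])

    first_hit <- which(in_aoi)[1]                       # index of first fixation on the target
    time_to_first_fix <- fixations$start_ms[first_hit]  # time to 1st fixation (ms)
    fixations_prior   <- first_hit - 1                  # fixations issued before the target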
Calibration. Calibration using the Tobii Glasses is somewhat different from the traditional calibration procedures employed with table-mounted (fixed, or more commonly "remote") eye trackers. To calibrate the glasses, an IR marker is used in calibration mode. The experimenter first asks the participant to stand at a distance of 1 m from a flat, vertical surface (e.g., a wall) and begins the calibration process using the Recording Assistant. The Recording Assistant then displays a 3×3 grid of points to the experimenter, who must position the IR marker at each corresponding point on the wall. During this process, the participant is instructed to hold their head steady and follow the green light emitted by the IR marker with their eyes.

Experimental Design. The experiment consisted of a 2 (environment) × 2 (box type) × 2 (box placement) design. The environment was either the physical or the virtual cereal shelf, the box type included two versions of a cereal box (Figure 3), and box placement featured the target box at one of two locations (left vs. right). A center target position was avoided because it is likely to be fixated first [20]. Each participant performed two trials, with environment and box type reversed in the second trial, counterbalancing trial combinations.
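A compact way to view this design is as a condition grid. The R sketch below enumerates the eight cells and shows one illustrative way of producing a participant's second trial with environment and box type reversed, as described above; the box-type labels and the allocation scheme are hypothetical, not the authors' exact assignment procedure.

    # Full 2 (environment) x 2 (box type) x 2 (target placement) condition grid.
    conditions <- expand.grid(
      environment = c("physical", "virtual"),
      box_type    = c("A", "B"),
      placement   = c("left", "right")
    )
    nrow(conditions)   # 8 cells in total

    # Illustrative reversal of environment and box type for a second trial.
    reverse_trial <- function(trial) {
      within(trial, {
        environment <- ifelse(environment == "physical", "virtual", "physical")
        box_type    <- ifelse(box_type == "A", "B", "A")
      })
    }
    first_trial <- data.frame(environment = "physical", box_type = "A", placement = "left")
    reverse_trial(first_trial)   # virtual environment, box B, same placement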
Participants. The study recruited 42 participants from Packaging Science and Computer Science classes. Ten participants were excluded from analysis due to calibration issues (specifically, we found that calibration points on the left side of the grid were difficult for these participants to fixate, a possible consequence of the monocular nature of the Tobii Glasses). Four additional participants were excluded for incorrectly performing the task on at least one trial—data showed post facto that these participants never fixated the target box, so their data could be considered off-target or erroneous. Analysis therefore considered only successful trials, consisting of data captured from 28 participants (18 male, 14 female). These participants' ages ranged from 20 to 42 (median 22).

Procedure. Before starting the experiment, participants were asked to fill out a basic demographic questionnaire (gender, age, use and type of corrective lenses, etc.). They were then walked to an unmarked white wall for the calibration process. Participants stood 1 m (39″) from the wall and underwent the 9-point calibration procedure. Ten participants could not achieve a satisfactory calibration and were thanked for their participation and dismissed.

Next, participants were given instructions for their first task. If their task was the physical space task, the researcher showed the participant one of the two target boxes. The participant was told that their task would be to find this box on a physical shelf and verbally announce its price. They were given as much time as desired to examine their target box in as much detail as they wished (no participant spent more than 30 s). The participant was also shown examples of the price tags' appearance. They were then told the location of the physical shelf and asked to walk directly to a marker on the ground (2.5 m (98″) from the stimulus) before looking up at the shelf. When ready, they were asked to look straight ahead so the glasses could auto-adjust for recording to begin. Finally, the experimenter walked with the participant to the shelf and recorded eye movements until the participant announced the price of the object. The physical shelving area was concealed from the participant prior to this task, to avoid preview benefit.

For the virtual space task, a similar procedure was followed, with the only difference being that the participant was walked to a marker 2.5 m (98″) from a projector wall, and the image on the projector was changed from a blank image to the stimulus image when the participant was ready.

After the first task, the participant was given a custom-tailored Witmer-Singer presence questionnaire [19]. The participant was given the option to remove the glasses while they took the questionnaire if they felt uncomfortable wearing them. Those who chose to remove them had to repeat the calibration procedure before the second task; however, only one participant elected to do so. Participants were then given their second task, with the same instructions. After completion of the second task, they were again given the presence questionnaire, but told that it referred only to their experience in the second task (be it physical or virtual). Finally, the participant was given a post-experiment questionnaire to collect subjective information (e.g., comfort) and any comments related to the study.

Search in the environments was counterbalanced such that half the participants searched within the physical environment first and half first searched in the virtual. Position of the target box was also counterbalanced, so that one quarter of the trials contained the target at left, another quarter at right, and vice versa (corresponding images of the physical environment were used in the virtual projection).

3.1 Dependent Measures

Eye Tracking Metrics. The primary metric of interest was time to first fixation on the target box. This metric effectively measures time to task completion, or performance of the task. Additionally, we measured the number of fixations prior to the first fixation on the target. We considered, but rejected, other eye tracking metrics such as fixation duration. In this type of visual search task, a participant's eye movements typically consist mostly of saccades until the target is found. After the target is found, the number or duration of fixations on it gives us no further information—we were mainly interested in whether the time to locate the target differed between environment types.

Presence Questionnaire. A presence questionnaire based on Witmer and Singer's version 3.0, tailored to the present experiment, was used to gauge participants' subjective impressions of both environments, specifically along four subscales: immersion, involvement, sensory fidelity, and interface quality. Four questions were chosen from the immersion and involvement subscales and three from the sensory fidelity and interface quality subscales. All questions were administered along a 7-point Likert scale. Questions relating to non-visual senses were omitted.

4. RESULTS

Eye movement data, in the form of the number of fixations and the time to first fixation on the target AOI, were exported from Tobii Studio for analysis with R [1].

A repeated-measures three-way ANOVA of time to first fixation revealed significance of the main effect of environment (F(1,27) = 22.77, p < 0.01). No other significant effects (of box type or placement) were detected (see Figures 6(a) and 6(b)).

A repeated-measures three-way ANOVA of the number of fixations prior to the first fixation on the target also revealed significance of the main effect of environment (F(1,27) = 16.56, p < 0.01) but not of box type or placement (see Figures 6(c) and 6(d)).

Results from the modified Witmer-Singer Presence Questionnaire were analyzed following Madathil and Greenstein's analytical approach [10], by first computing the mean responses to the questions related to each of the four subscales and then comparing each of these means (of means) between physical and virtual trials via a Welch two-sample t-test. No significant differences were observed between the means of any of the four subscales tested (see Table 1 and Figure 9). A trend toward higher perceived fidelity appears to favor the physical environment, but, on average, the effect is negligible. Furthermore, modal responses to the subjective post-experiment questionnaire show neutral preferential attitudes toward either the physical or the virtual (projector) task (see Table 2).
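Section 4 names the analyses but not the code behind them; the R sketch below reproduces the two steps on synthetic data: a repeated-measures ANOVA of time to first fixation with environment, box type, and placement as factors, and a Welch two-sample t-test on a presence-subscale mean of means. The data layout, error specification, and numbers are illustrative assumptions, not the authors' actual data or exact model.

    set.seed(1)
    n <- 28   # number of participants retained for analysis

    # Synthetic long-format data standing in for the Tobii Studio export:
    # one row per trial, two trials per participant, with environment and box
    # type reversed on the second trial and placement varied across participants.
    d <- data.frame(
      participant = factor(rep(1:n, each = 2)),
      environment = rep(c("physical", "virtual"), times = n),
      box_type    = rep(c("A", "B", "B", "A"), length.out = 2 * n),
      placement   = rep(rep(c("left", "right"), each = 4), length.out = 2 * n)
    )
    # Synthetic response: virtual searches take roughly 2 s longer on average.
    d$time_to_first_fix <- rnorm(2 * n, mean = ifelse(d$environment == "physical", 2, 4), sd = 1)

    # Repeated-measures ANOVA with participant as the random blocking factor
    # (the exact error specification of the original analysis is not reported).
    summary(aov(time_to_first_fix ~ environment * box_type * placement +
                  Error(participant), data = d))

    # Presence subscale: one mean-of-means per participant and environment,
    # compared with a Welch two-sample t-test (t.test's default).
    pres <- data.frame(
      environment = rep(c("physical", "virtual"), each = n),
      involvement = c(rnorm(n, mean = 6.1, sd = 0.6), rnorm(n, mean = 5.4, sd = 0.6))
    )
    t.test(involvement ~ environment, data = pres)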
Table 1: Mean responses to the tailored Witmer-Singer presence questionnaire, marked on a 7-point Likert
scale with 1 indicating most negative agreement and 7 indicating most positive agreement to the given
question regarding experiences in either virtual or physical environment.
    #    Question                                                                                          Physical environment   Virtual environment
                                                                   Involvement
    1.    My interactions with the shelving environment seemed natural.                                          6.1            4.6
    3.    The visual aspects of the environment involved me.                                                     5.8            5.1
    8.    I was able to completely survey or search the environment using vision.                                6.4            6.2
    11.   I felt involved in the search task.                                                                    6.0            5.8
                                                                              group means (means of means)       6.1            5.4
                                                                    Immersion
    2.    All my senses were completely engaged.                                                                 4.6            3.8
    4.    I was completely aware of events occurring in the real world around me.                                5.9            5.1
    6.    The information coming from my visual sense felt inconsistent or disconnected.                         2.5            3.2
    12.   I was distracted by display devices.                                                                   2.9            3.2
                                                                              group means (means of means)       4.0            3.8
                                                                 Sensory Fidelity
    7.    My experiences with the shelving system seemed consistent with my real-world experience.               5.9            4.6
    9.    I felt that I was able to examine objects closely.                                                     5.4            4.9
    10.   I felt that I was able to examine objects from multiple viewpoints.                                    4.5            3.9
                                                                              group means (means of means)       5.3            4.5
                                                                 Interface Quality
    5.    I was completely aware of any display and control devices.                                             5.9            5.1
    13.   Visual display quality interfered or distracted me from completing my task.                            2.2            3.0
    14.   I was able to concentrate on the search task and not on the devices used to perform the task.          6.1            5.2
                                                                              group means (means of means)       4.5            4.6


Table 2: Modal responses to subjective post-experiment questions, marked on a 7-point Likert scale with 1
indicating strong disagreement and 7 indicating strong agreement.
           #                                                  Question                                                 mode
           1.   The eye tracking glasses felt comfortable.                                                              6
           2.   The eye tracking glasses distracted me and hindered my ability to perform my tasks.                     1
           3.   I preferred the projector search task to the physical search task.                                      4
           4.   I understood what was expected of me in each task.                                                      7
           5.   I preferred the physical search task to the projector search task.                                      4


5. DISCUSSION

Results indicate that the physical environment afforded significantly faster search performance than the virtual projected image. The eye tracking data provide clear evidence of the discrepancy in performance: because the number of fixations generally coincides with the time taken to complete visual search, it is clear that participants took longer in the virtual environment because they had to issue a larger number of fixations. This is visualized in Figure 8 and shows the reason for the difference in time to task completion, which might not have been evident had this been measured with a stopwatch (the eye tracking data provide clear evidence of active visual search—participants were not simply daydreaming or staring at a fixed point).

Eye movement data also suggest that individuals may have approached the search task in a fundamentally different way over the projected image. Heatmap visualizations of aggregated scanpaths are shown in Figure 7. Note that the heavily fixated regions in the four corners represent the possible locations of the boxes—the image chosen for the visualization is one of the layouts used in the experiment and serves in Figure 7 as a representative backdrop for aggregate data from all trials. In the virtual environment, it appears that most viewers may have begun their search near the center, but there is no such obvious trend in the physical environment. What is particularly interesting about this result is that Chandon et al. found that objects located near the center of the "shelf" can be seen more often but not actually considered (for purchase) in corresponding percentages. Their finding did not fit with other data suggesting that attention correlates fairly well with consideration. Since they did not use an actual shelf in their study (only a projected image), they speculated that this occurred because people might tend to orient their attention to the center of an image during a transition, increasing the number of fixations in that area (as is seen in Figures 7(b) and 8(b)). Our findings suggest that this might not occur as consistently in physical environments.

A key reason for the observed difference in visual search performance may be the fidelity of the projected scene. Although we were careful to control for apparent image size, the projected image clearly differs from its physical counterpart. The projectors offer relatively poor brightness and contrast reproduction of the physical scene, which is much richer in terms of visual elements (color gamut, contrast, and visual depth). The human eye can perceive a very high dynamic range contrast ratio, e.g., 100,000:1, with static perception of about 10,000:1 at any given time. The projectors' lumens rating of 2,500 and contrast ratio of 2,000:1 (http://www.epson.com/brightlink) may have impeded visual search in comparison to what was seen in the physical environment. Projectors are available with greater contrast ratios and spatial resolution (e.g., the 12,000:1, 1080p high-definition PowerLite home cinema projector), but these are usually "long-throw" projectors and would cause shadow-interference problems in the CUshop virtual shopping experience being constructed.

What is curious in our study is the lack of perceived differences in responses to the post-task presence and post-experiment preference questionnaires. Figure 9 summarizes the data found in Table 1 and shows that while the physical environment appears to have been rated slightly higher in terms of the presence subscales, the differences, along with modal responses to preference, are negligible. The projected image may have failed to provide either physical realism (in which the image provides the same visual stimulation as the scene) or photo-realism (in which the image produces the same visual response as the scene), but it may have contained sufficient functional realism (in which the image provides the same visual information) [5] to perform the task, albeit consistently more slowly (note that our data analysis pertains to all successful trials).
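Figure 7's heatmaps summarize where fixations landed across all participants. The R sketch below shows one way such an aggregate heatmap can be produced from pooled fixation coordinates using a 2-D kernel density estimate (MASS::kde2d); the coordinates here are synthetic stand-ins on the 4.0 × 1.25 m display plane, not the recorded data.

    library(MASS)   # provides kde2d()

    set.seed(2)
    # Synthetic pooled fixation coordinates (metres on the display plane),
    # clustered near the four possible box locations plus the display centre.
    centres <- rbind(c(0.6, 0.95), c(3.4, 0.95), c(0.6, 0.30), c(3.4, 0.30), c(2.0, 0.60))
    fixations <- do.call(rbind, lapply(seq_len(nrow(centres)), function(i)
      cbind(rnorm(80, centres[i, 1], 0.15), rnorm(80, centres[i, 2], 0.10))))

    # 2-D kernel density estimate over the display area, drawn as a heatmap.
    dens <- kde2d(fixations[, 1], fixations[, 2], n = 200, lims = c(0, 4.0, 0, 1.25))
    image(dens, col = heat.colors(64), xlab = "x (m)", ylab = "y (m)",
          main = "Aggregate fixation heatmap (synthetic data)")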
Figure 6: Results: performance and process metrics. (a) Time to 1st fixation on target (seconds, with SE) by environment; (b) time to 1st fixation by target placement × environment; (c) number of fixations prior to 1st fixation on target (with SE) by environment; (d) number of fixations prior to 1st fixation by target placement × environment.

Figure 7: Heatmaps (all participants) in either environment: (a) physical environment, (b) virtual environment.

Figure 8: Scanpaths (all participants) in either environment: (a) physical environment, (b) virtual environment.

Figure 9: Results: presence questionnaire. Likert score means of means (with SE) for the involvement, immersion, sensory fidelity, and interface quality subscales in the physical and virtual environments.
6. CONCLUSION

Results were presented from a study comparing consumers' visual behavior when searching for an item located on a virtual or physical shelf. These indicate that the physical environment afforded significantly faster search performance than the virtual projected image. Eye tracking data corroborate this finding by indicating a significantly larger number of fixations made over the virtual shelf.

One reason for the observed difference in visual search performance may be the poor fidelity of the projected scene in comparison to the physical shelf. It is possible that the projectors' relatively low contrast ratio impeded visual search. Better projectors and more photo-realistic simulations may improve congruence of eye movement metrics, but one must also consider the overall environment in which the participant is immersed. Advancements in other forms of simulation (automotive and flight, for instance) have come not from improvements in visual quality (e.g., resolution, contrast), but from an expanded field of view, realistic motion, and sound. Although visual fidelity will continue to play a significant role in the shopping simulation, the remaining senses must also be addressed. We believe construction of a physical space filled with tactile objects, rich visual elements, and sounds, through which participants navigate, will go a long way toward mitigating the sense of standing in front of a projection screen.

Physical shelves offer a step closer toward physical realism, but they are costly to set up and to stock. Given sufficient resources, such shelves offer better ecological validity. However, the lack of a perceived difference between the environments suggests that projected replicas may be sufficient for consumer testing (e.g., visual search), since they provide as much visual information. Our findings suggest that virtual presentation of the stimulus offers a viable alternative to a physical mock-up, so long as one maintains awareness of the potential effect on performance relative to performance in the field. If the effect is consistent, however, then relative measurements of performance within virtual environments are still likely to be valid.

Acknowledgments

We would like to thank Andrew Hurley and his Spring 2011 Creative Inquiry team for their help in designing and branding the CUshop consumer experience lab. We are also grateful to Harris A. Smith for his generous support of the lab.

7. REFERENCES

[1] J. Baron and Y. Li. Notes on the use of R for psychology experiments and questionnaires. Online notes, 09 November 2007. URL: http://www.psych.upenn.edu/~baron/rpsych/rpsych.html.
[2] P. Chandon, J. W. Hutchinson, E. T. Bradlow, and S. Young. Visual Marketing: From Attention to Action. Lawrence Erlbaum Assoc., Mahwah, NJ, 2007.
[3] P. Chandon, J. W. Hutchinson, E. T. Bradlow, and S. Young. Does in-store marketing work? Effects of the number and position of shelf facings on brand attention and evaluation at the point of purchase. Journal of Marketing, 73:1–17, 2009.
[4] J. Clement. Visual influence on in-store buying decisions: an eye-track experiment on the visual influence of packaging design. Journal of Marketing Management, 23(9):917–928, 2007.
[5] J. A. Ferwerda. Three varieties of realism in computer graphics. In Human Vision and Electronic Imaging, pages 290–297, Bellingham, WA, 2003. SPIE.
[6] S. A. Johansen and J. P. Hansen. Do we need eye trackers to tell where people look? In CHI '06 Extended Abstracts on Human Factors in Computing Systems, pages 923–928, New York, NY, 2006. ACM.
[7] A. F. Kramer and J. S. McCarley. Oculomotor behaviour as a reflection of attention and memory processes: Neural mechanisms and applications to human factors. Theoretical Issues in Ergonomics Science, 4(1–2):21–55, 2003.
[8] M. F. Land and B. W. Tatler. Looking and Acting: Vision and Eye Movements in Natural Behavior. Oxford University Press, New York, NY, 2009.
[9] E. Lundberg. Packaging Media Lab: A Proposal for a Packaging Evaluation Environment for Conducting Consumer Studies. Master's thesis, Uppsala University, Uppsala, Sweden, Sep. 2004.
[10] K. C. Madathil and J. S. Greenstein. Synchronous remote usability testing – a new approach facilitated by virtual worlds. In CHI '11: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, 2011. ACM Press.
[11] M. A. McDowell, C. D. Fryar, C. L. Ogden, and K. M. Flegal. Anthropometric reference data for children and adults: United States, 2003–2006. Technical report, National Health Statistics, 2008.
[12] U. Orth and K. Malkewitz. Holistic package design and consumer brand impressions. Journal of Marketing, 72:64–81, 2008.
[13] E. J. Russo and F. LeClerc. An eye-fixation analysis of choice processes for consumer nondurables. Journal of Consumer Research, 21(2):274, 1994.
[14] D. S. Tan, D. Gergle, P. G. Scupelli, and R. Pausch. Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, 13:71–99, 2006.
[15] J. Tornros. Driving behavior in a real and a simulated road tunnel – a validation study. Accident Analysis and Prevention, 30(4):497–503, 1998.
[16] R. Underwood, N. Klein, and R. Burke. Packaging communication: Attentional effects of product imagery. Journal of Product & Brand Management, 10(7):403–422, 2001.
[17] M. Wedel and R. Pieters. A review of eye-tracking research in marketing. In Review of Marketing Research. Emerald Group, Bingley, UK, 2008.
[18] S. L. Whitney, P. J. Sparto, L. F. Hodges, S. V. Babu, J. M. Furman, and M. S. Redfern. Responses to a virtual reality grocery store in persons with and without vestibular dysfunction. CyberPsychology & Behavior, 9(2), 2006.
[19] B. G. Witmer and M. J. Singer. Measuring presence in virtual environments: A presence questionnaire. Presence, 7(3):225–240, June 1998.
[20] D. S. Wooding. Fixation maps: Quantifying eye-movement traces. In ETRA '02: Proceedings of the 2002 Symposium on Eye Tracking Research & Applications, pages 31–36, New York, NY, 2002. ACM.
[21] S. Young. Packaging design, consumer research, and business strategy: The march toward accountability. Design Management Journal, 10(3):10–14, 2002.
[22] S. Young. Five principles for effective packaging research. Brand Packaging, 18(1):24–26, 2005.

Spakov.2011.comparison of gaze to-objects mapping algorithmsmrgazer
 
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...mrgazer
 
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environments
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environmentsMardanbegi.2011.mobile gaze based screen interaction in 3 d environments
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environmentsmrgazer
 
Koesling.2011.towards intelligent user interfaces anticipating actions in com...
Koesling.2011.towards intelligent user interfaces anticipating actions in com...Koesling.2011.towards intelligent user interfaces anticipating actions in com...
Koesling.2011.towards intelligent user interfaces anticipating actions in com...mrgazer
 
Skovsgaard.2011.evaluation of a remote webcam based eye tracker
Skovsgaard.2011.evaluation of a remote webcam based eye trackerSkovsgaard.2011.evaluation of a remote webcam based eye tracker
Skovsgaard.2011.evaluation of a remote webcam based eye trackermrgazer
 
Engelman.2011.exploring interaction modes for image retrieval
Engelman.2011.exploring interaction modes for image retrievalEngelman.2011.exploring interaction modes for image retrieval
Engelman.2011.exploring interaction modes for image retrievalmrgazer
 

Más de mrgazer (10)

Yamamoto.2011.hyakunin eyesshu a tabletop hyakunin-isshu game with computer o...
Yamamoto.2011.hyakunin eyesshu a tabletop hyakunin-isshu game with computer o...Yamamoto.2011.hyakunin eyesshu a tabletop hyakunin-isshu game with computer o...
Yamamoto.2011.hyakunin eyesshu a tabletop hyakunin-isshu game with computer o...
 
Van der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawingVan der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawing
 
Stellmach.2011.designing gaze supported multimodal interactions for the explo...
Stellmach.2011.designing gaze supported multimodal interactions for the explo...Stellmach.2011.designing gaze supported multimodal interactions for the explo...
Stellmach.2011.designing gaze supported multimodal interactions for the explo...
 
Paulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bedPaulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bed
 
Spakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithmsSpakov.2011.comparison of gaze to-objects mapping algorithms
Spakov.2011.comparison of gaze to-objects mapping algorithms
 
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
 
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environments
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environmentsMardanbegi.2011.mobile gaze based screen interaction in 3 d environments
Mardanbegi.2011.mobile gaze based screen interaction in 3 d environments
 
Koesling.2011.towards intelligent user interfaces anticipating actions in com...
Koesling.2011.towards intelligent user interfaces anticipating actions in com...Koesling.2011.towards intelligent user interfaces anticipating actions in com...
Koesling.2011.towards intelligent user interfaces anticipating actions in com...
 
Skovsgaard.2011.evaluation of a remote webcam based eye tracker
Skovsgaard.2011.evaluation of a remote webcam based eye trackerSkovsgaard.2011.evaluation of a remote webcam based eye tracker
Skovsgaard.2011.evaluation of a remote webcam based eye tracker
 
Engelman.2011.exploring interaction modes for image retrieval
Engelman.2011.exploring interaction modes for image retrievalEngelman.2011.exploring interaction modes for image retrieval
Engelman.2011.exploring interaction modes for image retrieval
 

Último

4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sectoritnewsafrica
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...Karmanjay Verma
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesManik S Magar
 
React Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkReact Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkPixlogix Infotech
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Landscape Catalogue 2024 Australia-1.pdf
Landscape Catalogue 2024 Australia-1.pdfLandscape Catalogue 2024 Australia-1.pdf
Landscape Catalogue 2024 Australia-1.pdfAarwolf Industries LLC
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Mark Simos
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Karmanjay Verma
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialJoão Esperancinha
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Kaya Weers
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Nikki Chapple
 

Último (20)

4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
 
React Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkReact Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App Framework
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Landscape Catalogue 2024 Australia-1.pdf
Landscape Catalogue 2024 Australia-1.pdfLandscape Catalogue 2024 Australia-1.pdf
Landscape Catalogue 2024 Australia-1.pdf
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorial
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
 

Eye Tracking Shelf Simulation Study Compares Virtual vs. Physical Shelves

1. INTRODUCTION

The shift in consumer behavior over the past two decades is forcing dramatic changes in the way products are designed, packaged and marketed. Presently 70% of consumer purchase choices are made at the shelf, 85% are made without even picking up a competitive item, and 90% are made after looking at just the front face of the packaging [4]. Thus package design clearly plays a significant role in determining the success of a product, but as Clement points out, currently accepted design methodology understates this impact and does not include any objective method of assessing the product's visual impact on buying decisions. Eye tracking is a means of quantifying an observer's overt visual attention, and can be used to evaluate and compare the visual search patterns of individuals in a variety of situations. This approach is readily applicable to the measurement of consumer behaviour in a retail environment when searching for a product on a store shelf.

An all-in-one desktop system is sufficient when the test stimuli readily fit the screen and the seating position of the subjects is not unnatural; however, this is not a realistic shopping simulation. Alternatives to the all-in-one desktop system include self-standing eye tracking modules that can be used in front of physical samples or a projector screen. These have been used for retail studies, but the requirement of standing perfectly still in front of the shelf or projection screen makes this setup less than ideal for simulating the retail shopping experience. To better simulate the shopping experience, consumers should be allowed to "wander up and down the aisles" as they would do in reality.

Obviously the most realistic consumer experience study ought to be performed in an actual retail store, and in some instances this may be possible, but the logistics of regularly rearranging a store to meet experimental conditions, and of controlling many other real-world variables, make this prospect infeasible for the majority of typical controlled studies.

The CUshop consumer experience lab (depicted in Figure 1) will simulate browsing freedom within a realistic environment. The lab is designed as a fully self-contained environment with sliding glass doors, re-configurable shelving, a refrigerated section, and appropriate signage and window treatments to create a realistic consumer shelf simulation. While it is understood that the physical environment is ideal, the lab will contain equipment to run studies within projected virtual environments because projection offers cheaper and faster setup while preserving a high level of stimulus control. The purpose of this study is to evaluate consumers' visual behavior and subjective impressions as they perform a product search task when encountering either a virtual projected or physical shelf unit.
Figure 1: Architectural renderings of the CUshop consumer experience laboratory (to be completed June 2011). The winning design was developed as part of the Spring 2011 Creative Inquiry class led by A. Hurley.

The use of projectors to simulate an otherwise expensive or difficult environment is not unusual. Flight and driving simulators have been used successfully for training and research purposes, and while they are not able to achieve complete physical or photo-realism, they have served as viable predictors of behavior in various situations [15]. This paper compares and contrasts the impact of a virtual store shelf on consumer behavior to gauge the level of realism afforded. Although patterns of information acquisition when viewing an image are expected to be similar to those when viewing the real environment, the general level of performance in object memory tests has been shown to be better in the latter [8]. In this paper, visual search is examined between the real and virtual environments to test for performance differences, if any.

2. BACKGROUND

Russo and LeClerc describe the consumer's product selection process in three stages [13]. The first, dubbed "Orientation", relies on the ability to evaluate overall patterns, colors, and shapes in the scene. Once an interesting area is identified, the consumer transitions to "Evaluation", in which the focus is on a small number of items. At this stage information is likely processed much more intently and in a serial fashion [4]. The last phase of the process defined by Russo and LeClerc is the "Verification" stage. This is the point at which the consumer verifies that the product meets their needs, makes pricing comparisons, and garners assurance that it was the right choice. Currently, the consumer's rationale for making product selection decisions is studied using standard interview techniques, focus groups, and observation, but being able to objectively determine when, how long, and what attracts attention could give much more precise and actionable information than the softer, subjective responses typically garnered from a focus group [21].

Assuming recorded visual attention follows the fovea [7], eye tracking can be a valuable tool for assessing consumer attention in shopping environments [17]. In cases where there is clear brand recognition, attention-catching packaging (imagery, colors, etc.) has less impact, but when brand is not a major consideration, packaging that captures attention is key in the buying decision [16, 12]. Buying decisions are based on a combination of brand recognition and what Chandon et al. coined "visual equity" [2]. This term refers to the incremental consideration given to items that attract a buyer's attention, so that while a consumer enters with a certain amount of "memory equity" related to their needs and understanding of brand value, this can be changed at the point of decision by what catches their attention. Eye tracking is a good tool for measuring this effect, as Johansen and Hansen discovered during webpage navigation, noting that individual recollection of what attracted attention and in what order things were seen was not nearly as accurate as a record of recorded eye movements [6].

Because shoppers may not remember what they saw, or perhaps because they may not be willing to honestly divulge the reasons for their decisions, the practice of measuring "brand recall" as typically done in marketing studies is largely meaningless. According to Chandon et al., brand recall is overwhelmingly driven by brand familiarity and, oddly, eye fixations on products within a given market segment can enhance brand recall for the target product whether it is present in the study or not [2]. They found that major brands tended to inhibit the recollection of minor brands while, conversely, the viewing of minor brands tended to enhance the recall of major brands.

Young makes the point that the most important factor in achieving applicable results is that the consumer must be kept in a shopping context [22]. He stated that "when a shopper is removed from this context, she often leaves behind the shopping mindset and, instead, takes on an art director's aesthetic mentality." He compares this to a "beauty contest" in which the most aesthetically pleasing package tends to win (this is not typically the attribute that actually decides purchase decisions at the shelf). This lack of realism has been a significant problem (noted or otherwise) in practically all of the consumer shelf studies thus far.

Russo and LeClerc noted that the mean decision time in their experiment was well above the industry norm (30 seconds vs. 12 seconds) and gave several likely experimental setup reasons for their subjects' slower behavior [13]. In reviewing these findings along with those of other experiments, Clement found that they presented serious validity problems because they were laboratory experiments that poorly simulated real-world conditions [4]. Subjects were sitting in chairs looking at pictures of packages or viewing relatively small projected images that were not accurate for size or visual angle.
Figure 2: Physical and virtual viewing dimensions with example participant searching for target item.

Even if subjects are shown an accurate picture with objects taking up the same amount of visual angle as a real-life product, performance on spatial tasks significantly improves as the image becomes larger [14]. This supports the notion that peripheral vision is used to improve search capability. When searching, the consumer visually perceives the environment (i.e., the store shelf) in toto and in parallel, orienting selective attention [4]. This can be compared to one's ability to hear and "feel" the overall surrounding sounds or to selectively listen to a particular voice in a crowd, but not do both at the same time [3]. The size of the visual field is therefore likely to influence visual search, e.g., a small field is likely to restrict parafoveal preview benefit. Construction of a visually realistic shopping environment is likely to matter to any studies of shoppers' visual behavior.

Factors such as visual realism and the visual field size need to be considered in the CUshop design and methodology. To achieve ecological validity, we believe it is necessary for the laboratory to look and feel like a shopping environment, the eye tracking equipment must be unobtrusive and flexible, and the task must be structured and primed in such a way that the subject carries the mindset of a consumer during the length of the study.

To our knowledge, thus far all consumer-related eye tracking studies have been restricted to environments where the visual display was projected on a screen. Studies most similar to ours include those of Lundberg, Whitney et al., and Chandon et al. [9, 18, 3]. Lundberg proposed development of the Packaging Media Lab, in which an eye tracker would be used while a shopper viewed a shelf of products projected on a screen. The lab was eventually designed by The Packaging Arena, Ltd., and built within the Bergvik shopping centre in Karlstad, Sweden. Whitney et al. constructed the Balance NAVE Automatic Virtual Environment, consisting of three back-projected screens providing a wide-field-of-view, projection-based system. Their purpose was not to test shopping decisions per se; rather, it was to test the effect of navigation through the VR grocery environment on participants with and without vestibular dysfunction (no eye tracker was used). Chandon et al. used an eye tracker when looking at planograms (shown on a single 4 ft × 5 ft screen, 80 in away from the viewer) to test the influence of the number of shelf facings and position. In the present study, we measure the differences in visual behavior between a virtual environment and its physical counterpart from which the virtual is derived.

3. METHODOLOGY

The effect of physical or virtual environment was measured on performance (visual search), process (eye movements), and subjective measures (i.e., the feeling of presence within each environment and preference). The main task was search for a target item, with the main experimental factor consisting of environment type.

Stimulus. Two shelving environments were created for the experiment. The physical shelf was a 3.6 m (141 in) aisle made with a Gondola 0.6 m (23 in) base system, constituting a 2 m (78 in) tall shelving system with four 0.4 m (16 in) deep upper shelves (this was used store shelving obtained from a major US retailer). The shelf was populated with real physical cereal boxes, with two fabricated cereal brands used as search targets.

The virtual environment was a snapshot of the physical shelf projected on a wall. The image was captured by a Canon EOS Rebel T1i 500D camera mounted on a tripod approximating the eye-level of an average-height US adult (1.7 m (67 in) [11]). The image was then corrected for geometrical distortion caused by the lens, cropped, and resampled to achieve pixel dimensions of 2560×800, and displayed across two Epson BrightLink 450Wi projectors, chosen for their brightness and short throw distance, which eliminated shadow interference when standing in front of the display.

In both physical and virtual presentations of the cereal shelf, care was taken to present the participant with the same apparent view. In both instances the environment measured 4.0×1.25 m (160 in × 49 in) at an elevation of 0.75 m (30 in) off the ground, as sketched in Figure 2. In both physical and virtual search tasks, participants stood centered at a distance of 2.5 m (98 in) from either display.

The stimuli (see Figure 3) used as search targets were cereal boxes made especially for this study to preclude familiarity with the products. Artificial cereal boxes were created to ensure that they could not have been known a priori to any of the participants. Each box measured 22×28 cm (8.5 in × 11 in) and matched the dimensions of a box on the projector wall. Figure 4 shows one of the physical cereal boxes matching the dimensions of its projected counterpart.

Yellow and black price tags, visible in Figure 4, were also artificially created for this study and displayed below every distinct cereal box. Tobii's infra-red (IR) markers were placed atop the darker portions of the price tags in an effort to blend their appearance.
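Matching the apparent size of the physical and projected stimuli can be sanity-checked with a short visual-angle calculation. The sketch below is illustrative only (it is not part of the study's tooling); it simply applies the standard 2·arctan(size / (2·distance)) formula to the dimensions reported above.

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size
    viewed head-on from a given distance."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

VIEWING_DISTANCE = 2.5  # m, distance to both the shelf and the projection

# Dimensions reported in the text.
box_w, box_h = 0.22, 0.28      # cereal box, m
shelf_w, shelf_h = 4.0, 1.25   # visible shelf area, m

print(f"box:   {visual_angle_deg(box_w, VIEWING_DISTANCE):.1f} x "
      f"{visual_angle_deg(box_h, VIEWING_DISTANCE):.1f} deg")
print(f"shelf: {visual_angle_deg(shelf_w, VIEWING_DISTANCE):.1f} x "
      f"{visual_angle_deg(shelf_h, VIEWING_DISTANCE):.1f} deg")
# A 22 x 28 cm box at 2.5 m subtends roughly 5.0 x 6.4 degrees;
# the 4.0 x 1.25 m shelf area subtends roughly 77 x 28 degrees,
# identical for the physical shelf and its same-size projection.
```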
Figure 3: Artificial cereal boxes designed and constructed specifically for the experiment.

Figure 4: Physical cereal box held against its counterpart projected in the virtual environment.

Apparatus. Eye movements were captured using Tobii Glasses, a head-mounted eye tracking system resembling a pair of glasses (see Figure 5(a)). The tracker is monocular (right eye only), sampling at 30 Hz with a 56° × 40° recording visual angle. The Tobii Glasses were used in conjunction with two other pieces of hardware: the Recording Assistant and IR markers. The Recording Assistant is a small device (4.7 × 3.1 × 1.1 in) that attaches to the glasses and is used both to calibrate the eye tracker and to store recorded eye movement and video data on a mini-SD card. IR markers (see Figure 5(b)) are used to delineate an Area of Analysis (AOA), a plane determined by the placement of 4 or more IR markers, similar in concept to an Area/Region of Interest (A/ROI) commonly used in eye tracking research to delineate sections of the stimulus within which filtered eye movements, i.e., fixations, are counted. The difference between an AOA and an AOI is that an AOA exists in physical space and is required for data aggregation when the glasses are used. An IR marker serves this function only when attached to an IR marker holder; otherwise, it works in calibration mode and emits a visible (green) light for calibration.

Calibration. Calibration using the Tobii Glasses is somewhat different from traditional calibration procedures employed with table-mounted, fixed, or more commonly known as "remote" eye trackers. To calibrate the glasses, an IR marker is used in calibration mode. The experimenter first asks the participant to stand at a distance of 1 m from a flat, vertical surface (e.g., a wall) and begins the calibration process using the Recording Assistant. The Recording Assistant then displays a 3×3 grid of points to the experimenter, who must position the IR marker at each corresponding point on the wall. During this process, the participant is instructed to hold their head steady and follow the green light emitted by the IR marker with their eyes.

Experimental Design. The experiment consisted of a 2 (environment) × 2 (box type) × 2 (box placement) design. The environment was either the physical or virtual cereal shelf, the box type included two versions of a cereal box (Figure 3), and box placement featured the target box at one of two locations (left vs. right). A center target position was avoided as it is likely to be fixated first [20]. Each participant performed two trials, with environment and box type reversed in the second trial, counterbalancing trial combinations.

Figure 5: (a) Tobii Glasses, Recording Assistant, and (b) IR marker. Courtesy of Tobii Technology.

Participants. The study recruited 42 participants from Packaging Science and Computer Science classes. Ten participants were excluded from analysis due to calibration issues (specifically, we found that calibration points on the left side of the grid were difficult for these participants to fixate, a possible consequence of the monocular nature of the Tobii Glasses). Four additional participants were excluded for incorrectly performing the task on at least one trial—data showed post facto that these participants never fixated the target box, so their data could be considered off-target or erroneous. Analysis therefore considered only successful trials, consisting of data captured from 28 participants (18 male, 14 female). These participants' ages ranged from 20 to 42 (median 22).

Procedure. Before starting the experiment, participants were asked to fill out a basic demographic questionnaire (gender, age, use and type of corrective lenses, etc.). They were then walked to an unmarked, white wall for the calibration process. Participants stood 1 m (39 in) from the wall and underwent the 9-point calibration procedure. Ten participants could not achieve a satisfactory calibration and were thanked for their participation and dismissed.
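The AOA/AOI distinction described above ultimately comes down to hit-testing fixations against a region of the stimulus. The following sketch is a hypothetical illustration, not the Tobii Studio pipeline used in the study: given fixations already mapped into stimulus coordinates, it reports the first fixation landing inside a rectangular target region.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Fixation:
    t: float  # onset time in seconds, relative to stimulus onset
    x: float  # horizontal position in stimulus pixels
    y: float  # vertical position in stimulus pixels

@dataclass
class AOI:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, f: Fixation) -> bool:
        return self.left <= f.x <= self.right and self.top <= f.y <= self.bottom

def first_hit(fixations: Iterable[Fixation], aoi: AOI) -> Optional[Fixation]:
    """Return the earliest fixation falling inside the AOI, or None."""
    for f in sorted(fixations, key=lambda f: f.t):
        if aoi.contains(f):
            return f
    return None

# Example with made-up numbers: a target box occupying one corner of the
# 2560 x 800 stimulus image.
target = AOI(left=2100, top=80, right=2350, bottom=400)
fixations = [Fixation(0.2, 1280, 400), Fixation(0.6, 1900, 300),
             Fixation(1.1, 2200, 250)]
print(first_hit(fixations, target))  # Fixation(t=1.1, x=2200, y=250)
```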
Next, participants were given instructions for their first task. If their task was the physical space task, the researcher showed the participant one of the two target boxes. The participant was told that their task would be to find this box on a physical shelf and verbally announce its price. They were given as much time as desired to examine their target box in as much detail as they wished (no participant spent more than 30 s). The participant was also shown examples of the price tags' appearance. They were then told the location of the physical shelf, and asked to walk directly to a marker on the ground (2.5 m (98 in) from the stimulus) before looking up at the shelf. When ready, they were asked to look straight ahead so the glasses could auto-adjust for recording to begin. Finally, the experimenter walked with the participant to the shelf and recorded eye movements until the participant announced the price of the object. The physical shelving area was concealed from the participant prior to this task, to avoid preview benefit.

For the virtual space task, a similar procedure was followed, with the only difference being that the participant was walked to a marker 2.5 m (98 in) from a projector wall, and the image on the projector was changed from a blank image to the stimulus image when the participant was ready.

After the first task, the participant was given a custom-tailored Witmer-Singer presence questionnaire [19]. The participant was given the option to remove the glasses while they took the questionnaire if they felt uncomfortable wearing them. Those who chose to remove them had to repeat the calibration procedure before the second task; however, only one participant elected to do so. Participants were then given their second task, with the same instructions. After completion of the second task, they were again given the presence questionnaire, but told that it referred only to their experience in the second task (be it physical or virtual). Finally, the participant was given a post-experiment questionnaire to collect subjective information (e.g., comfort) and any comments related to the study.

Search in the environments was counterbalanced such that half the participants searched within the physical environment first and half first searched in the virtual. Position of the target box was also counterbalanced so that one quarter of the trials contained the target at left, another quarter at right, and vice versa (corresponding images of the physical environment were used in the virtual projection).

3.1 Dependent Measures

Eye Tracking Metrics. The primary metric of interest was time to first fixation on the target box. This metric effectively measures time to task completion, or performance of the task. Additionally, we measured the number of fixations prior to the first fixation on target. We considered, but rejected, other eye tracking metrics such as fixation duration. In this type of visual search task, a participant's eye movements typically consist mostly of saccades until the target is found. After the target is found, the number or duration of fixations on it gives us no further information—we were mainly interested in whether the time to location of the target differed between environment types.

Presence Questionnaire. A presence questionnaire, based on Witmer and Singer's version 3.0 and tailored to the present experiment, was used to gauge participants' subjective impressions of both environments, specifically along four subscales: immersion, involvement, sensory fidelity, and interface quality. Four questions were chosen from the immersion and involvement subscales and three from the sensory fidelity and interface quality subscales. All questions were administered along a 7-point Likert scale. Questions relating to non-visual senses were omitted.

4. RESULTS

Eye movement data in the form of numbers of fixations and time to first fixation of the target AOI were exported from Tobii Studio for analysis with R [1].

A repeated-measures three-way ANOVA of time to first fixation revealed significance of the main effect of environment (F(1,27) = 22.77, p < 0.01). No other significant effects (of box type or placement) were detected (see Figures 6(a) and 6(b)).

A repeated-measures three-way ANOVA of the number of fixations prior to the first fixation on the target also revealed significance of the main effect of environment (F(1,27) = 16.56, p < 0.01) but not of box type or placement (see Figures 6(c) and 6(d)).

Results from the modified Witmer-Singer Presence Questionnaire were analyzed following Madathil and Greenstein's analytical approach, by first computing the mean responses to the questions related to each of the four subscales used and then comparing differences between each of these means (of means) via a Welch two-sample t-test between physical and virtual trials [10]. No significant differences were observed between the means of any of the four subscales tested (see Table 1 and Figure 9). A trend toward higher perceived fidelity appears to point toward the physical environment, but, on average, the effect is negligible. Furthermore, modal responses to the subjective post-experiment questionnaire show neutral preferential attitudes to either of the physical or virtual (projector) tasks (see Table 2).
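As a rough illustration of the two analyses named above, the sketch below (with hypothetical file and column names, not the authors' R script) compares per-participant time to first fixation between environments with a paired test as a simplified stand-in for the repeated-measures ANOVA, and compares per-question subscale means between environments with Welch's two-sample t-test.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format exports: one row per trial / per questionnaire item.
trials = pd.read_csv("trials.csv")      # columns: subject, environment, tffix_s
presence = pd.read_csv("presence.csv")  # columns: subject, environment, subscale, question, score

# --- Performance: time to first fixation, physical vs. virtual ---------------
# Average within subject and environment, then run a paired comparison.
# (The study itself used a 2x2x2 repeated-measures ANOVA in R; a paired test
# on the environment factor is only a simplified check of the same main effect.)
per_subject = (trials.groupby(["subject", "environment"])["tffix_s"]
                     .mean().unstack("environment"))
t, p = stats.ttest_rel(per_subject["physical"], per_subject["virtual"])
print(f"time to first fixation: t = {t:.2f}, p = {p:.4f}")

# --- Presence: Welch two-sample t-test on subscale question means ------------
for subscale, grp in presence.groupby("subscale"):
    q_means = grp.groupby(["environment", "question"])["score"].mean()
    phys = q_means.loc["physical"]   # one mean per question
    virt = q_means.loc["virtual"]
    t, p = stats.ttest_ind(phys, virt, equal_var=False)  # Welch's t-test
    print(f"{subscale:>16s}: t = {t:.2f}, p = {p:.3f}")
```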
Table 1: Mean responses to the tailored Witmer-Singer presence questionnaire, marked on a 7-point Likert scale with 1 indicating most negative agreement and 7 indicating most positive agreement to the given question regarding experiences in either virtual or physical environment. Each row lists the physical-environment mean followed by the virtual-environment mean.

Involvement
 1. My interactions with the shelving environment seemed natural.  6.1 / 4.6
 3. The visual aspects of the environment involved me.  5.8 / 5.1
 8. I was able to completely survey or search the environment using vision.  6.4 / 6.2
11. I felt involved in the search task.  6.0 / 5.8
    Group means (means of means)  6.1 / 5.4

Immersion
 2. All my senses were completely engaged.  4.6 / 3.8
 4. I was completely aware of events occurring in the real world around me.  5.9 / 5.1
 6. The information coming from my visual sense felt inconsistent or disconnected.  2.5 / 3.2
12. I was distracted by display devices.  2.9 / 3.2
    Group means (means of means)  4.0 / 3.8

Sensory Fidelity
 7. My experiences with the shelving system seemed consistent with my real-world experience.  5.9 / 4.6
 9. I felt that I was able to examine objects closely.  5.4 / 4.9
10. I felt that I was able to examine objects from multiple viewpoints.  4.5 / 3.9
    Group means (means of means)  5.3 / 4.5

Interface Quality
 5. I was completely aware of any display and control devices.  5.9 / 5.1
13. Visual display quality interfered or distracted me from completing my task.  2.2 / 3.0
14. I was able to concentrate on the search task and not on the devices used to perform the task.  6.1 / 5.2
    Group means (means of means)  4.5 / 4.6

Table 2: Modal responses to subjective post-experiment questions, marked on a 7-point Likert scale with 1 indicating strong disagreement and 7 indicating strong agreement.

1. The eye tracking glasses felt comfortable.  mode: 6
2. The eye tracking glasses distracted me and hindered my ability to perform my tasks.  mode: 1
3. I preferred the projector search task to the physical search task.  mode: 4
4. I understood what was expected of me in each task.  mode: 7
5. I preferred the physical search task to the projector search task.  mode: 4

5. DISCUSSION

Results indicate that the physical environment afforded significantly faster search performance than the virtual projected image. The eye tracking data provides clear evidence of the discrepancy in performance: because the number of fixations generally coincides with the time taken to complete visual search, it is clear that participants took longer in the virtual environment because they had to issue a larger number of fixations. This is visualized in Figure 8 and shows the reason for the difference in time to task completion, which might not have been evident had this been measured with a stopwatch (eye tracking data provides clear evidence of active visual search—participants were not simply daydreaming or staring at a fixed point).

Eye movement data also suggests that individuals may have approached the search task in a fundamentally different way over the projected image. Heatmap visualizations of aggregated scanpaths are shown in Figure 7. Note that the heavily fixated regions in the four corners represent the possible locations of the boxes—the image chosen for the visualization is one of the layouts used in the experiment; it is used in Figure 7 as a representative for visualization of aggregate data from all trials. In the virtual environment, it appears that most viewers may have begun their search near the center, but there is no such obvious trend in the physical environment. What is particularly interesting about this result is that Chandon et al. found that objects located near the center of the "shelf" can be seen more often but not actually considered (for purchase) in corresponding percentages. Their finding did not fit with other data that suggested that attention correlates fairly well with consideration. Since they did not use an actual shelf in their study (only a projected image), they speculated that this occurred because people might tend to orient their attention to the center of an image during a transition, increasing the number of fixations in the area (as is seen in Figures 7(b) and 8(b)). Our findings suggest that this might not occur as consistently in physical environments.

A key reason for the observed difference in visual search performance may be the fidelity of the projected scene. Although we were careful to control for apparent image size, the projected image clearly differs from its physical counterpart. The projectors offer relatively poor brightness and contrast reproduction of the physical scene. The physical scene is much richer in terms of visual elements (color gamut, contrast, and visual depth). The human eye can perceive a very high dynamic range contrast ratio, e.g., 100,000:1, with static perception of about 10,000:1 at any given time. The projectors' lumens rating of 2,500 and contrast ratio of 2,000:1 (http://www.epson.com/brightlink) may have impeded visual search in comparison to what was seen in the physical environment. Projectors are available with greater contrast ratios and spatial resolution (e.g., 12,000:1, 1080p high-definition of the PowerLite home cinema projector), but these projectors are usually "long-throw" projectors and would cause shadow interference problems in the CUshop virtual shopping experience being constructed.

What is curious in our study is the lack of perceived differences in response to the post-task presence and post-experiment preference questionnaires. Figure 9 summarizes the data found in Table 1 and shows that while the physical environment appears to have been rated slightly higher in terms of the presence subscales, the differences, along with modal responses to preference, are negligible. The projected image may have failed to provide either physical realism (in which the image provides the same visual stimulation as the scene) or photo-realism (in which the image produces the same visual response as the scene), but the image may have contained sufficient functional realism (in which the image provides the same visual information) [5] to perform the task, albeit consistently more slowly (note that our data analysis pertains to all successful trials).

Figure 6: Results: performance and process metrics. (a) Time to 1st fixation on target by environment; (b) time to 1st fixation on target by target placement × environment; (c) number of fixations prior to 1st fixation on target by environment; (d) number of fixations prior to 1st fixation on target by target placement × environment. (Times in seconds and fixation counts, with SE.)

Figure 7: Heatmaps (all participants) in either environment: (a) physical environment; (b) virtual environment.

Figure 8: Scanpaths (all participants) in either environment: (a) physical environment; (b) virtual environment.

Figure 9: Results: presence questionnaire (Likert score means of means, with SE, for the involvement, immersion, sensory fidelity, and interface quality subscales in the physical and virtual environments).
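A fixation heatmap such as those summarized in Figure 7 can be produced by splatting a Gaussian at each fixation location and normalizing the result. The sketch below is a generic illustration (not the Tobii Studio rendering used for the figures), assuming fixations have already been mapped into the 2560×800 stimulus image; the kernel width is an arbitrary choice.

```python
import numpy as np

def fixation_heatmap(fixations, width=2560, height=800, sigma=40.0):
    """Accumulate a Gaussian 'blob' per fixation, weighted by duration.

    fixations: iterable of (x, y, duration_s) in stimulus pixel coordinates.
    Returns a (height, width) float array normalized to [0, 1].
    """
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()
    return heat

# Example with made-up fixations (x, y, duration in seconds):
demo = [(300, 200, 0.25), (2200, 250, 0.40), (1280, 400, 0.15)]
hm = fixation_heatmap(demo)
# The array can then be alpha-blended over the stimulus photograph,
# e.g. with matplotlib's imshow(hm, cmap="jet", alpha=0.6).
```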
6. CONCLUSION

Results were presented from a study comparing consumers' visual behavior when searching for an item located on a virtual or physical shelf. These indicate that the physical environment afforded significantly faster search performance than the virtual projected image. Eye tracking data corroborates this finding by indicating a significantly larger number of fixations made over the virtual shelf.

One reason for the observed difference in visual search performance may be the poor fidelity of the projected scene in comparison to the physical shelf. It is possible that the projectors' relatively low contrast ratio impeded visual search. Better projectors and more photo-realistic simulations may improve congruence of eye movement metrics, but one must also consider the overall environment in which the participant is immersed. Advancements in other forms of simulation (automotive and flight, for instance) have come not from improvements in visual quality (e.g., resolution, contrast), but from an expanded field of view, realistic motion, and sound. Although visual fidelity will continue to play a significant role in the shopping simulation, the remaining senses must also be addressed. We believe construction of a physical space filled with tactile objects, rich visual elements, and sounds, through which participants navigate, will go a long way toward mitigating the sense of standing in front of a projection screen.

Physical shelves offer a step closer towards physical realism, but they are costly to set up and to stock. If there are sufficient resources, such shelves offer better ecological validity. However, the lack of a perceived difference between the environments suggests that projected replicas may be sufficient for consumer testing (e.g., visual search) since they provide as much visual information. Our findings suggest that virtual presentation of the stimulus offers a viable alternative to a physical mock-up so long as one maintains awareness of the potential effect on performance in relation to performance in the field. If the effect is consistent, however, then relative measurements of performance within the virtual environments are still likely to be valid.

Acknowledgments

We would like to thank Andrew Hurley and his Spring 2011 creative inquiry team for their help in designing and branding the CUshop consumer experience lab. We are also grateful to Harris A. Smith for his generous support of the lab.

7. REFERENCES

[1] J. Baron and Y. Li. Notes on the use of R for psychology experiments and questionnaires. Online notes, 09 November 2007. URL: http://www.psych.upenn.edu/~baron/rpsych/rpsych.html.
[2] P. Chandon, J. W. Hutchinson, E. T. Bradlow, and S. Young. Visual Marketing: From Attention to Action. Lawrence Erlbaum Assoc., Mahwah, NJ, 2007.
[3] P. Chandon, J. W. Hutchinson, E. T. Bradlow, and S. Young. Does in-store marketing work? Effects of the number and position of shelf facings on brand attention and evaluation at the point of purchase. Journal of Marketing, 73:1–17, 2009.
[4] J. Clement. Visual influence on in-store buying decisions: an eye-track experiment on the visual influence of packaging design. Journal of Marketing Management, 23(9):917–928, 2007.
[5] J. A. Ferwerda. Three Varieties of Realism in Computer Graphics. In Human Vision and Electronic Imaging, pages 290–297, Bellingham, WA, 2003. SPIE.
[6] S. A. Johansen and J. P. Hansen. Do we need eye trackers to tell where people look? In CHI '06 Extended Abstracts on Human Factors in Computing Systems, pages 923–928, New York, NY, 2006. ACM.
[7] A. F. Kramer and J. S. McCarley. Oculomotor Behaviour as a Reflection of Attention and Memory Processes: Neural Mechanisms and Applications to Human Factors. Theoretical Issues in Ergonomics Science, 4(1–2):21–55, 2003.
[8] M. F. Land and B. W. Tatler. Looking and Acting: Vision and Eye Movements in Natural Behavior. Oxford University Press, New York, NY, 2009.
[9] E. Lundberg. Packaging Media Lab: A proposal to a Packaging Evaluation Environment for Conducting Consumer Studies. Master's thesis, Uppsala University, Uppsala, Sweden, Sep. 2004.
[10] K. C. Madathil and J. S. Greenstein. Synchronous Remote Usability Testing – A New Approach Facilitated by Virtual Worlds. In CHI '11: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, 2011. ACM Press.
[11] M. A. McDowell, C. D. Fryar, C. L. Ogden, and K. M. Flegal. Anthropometric Reference Data for Children and Adults: United States, 2003–2006. Technical report, National Health Statistics, 2008.
[12] U. Orth and K. Malkewitz. Holistic package design and consumer brand impressions. Journal of Marketing, 72:64–81, 2008.
[13] E. J. Russo and F. LeClerc. An eye-fixation analysis of choice processes for consumer nondurables. Journal of Consumer Research, 21(2):274, 1994.
[14] D. S. Tan, D. Gergle, P. G. Scupelli, and R. Pausch. Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, 13:71–99, 2006.
[15] J. Tornros. Driving behavior in a real and simulated road tunnel – A validation study. Accident Analysis and Prevention, 30(4):497–503, 1998.
[16] R. Underwood, N. Klein, and R. Burke. Packaging communication: Attentional effects of product imagery. Journal of Product & Brand Management, 10(7):403–422, 2001.
[17] M. Wedel and R. Pieters. A review of eye-tracking research in marketing. In Review of Marketing Research. Emerald Group, Bingley, UK, 2008.
[18] S. L. Whitney, P. J. Sparto, L. F. Hodges, S. V. Babu, J. M. Furman, and M. S. Redfern. Responses to a Virtual Reality Grocery Store in Persons with and without Vestibular Dysfunction. CyberPsychology & Behavior, 9(2), 2006.
[19] B. G. Witmer and M. J. Singer. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence, 7(3):225–240, June 1998.
[20] D. S. Wooding. Fixation Maps: Quantifying Eye-Movement Traces. In ETRA '02: Proceedings of the 2002 Symposium on Eye Tracking Research & Applications, pages 31–36, New York, NY, 2002. ACM.
[21] S. Young. Packaging design, consumer research, and business strategy: The march toward accountability. Design Management Journal, 10(3):10–14, 2002.
[22] S. Young. Five principles for effective packaging research. Brand Packaging, 18(1):24–26, 2005.