4. Related work
▪ Online Travel Agencies reshaped the ecosystem [1]
▪ …and eWOM strongly influences/biases online decision making [2,4,5]
▪ …and can be used to predict business performance [3]
▪ From a CS perspective, eWOM is a main ingredient for algorithmic decision support mechanisms like recommender systems (RS)
▪ Experiments in the psychology literature [6] revealed that users differ in their decision-making styles
[1] Xiang, Z. et al. Information technology and consumer behavior in travel and tourism: Insights from travel planning using the internet. JRCS 2015
[2] Ulrike Gretzel and Kyung Hyan Yoo. Use and impact of online travel reviews. Information and Communication Technologies in Tourism 2008
[3] Xie, K. et al. The business value of online consumer reviews and management response to hotel performance. IJHM 2014
[4] Xiang, Z. et al. A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. TM 2017
[5] Xie, H. et al. Consumers' responses to ambivalent online hotel reviews: The role of perceived source credibility and pre-decisional disposition. IJHM 2011
[6] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. JPSP 2002
5. Research Goals
The goal of this research is to determine how rating summary statistics guide users' choices in the online scenario, in order to develop more efficient algorithms.
6. Decomposing rating summaries
We consider them to be multi-attribute objects:
▪ Number of ratings
▪ Mean of the ratings
▪ Bimodality
▪ Variance
▪ Skewness
▪ Origin of ratings
7. Decision Making on Multi-attribute Items
▪ Non-Compensatory Strategies [1]:
▫ Compare items based on one attribute
▫ Perform intra-dimensional comparisons
▫ Perform fewer comparisons
▪ Compensatory Strategies [1]:
▫ All attributes must meet a minimum requirement
▫ Multiple inter-dimensional comparisons
▫ Spend more time on items
Eye movements are an indicator of how the choices are screened [2].
[1] John W. Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
[2] Jacob L. Orquin and Simone Mueller Loose. Attention and choice: A review on eye movements in decision making. Acta Psychologica 2013
8. Decision making strategies
▪ Interpersonal differences
▪ Satisficers / Maximizers [1]
▪ Three sub-dimensions [2]:
▫ Decision Difficulty
▫ Alternative Search
▫ High Standards
[1] Herbert A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 1955
[2] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. JPSP 2002
9. Earlier Work
▪ We ran a set of 3 experiments to understand trade-off mechanisms between decision strategies
▪ Decomposing rating summaries: different types of explanations, Number of ratings, Mean of the ratings, Variance, Skewness
▪ Respondents:
▫ Relied heavily on the mean rating
▫ Showed a non-linear influence of the overall Number of Ratings
▫ Left variance and skewness largely unnoticed
▫ Maximizers vs. Satisficers displayed different preferences
[1] Coba L., Zanker M., Rook L., Symeonidis P.: Exploring Users' Perception of Rating Summary Statistics. UMAP '18
[2] Coba L., Zanker M., Rook L., Symeonidis P.: Exploring Users' Perception of Collaborative Explanation Styles. CBI '18
[3] Coba L., Zanker M., Rook L., Symeonidis P.: Decision Making Strategies Differ in the Presence of Collaborative Explanations: Two Conjoint Studies. IUI '19
11. Conjoint experiment to quantify users' preferences
Ranking-based Conjoint Methodology:
▪ Used in product design/development
▪ Items can be seen as a bundle of attributes
▪ Goal: identify the utility contribution of each attribute of the rating summary statistics separately
12. Data
▪ Data-driven levels [1]
▪ J-shaped distributions [2]
▪ Bimodality coefficient [3]
[1] Markus Zanker and Martin Schoberegger. An empirical study on the persuasiveness of fact-based explanations for recommender systems. RecSys 2014
[2] Hu N., Zhang J., Pavlou P.A. Overcoming the J-shaped distribution of product reviews. Commun. ACM 2009
[3] Pfister R., Schwarz K.A., Janczyk M., Dale R., Freeman J.B. Good things peak in pairs: a note on the bimodality coefficient. Front. Psychol. 2013
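The formula on the slide did not survive extraction; for reference, the standard definition of the bimodality coefficient as given by Pfister et al. [3] (reconstructed here from the cited source, not from the slide) is:

```latex
BC = \frac{m_3^2 + 1}{m_4 + 3 \cdot \frac{(n-1)^2}{(n-2)(n-3)}}
```

where $m_3$ is the sample skewness, $m_4$ the sample excess kurtosis, and $n$ the sample size; values above the uniform-distribution benchmark of $5/9 \approx 0.555$ are taken to indicate bimodality.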
13. Design
▪ Full-factorial design with:
▫ 2 levels of the Number of ratings
▫ 3 levels of Mean
▫ 3 levels of Bimodality
▪ 3 screens with 6 items to rank
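As a sanity check on the numbers above (2 × 3 × 3 = 18 profiles, shown on 3 screens of 6), the design can be enumerated as follows; the concrete attribute levels are illustrative assumptions, not the values used in the study:

```python
# Enumerate the 2 x 3 x 3 full-factorial design from the slide.
# Attribute levels below are illustrative assumptions, not the
# values used in the actual study.
from itertools import product

number_of_ratings = [25, 500]               # 2 levels (assumed)
mean_rating       = [3.0, 4.0, 4.5]         # 3 levels (assumed)
bimodality        = ["low", "mid", "high"]  # 3 levels (assumed)

# One profile per combination of levels: 2 * 3 * 3 = 18 items.
profiles = list(product(number_of_ratings, mean_rating, bimodality))

# Presented on 3 ranking screens with 6 items each.
screens = [profiles[i:i + 6] for i in range(0, len(profiles), 6)]
```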
14. Additive utility model
Different attributes contribute independently to the overall utility. The perceived utility of an item/profile i is determined as:
u_i = x_i β + ε
where x_i is the vector characterizing profile i, β is the vector of (unknown) preferences for each attribute level, and ε is the residual error. Respondents are assumed to select the alternative with, in their eyes, maximal utility u.
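A minimal sketch of how the part-worth vector β of the additive model can be estimated, assuming dummy-coded profiles and directly observed utilities (both synthetic here; ranking-based conjoint analysis uses more elaborate estimation than plain least squares):

```python
# Least-squares estimation of beta in the additive model u = x_i . beta + eps.
# X, beta_true and u are synthetic illustrations, not study data.
import numpy as np

rng = np.random.default_rng(0)

# Dummy-coded design matrix: one row per profile, one column per attribute level.
X = rng.integers(0, 2, size=(18, 4)).astype(float)
beta_true = np.array([2.0, -1.0, 0.5, 0.0])  # hypothetical part-worths
u = X @ beta_true                            # noiseless perceived utilities

# OLS estimate of the (unknown) preference vector beta.
beta_hat, *_ = np.linalg.lstsq(X, u, rcond=None)
```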
15. Eye-tracking Metrics
▪ Area of Interest (AOI) [1]
▪ Fixation times
▫ Geometric mean [2]
▪ Revisits [3]
[1] Kenneth Holmqvist, Marcus Nyström, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van De Weijer. Eye tracking: A comprehensive guide to methods and measures. Oxford University Press, 2011
[2] Jeff Sauro and James R. Lewis. Average task times in usability tests. CHI '10
[3] John W. Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
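The geometric mean is recommended for task and fixation times because such durations are right-skewed, so averaging on the log scale is more robust [2]. A small sketch (the fixation durations are illustrative, not study data):

```python
# Geometric mean of fixation durations: exp of the mean of the logs.
# Sample durations below are illustrative, not study data.
import math

def geometric_mean(times):
    """Geometric mean of a sequence of positive durations."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

fixation_ms = [120, 240, 480, 960]           # hypothetical fixation durations
g = geometric_mean(fixation_ms)              # ~339.4 ms, pulled less by the
arith = sum(fixation_ms) / len(fixation_ms)  # long tail than the mean of 450 ms
```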
19. Non-compensatory strategy
▪ Compare items based on one attribute
▪ Perform intra-dimensional comparisons
▪ Perform fewer comparisons
20. Compensatory strategy
▪ All attributes must meet a minimum requirement
▪ Multiple inter-dimensional comparisons
▪ Spend more time on items
21. Max vs. Sat: Time spent on items
Geometric mean of the time spent on an item (95% confidence level), median split on the decision-difficulty sub-scale.
22. Max vs. Sat: Revisits
Mean number of revisits per item (95% confidence level), median split on the decision-difficulty sub-scale.
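The median split used in the plots can be sketched as follows; which side is labelled "maximizers" (here: above-median decision difficulty, in line with Schwartz et al.) and the scores themselves are illustrative assumptions, not study data:

```python
# Median split on the decision-difficulty sub-scale.
# Participant IDs and scores are illustrative assumptions.
import statistics

difficulty_scores = {"p1": 2.1, "p2": 3.4, "p3": 4.0, "p4": 2.8, "p5": 3.9}

med = statistics.median(difficulty_scores.values())
# Above-median participants are treated as maximizers, the rest as satisficers.
maximizers  = {p for p, s in difficulty_scores.items() if s > med}
satisficers = {p for p, s in difficulty_scores.items() if s <= med}
```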
25. Conclusions
▪ Maximizers and satisficers exhibit different decision-making behavior
▫ Choice is dominated by the mean and number of ratings
▫ Bimodality showed no significant influence
▫ Compensatory vs. non-compensatory strategies
▪ Rating summaries influence/bias users' choices
▫ This is not considered when interpreting implicit user feedback
▪ Our results indicate that more aspects need to be considered to optimize recommendations based on explainability/persuasiveness