2. Human-Computer Interaction group
recommender systems – visualization – intelligent user interfaces
Research topics: Learning analytics – Media consumption – Research Information Systems – Wellness & health
Augment – prof. Katrien Verbert
ARIA – prof. Adalberto Simeone
Computer Graphics – prof. Phil Dutré
Language Intelligence & Information Retrieval – prof. Sien Moens
3. Augment/HCI team
Robin De Croon
Postdoc researcher
Katrien Verbert
Associate Professor
Francisco Gutiérrez
PhD researcher
Tom Broos
PhD researcher
Martijn Millecamp
PhD researcher
Sven Charleer
Postdoc researcher
Nyi Nyi Htun
Postdoc researcher
Houda Lamqaddam
PhD researcher
Yucheng Jin
PhD researcher
Oscar Alvarado
PhD researcher
Diego Rojo García
PhD researcher
http://augment.cs.kuleuven.be/
5. Interactive recommender systems
Core objectives:
• Explaining recommendations to increase user trust and acceptance
• Enabling users to interact with the recommendation process
8. Interactive recommender systems
¤ Transparency: explaining the rationale of recommendations
¤ User control: closing the gap between browse and search
¤ Diversity – novelty
¤ Cold start
¤ Context-aware interfaces
He, C., Parra, D. and Verbert, K., 2016. Interactive recommender systems: A survey
of the state of the art and future research challenges and opportunities. Expert
Systems with Applications, 56, pp.9-27.
9. Flexible interaction with RecSys
Research visit
¤ Host: Carnegie Mellon University & University of Pittsburgh
¤ Collaboration: John Stamper, Peter Brusilovsky, Denis Parra
¤ Period: April 2012 – June 2012
Second post-doctoral fellowship (FWO)
¤ Host university: KU Leuven, Belgium
¤ Supervisor: Erik Duval
¤ Period: Oct 2012 – Sept 2015
10. Overview research topics
2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018
Learning Analytics - Media Consumption – Research Information Systems - Healthcare
13. Contributions
¤ new approach to support exploration, transparency and controllability
¤ recommender systems are shown as agents, in parallel to real users and tags
¤ users can interrelate entities to find items
¤ evaluation study that assesses effectiveness and probability of item selection
Verbert, K., Parra, D., Brusilovsky, P., & Duval, E. (2013). Visualizing recommendations
to support exploration, transparency and controllability. In Proceedings of the IUI
2013 international conference on Intelligent user interfaces (pp. 351-362). ACM.
15. Results of studies 1 & 2
¤ Effectiveness = # bookmarked items / # explorations
¤ Effectiveness increases with intersections of more entities
¤ Effectiveness wasn’t affected in the field study (study 2)
¤ … but exploration distribution was affected
[Chart: average effectiveness by total number of explorations]
Verbert, K., Parra, D., & Brusilovsky, P. (2016). Agents vs. users: Visual recommendation of research
talks with multiple dimensions of relevance. ACM Transactions on Interactive Intelligent Systems
(TIIS), 6(2), 11.
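The effectiveness metric used in these studies is a simple ratio; a minimal sketch (a hypothetical helper for illustration, not the authors' code):

```python
# Hypothetical sketch of the effectiveness metric from the slide:
# effectiveness = # bookmarked items / # explorations.
def effectiveness(bookmarked: int, explorations: int) -> float:
    """Ratio of bookmarked items to explorations; 0.0 when nothing was explored."""
    if explorations == 0:
        return 0.0
    return bookmarked / explorations

# Example: 3 bookmarks resulting from 12 explorations
print(effectiveness(3, 12))  # 0.25
```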
17. Three user studies
¤ Study 1:
¤ Within-subjects study with 20 users
¤ Baseline: exploration of recommendations in CN3
¤ Second condition: exploration of recommendations in IEx
¤ Data from two conferences: EC-TEL 2014 and EC-TEL 2015
¤ Study 2:
¤ Field study at the Digital Humanities conference
¤ 1000+ participants, less technically oriented
¤ Study 3:
¤ Field study at IUI conference
¤ Smaller scale, technical audience
19. Subjective feedback
Questionnaire results with statistical significance. Differences between
the aspects “Fun” and “Choice satisfaction” were not significant after
the Bonferroni-Holm correction.
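The Bonferroni-Holm correction mentioned above is a step-down procedure over sorted p-values; a minimal self-contained sketch (an illustrative implementation, not the analysis code used in the study):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    m = len(p_values)
    # Step-down: visit p-values in ascending order.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Threshold grows per step: alpha/m, alpha/(m-1), ..., alpha/1
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject

# Four tests: only the two smallest p-values survive the correction here.
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```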
20. Study 2: Digital Humanities
¤ 39 users, less technically oriented
¤ Mean age: 38 years; SD: 10; female: 11
¤ Data from the DH conference: 1000+ participants
25. Study 1 vs Study 2 vs Study 3
¤ Overall, combinations of users and agents (“augmented agents”) were used in all three studies
¤ Precision scores were significantly higher for augmented agents in study 1 and study 3
¤ Participants of study 2 (Digital Humanities):
¤ more interested in the content perspective
¤ rated several dimensions lower (use intention, fun, information sufficiency, control)
Cardoso, B., Sedrakyan, G., Gutiérrez, F., Parra, D., Brusilovsky, P., & Verbert, K. (2018). IntersectionExplorer, a
multi-perspective approach for exploring recommendations. International Journal of Human-Computer Studies.
26. Overview research topics
2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018
Learning analytics - Media Consumption – Research Information Systems - Healthcare
28. Personal characteristics
Need for cognition
• Measurement of the tendency for an individual to engage in, and enjoy, effortful cognitive activities
• Measured by the test of Cacioppo et al. [1984]
Visualisation literacy
• Measurement of the ability to interpret and make meaning from information presented in the form of images and graphs
• Measured by the test of Boy et al. [2014]
Locus of control (LOC)
• Measurement of the extent to which people believe they have power over events in their lives
• Measured by the test of Rotter [1966]
Visual working memory
• Measurement of the ability to recall visual patterns [Tintarev and Masthoff, 2016]
• Measured by the Corsi block-tapping test
Musical experience
• Measurement of the ability to engage with music in a flexible, effective and nuanced way [Müllensiefen et al., 2014]
• Measured using the Goldsmiths Musical Sophistication Index (Gold-MSI)
Tech savviness
• Measured by confidence in trying out new technology
29. User study
¤ Within-subjects design: 105 participants recruited with Amazon Mechanical Turk
¤ Baseline version (without explanations) compared with explanation interface
¤ Pre-study questionnaire for all personal characteristics
¤ Task: based on a chosen scenario for creating a playlist, explore songs and rate all songs in the final playlist
¤ Post-study questionnaire:
¤ Recommender effectiveness
¤ Trust
¤ Good understanding
¤ Use intentions
¤ Novelty
¤ Satisfaction
¤ Confidence
31. Design implications
¤ Explanations should be personalised for different groups of
end-users.
¤ Users should be able to choose whether or not they want to
see explanations.
¤ Explanation components should be flexible enough to present
varying levels of details depending on a user’s preference.
32. User control
Users tend to be more satisfied when they have control over
how recommender systems produce suggestions for them
(Konstan and Riedl, 2012)
Control recommendations: Douban FM
Control user profile: Spotify
Control algorithm parameters: TasteWeights
34. Different levels of user control
Level | Recommender components | Controls
low | Recommendations (REC) | Rating, removing, and sorting
medium | User profile (PRO) | Select which user profile data will be considered by the recommender
high | Algorithm parameters (PAR) | Modify the weight of different parameters
Jin, Y., Tintarev, N., & Verbert, K. (2018, September). Effects of personal characteristics on music
recommender systems with different levels of controllability. In Proceedings of the 12th ACM Conference
on Recommender Systems (pp. 13-21). ACM.
35. User profile (PRO) Algorithm parameters (PAR) Recommendations (REC)
8 control settings
No control
REC
PAR
PRO
REC*PRO
REC*PAR
PRO*PAR
REC*PRO*PAR
36. Evaluation method
¤ Between-subjects – 240 participants recruited with AMT
¤ Independent variable: settings of user control
¤ 2x2x2 factorial design
¤ Dependent variables:
¤ Acceptance (ratings)
¤ Cognitive load (NASA-TLX), Musical Sophistication, Visual Memory
¤ Evaluation framework: Knijnenburg et al. [2012]
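The 2x2x2 factorial design enumerates every on/off combination of the three control components; a minimal sketch of that enumeration (illustrative only; the labels follow the REC/PRO/PAR naming from the slides):

```python
from itertools import product

# The three control components; each is either absent (0) or present (1)
# in a condition, giving 2 x 2 x 2 = 8 control settings.
components = ("REC", "PRO", "PAR")
conditions = [
    "*".join(c for c, flag in zip(components, flags) if flag) or "No control"
    for flags in product((0, 1), repeat=3)
]

print(len(conditions))   # 8
print(conditions[0])     # No control
print(conditions[-1])    # REC*PRO*PAR
```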
39. Results
¤ Main effects: moving from REC to PRO to PAR → higher cognitive load
¤ Two-way interactions: do not necessarily result in higher cognitive load. Adding an additional control component to PAR increases acceptance. PRO*PAR has less cognitive load than PRO and PAR
¤ High Musical Sophistication leads to higher quality, and thereby results in higher acceptance
40. Simple vs more advanced
Millecamp, M., Htun, N. N., Jin, Y., & Verbert, K. (2018, July). Controlling Spotify
recommendations: effects of personal characteristics on music recommender user
interfaces. In Proceedings of the 26th Conference on User Modeling, Adaptation and
Personalization (pp. 101-109). ACM.
49. Augmented reality
Gutiérrez, F., Htun, N. N., Charleer, S., De Croon, R., & Verbert, K. (2019). Designing
Augmented Reality Applications for Personal Health Decision-Making. In Proceedings
of HICSS-52. (to appear)
50. Tangible Algorithms
¤ Study with Netflix users
¤ Semiotic inspection
¤ Design workshop
¤ Interviews
¤ Abstract representations
¤ Archetype
representations
Alvarado, O., Geerts, D., & Verbert, K. Towards Tangible Algorithms: Exploring
Algorithmic Experience with Users’ Profiling Representations. To be submitted to DIS
2019.
56. References
¤ Boy, J., Rensink, R. A., Bertini, E., & Fekete, J. D. (2014). A principled way of assessing visualization
literacy. IEEE transactions on visualization and computer graphics, 20(12), 1963-1972.
¤ Cacioppo, J.T., Petty, R.E. and Feng Kao, C., 1984. The efficient assessment of need for cognition.
Journal of personality assessment, 48(3), pp.306-307.
¤ Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user
experience of recommender systems. User Modeling and User-Adapted Interaction, 22(4-5), 441-504.
¤ Konstan, J.A. and Riedl, J., 2012. Recommender systems: from algorithms to user experience. User
modeling and user-adapted interaction, 22(1-2), pp.101-123.
¤ Müllensiefen, D., Gingras, B., Musil, J., & Stewart, L. (2014). The musicality of non-musicians: an index
for assessing musical sophistication in the general population. PloS one, 9(2), e89642.
¤ Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement.
Psychological monographs: General and applied, 80(1), 1.
¤ Tintarev, N., & Masthoff, J. (2016). Effects of Individual Differences in Working Memory on Plan
Presentational Choices. Frontiers in psychology, 7, 1793.