The mobile applications industry is experiencing unprecedented growth, and developers working in this context face fierce competition in acquiring and retaining users. They have to quickly implement new features and fix bugs, or risk losing their users to the competition. To achieve this goal they must closely monitor and analyze the user feedback they receive in the form of reviews. However, successful apps can receive up to several thousand reviews per day, and manually analyzing each of them is a time consuming task. To help developers deal with the large amount of available data, we manually analyzed the text of 1566 user reviews and defined a high and low level taxonomy containing mobile specific categories (e.g. performance, resources, battery, memory) that are highly relevant for developers during the planning of maintenance and evolution activities. We then built the User Request Referencer (URR) prototype, using Machine Learning and Information Retrieval techniques, to automatically classify reviews according to our taxonomy and to recommend, for a particular review, the source code files that need to be modified to handle the issue it describes. We evaluated our approach through an empirical study involving the reviews and code of 39 mobile applications. Our results show that URR achieves high precision and recall in organising reviews according to the defined taxonomy.
Analyzing Reviews and Code of Mobile Apps for Better Release Planning
Slide 1
Adelina Ciurumelea, Andreas Schaufenbühl,
Sebastiano Panichella, Harald C. Gall
software evolution & architecture lab
Slide 4
The number of reviews is large compared
to the available development resources.
Slide 5
Importance of reviews
• reviews contain valuable feedback directly from the users
• users often report bugs, describe their user experience, and request features
• the review content influences the number of downloads
Slide 7
BUG / FEATURE REQUEST / OTHER
“Release planning of mobile apps based on user reviews”
L. Villarroel, G. Bavota, B. Russo, R. Oliveto, and M. Di Penta
Slide 8
BUG / FEATURE REQUEST
• the developer has to manually analyse the unstructured groups of reviews, understand what they talk about, and extract actionable change tasks
• what does a particular cluster talk about? Does it discuss the UI, the performance of the app, etc.?
Slide 9
What are the mobile specific topics
users talk about in their reviews?
Slide 12
Hmmm...
Mm No…
This is IT
Nope Nopity nope
Sucks Way to many errors
0 stars Garbage.
problem bro
Garbage Bla bla bla
• not all reviews are useful
• some are even offensive
Slide 13
“Pretty close to perfect, this app is way better than any comic book reader I've ever used. It's small, it operates fast, and the interface is incredibly clean and simple.”
• others can provide valuable information for the developer
Slide 14
“Pretty close to perfect, this app is way better than any comic book reader I've ever used. It's small, it operates fast, and the interface is incredibly clean and simple.”
Resources
Usage
Slide 15
“For info (in case dev not already aware!), there is a graphical glitch when scrolling output in marshmallow on a nexus 5.”
Compatibility
Usage
Complaint
Slide 16
Building the taxonomy
Content analysis in 2 passes:
• start with an empty list of categories
• analyse each review and add a new category (including definition and keywords) if necessary
• label the review with all the matching categories
• second pass: revisit the list of reviews and label them with the appropriate categories
Slide 17
High Level Taxonomy

Category       Description
Compatibility  mentions the OS, mobile device or a specific hardware component.
Usage          talks about the UI or the usability of the app.
Resources      mentions the app’s influence on the battery and memory usage or the performance of the app/phone.
Pricing        statements mentioning the license model or the price of the app.
Protection     statements referring to security or privacy issues.
Complaint      the user reports or complains about an issue with the app.
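To illustrate how category definitions with associated keywords can drive a first labeling pass, here is a minimal keyword matcher. The keyword lists below are illustrative guesses, not the ones used in the study.

```python
# Naive first-pass labeling from category keywords.
# NOTE: these keyword lists are made up for illustration only.
TAXONOMY = {
    "Compatibility": ["android", "device", "nexus", "update", "version"],
    "Usage": ["interface", "usability", "simple", "menu"],
    "Resources": ["battery", "memory", "slow", "performance"],
    "Pricing": ["price", "free", "paid", "license", "expensive"],
    "Protection": ["security", "privacy", "permissions"],
    "Complaint": ["crash", "bug", "broken", "error", "glitch"],
}

def label_review(text: str) -> list[str]:
    """Return every category whose keywords appear in the review text."""
    lowered = text.lower()
    return [cat for cat, kws in TAXONOMY.items() if any(k in lowered for k in kws)]

print(label_review("graphical glitch when scrolling on a nexus 5"))
# → ['Compatibility', 'Complaint']
```

A real classifier (as described later in the deck) learns these associations from labeled data instead of relying on hand-picked keyword lists.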
Slide 19
“Liked it and worked very well in lollipop, but not MM The plugins don't refresh, manual navigation to next image doesn't work. Some plugins give error. Altogether seems broken after MM update on Note 4.”
Compatibility
Slide 20
“Liked it and worked very well in lollipop, but not MM The plugins don't refresh, manual navigation to next image doesn't work. Some plugins give error. Altogether seems broken after MM update on Note 4.”
Compatibility
Device
Android Version
Slide 25
Training
• feature extraction: TF-IDF scores and 2- and 3-gram counts
• one-vs-all strategy: separate classifier for each high and low level category (18 in total)
• used the Gradient Boosted Trees model
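The training setup above can be sketched in a few lines with scikit-learn. This is not the authors' implementation: the library choice, toy reviews, and hyperparameters are assumptions; it only shows the shape of the approach (TF-IDF plus 2-/3-gram count features, one binary gradient-boosted classifier per category).

```python
# Sketch of one-vs-all review classification with TF-IDF + n-gram features.
# Library choice, sample data, and parameters are illustrative assumptions.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import MultiLabelBinarizer

reviews = [
    "app drains battery fast on my phone",      # Resources
    "crashes after the marshmallow update",     # Compatibility, Complaint
    "the interface is clean and simple",        # Usage
    "too expensive for what it offers",         # Pricing
]
labels = [{"Resources"}, {"Compatibility", "Complaint"}, {"Usage"}, {"Pricing"}]

# One binary target column per category -> one-vs-all classification.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

features = FeatureUnion([
    ("tfidf", TfidfVectorizer()),                     # TF-IDF scores
    ("ngrams", CountVectorizer(ngram_range=(2, 3))),  # 2- and 3-gram counts
])
clf = Pipeline([
    ("features", features),
    ("ovr", OneVsRestClassifier(GradientBoostingClassifier(n_estimators=50))),
])
clf.fit(reviews, y)

pred = clf.predict(["battery usage is way too high"])
print(mlb.inverse_transform(pred))
```

With a real dataset there would be 18 binary classifiers (one per high and low level category), each trained and evaluated independently.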
Slide 27
Example
RQ2: Does our approach correctly recommend the software artifacts that need to be modified in order to handle user requests and complaints?
• 752 user reviews from our dataset belong to AcDisplay
• analyse Compatibility and Complaint reviews (61 reviews)
• Complaint and Android Version (22 reviews)
Slide 28
Example
“Good but has some issues with Marshmallow I used this on my old phone and if was flawless and I loved it. I noticed that sometimes when I had AcDisplay activated I would not be able to use the fingerprint sensor even after I unlocked AcDisplay and had to enter a password. This is very frustrating so I cannot use AcDisplay.”
“Love the design I love the app. It’s super sleek and nice. But ever since my phone updated to marshmallow it’s stopped working. Hope it comes back soon.”
“On Marshmallow, the screen is buggy and sometimes shows the notification shade.”
Slide 29
Source Code Localisation
• can we link reviews to the related source code?
• IR methods based on the VSM (hard task: the vocabulary used by reviews and source code is different)
• use additional Android project specific information (e.g. UI functionality is implemented in Activity classes)
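A minimal Vector Space Model linking step can be sketched as follows: represent each source file and the review as TF-IDF vectors and rank files by cosine similarity. The file names and contents below are made-up placeholders, and this ignores the Android-specific heuristics the slide mentions.

```python
# VSM sketch: rank source files by cosine similarity to a review.
# File names/contents are invented placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

files = {
    "NotificationActivity.java": "activity notification screen lock display",
    "BatteryMonitor.java": "battery power drain monitor usage",
    "SensorService.java": "fingerprint sensor service unlock",
}
review = "cannot use the fingerprint sensor after unlocking the screen"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(files.values())  # one row per source file
review_vec = vec.transform([review])            # same vocabulary as the files

scores = cosine_similarity(review_vec, doc_matrix)[0]
ranking = sorted(zip(files, scores), key=lambda p: -p[1])
for name, score in ranking:
    print(f"{score:.2f}  {name}")
```

Note how "unlocking" in the review fails to match "unlock" in the code without stemming: this vocabulary mismatch between reviews and identifiers is exactly why the slide calls the task hard and why project-specific information helps.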
Slide 31
Evaluation
RQ1: To what extent does our approach organise reviews according to meaningful maintenance and evolution tasks for developers?
RQ2: Does our approach correctly recommend the software artifacts that need to be modified in order to handle user requests and complaints?
Slide 37
Results RQ1
Our approach is able to classify reviews with high precision and recall according to the mobile specific topics we derived. The most important categories are Usage, Resources and Compatibility.
Slide 38
Study RQ2
• 1 external evaluator
• 91 user reviews from 2 apps
Slide 39
Results RQ2
Quality of Reviews   Precision   Recall   F1 Score
Difficult to Link    41%         83%      55%
Easier to Link       52%         79%      63%
All                  51%         79%      62%
Slide 40
Results RQ2
Our approach achieves promising results in recommending related software artifacts for specific user reviews; furthermore, better quality reviews are easier to link than lower quality ones.
Slide 41
Conclusion & Future Work
• reviews can be classified with high precision and recall using machine learning according to mobile specific topics
• linking reviews to source code using textual similarity based methods is difficult
• future work: summarise reviews, improve localisation (static analysis)