Human-Computer Interaction: final report
Group 1: AnarCHI
http://anarchi11.blogspot.com/
http://apps.facebook.com/datamineralpha/



Sam Agten, 1st master computer sciences
Bart Gerard, 1st master computer sciences
Tanguy Monheim, 1st master computer sciences
Steven Vermeeren, 1st master computer sciences

Abstract

This is the final report of group 1 for the course Human-Computer Interaction. We have
developed a social news application called “Dataminer”. The most important challenges
were finding a decent concept, attracting enough users and creating a GUI that is easy to use.
From this project we learned the importance of using an iterative process while developing
a user interface, and of listening to user feedback rather than just pursuing our own ideas.


Contents
   1     Concept
   2     Storyboard
   2.1   Final Storyboard
   2.2   Alternatives
   2.3   Evolution
   2.4   Thoughts
   3     Screen-transition-diagram
   3.1   Final screen-transition-diagram
   3.2   Evolution and Alternatives
   4     Iterations
   4.1   Paper iteration 1
         4.1.1   Goals
         4.1.2   Instruments, results and conclusions
   4.2   Paper iteration 2
         4.2.1   Goals
         4.2.2   Instruments, results and conclusions
   4.3   Paper iteration 3
         4.3.1   Goals
         4.3.2   Instruments, results and conclusions
   4.4   Digital iteration 1: 09/04 - 24/04
         4.4.1   Goals
         4.4.2   Instruments
         4.4.3   Results and observations
         4.4.4   Conclusion
   4.5   Digital iteration 2: 25/04 - 03/05
         4.5.1   Goals
         4.5.2   Instruments
         4.5.3   Results and observations
         4.5.4   Conclusion
   4.6   Digital iteration 3: 04/05 - 09/05
         4.6.1   Goals
         4.6.2   Instruments
         4.6.3   Results and observations
         4.6.4   Conclusion
   4.7   Digital iteration 4: 10/05 - 23/05
         4.7.1   Goals
         4.7.2   Instruments
         4.7.3   Results and observations
         4.7.4   Conclusion
   5     Result





   6     Conclusion
   7     Appendix
   7.1   Answers to questionnaires
   7.2   Usage statistics
   7.3   Amount of viewers per day
   7.4   Time spent






1 Concept
What is Dataminer? We have developed an application called “Dataminer”. Dataminer is
a Facebook application in which the user is represented by an avatar and mines minerals
containing news articles. The hardness of a mineral represents the value of the news article
behind it (e.g. a diamond contains a more valuable news article than a talc mineral).
Every news article belongs to one or more categories (e.g. sport, politics, etc.). The value
of a news article is determined by the number of friends who liked articles from the categories
the article belongs to; to make this possible, a player can rate an article (like/dislike/meh)
after mining it.
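The value computation is only described in prose above; the following is a minimal sketch of how such a score could be derived and mapped to a mineral. All names and the bucketing scheme are our own assumptions, not the application's actual code:

```python
# Hypothetical sketch of Dataminer's scoring idea: count how many friends
# liked an article's categories, then bucket that score into a mineral tier.
MINERALS = ["talc", "quartz", "topaz", "diamond"]  # softest to hardest

def article_value(article_categories, friend_likes):
    """friend_likes maps each friend to the set of categories they liked."""
    return sum(
        1
        for liked in friend_likes.values()
        for category in article_categories
        if category in liked
    )

def mineral_for(value, max_value):
    """Bucket a raw score into one of the mineral tiers."""
    if max_value <= 0:
        return MINERALS[0]
    tier = min(len(MINERALS) - 1, value * len(MINERALS) // (max_value + 1))
    return MINERALS[tier]
```

For example, under this sketch an article in the “sport” category liked by two friends scores 2 and, with a maximum score of 3 among current articles, would land in the “topaz” tier.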


Why Dataminer? We think Dataminer offers a fun and new way to explore the news. The
user can choose to mine articles liked by his friends (hard minerals) or deviate and mine
articles disliked by his friends. Dataminer also offers a reward system (badges and high
scores) to keep the user interested.


How does Dataminer relate to other applications? Dataminer can be compared to
other mining games such as Dig It¹, but the only application we found that comes close
to our idea is HearSay², which is still being developed. HearSay will also let people rate
articles, show them which articles their friends liked, and use a badge system. The
difference is that HearSay does not use a “mining” concept.


Which alternatives have we considered?                   We also considered making a more generic
social news application, which would allow users to rate and comment on news articles gathered
from different sources. We decided to go with Dataminer instead because we believed it to be
more exciting to work on and more original.


Strong suits/weak suits? Dataminer’s biggest flaw is also its greatest strength: it is an
original idea and, by extension, a bit of a gamble. Whether people like digging for their news,
or would rather just read it without all the hassle, remains to be seen.




  1   http://appadvice.com/appnn/2009/08/review-i-dig-it/
  2   http://www.newsgaming.de/2010/10/a-social-news-game/




2 Storyboard

2.1 Final Storyboard




             Fig. 1: Game screen                                    Fig. 2: Article screen

    To play Dataminer, the user logs in to Facebook and selects the Dataminer application.
The user enters the game screen (Figure 1), where he can move across the different layers in
search of new minerals. When a mineral is found, an article from a category as valuable as the
mineral is shown (Figure 2). The user can like, dislike or meh the article, which changes the
value of that category.




            Fig. 3: Statistic screen                                Fig. 4: Badges screen

    The value of the different categories can be viewed by opening the statistics screen (Figure
3). The user sees how much he likes the different categories, and the combined interests of the
user and his friends are shown as well. By pressing a category, the user can view more refined
statistics for it. To motivate users to return again and again to




our application, a reward system has been built. When selecting the badge icon from the menu,
the user gets an overview of all his achievements so far (Figure 4). By hovering over a badge,
the user gets information about that reward.


2.2 Alternatives
The friend menu from an earlier storyboard did not make the final release because of a lack of
time. When a user selects the friend symbol, he now gets a Facebook invitation screen instead
of a friend overview page.


2.3 Evolution
Through the different iterations the mini-game has changed quite a lot, from an initial Super
Mario idea to the current game. Early on we switched to a simplified concept for fear of
implementation difficulties, but later we changed back to a less safe concept to broaden our
horizon. The older storyboards were rough and unrefined, meant only to get a quick idea
across; more recent storyboards added detail to fill in the final gaps.


2.4 Thoughts
A storyboard is a very powerful tool to visualize the different screens of an application. It makes
an idea more tangible for the members of the group: initial differences of opinion surface
quickly, and the team can align on the same vision early on. Still, there are some disadvantages.
Once a storyboard is made, it becomes really difficult to deviate from it, which is bad if the
placement of certain components turns out to be poor and the developer does not want to return
to the drawing board. Another disadvantage is that not everything can be expressed with a
storyboard: the workflow between the different screens, for example, is really difficult to
capture without additional text or a screen-transition diagram, and a highly interactive
application (such as a fully fledged game) would be hard to simulate with a storyboard.






3 Screen-transition-diagram

3.1 Final screen-transition-diagram



[Diagram: Game is the central screen. The side-menu, available at all times, contains Help,
Badges, Statistics and Add Friends; Statistics leads to Detailed Statistics when a category is
selected, and Add Friends sends an invitation request. Moving left/right/up/down onto a mineral
opens the Article screen, which returns to Game on like/dislike/meh/close; refresh regenerates
the mine.]

                         Fig. 5: Screen-transition-diagram

    Figure 5 shows the screen-transition diagram of our last iteration. Central to our application
is the game element: when starting the application, the user enters this state. He is able to move
around either with a single mouse click or with the arrow keys. When pushing the refresh button,
the underground is regenerated and the user can start fresh. If the miner touches a mineral, the
user enters the article state. Here he can read the abstract of an article and, if desired, the full
article. The user can then rate the article or close the article screen.
    At all times the user is able to use the side-menu. When pushing the help button, the user
gets an explanation of the application and a legend of the minerals. When desired, the user can
check his badges by selecting the badge button, and some valuable statistics about the user’s
interests can be checked as well. By selecting a certain category, the user is guided to detailed
statistics about its subcategories. From the menu it is also possible to select the friend button,
which shows a default Facebook screen; after selecting the friends he wants to invite, the user
returns to the previously visited page. At all times the user can close a screen, returning him to
the game screen.
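The transitions just described can equivalently be written as a lookup table. A sketch (screen and event names follow the diagram; the representation itself is our own, not part of the application):

```python
# Screen-transition table for Dataminer, reconstructed from Figure 5:
# each (current screen, event) pair maps to the next screen.
TRANSITIONS = {
    ("Game", "help"): "Help",
    ("Game", "badges"): "Badges",
    ("Game", "statistics"): "Statistics",
    ("Game", "friends"): "Add Friends",
    ("Game", "refresh"): "Game",               # regenerate the mine
    ("Game", "move onto mineral"): "Article",  # left/right/up/down movement
    ("Article", "like"): "Game",
    ("Article", "dislike"): "Game",
    ("Article", "meh"): "Game",
    ("Statistics", "category"): "Detailed Statistics",
    ("Add Friends", "send request"): "Game",
}

def next_screen(screen, event):
    """'close' always returns to the game screen; unknown events keep
    the current screen."""
    if event == "close":
        return "Game"
    return TRANSITIONS.get((screen, event), screen)
```

Writing the diagram down as a table like this also makes the “close always returns to Game” rule explicit, which the diagram itself can only show as many separate arrows.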

3.2 Evolution and Alternatives
In the previous diagram we wanted to show an explanation pop-up with the adjustments in
statistics every time an article was rated or closed. Because this could become rather annoying,
it was dropped from the final iteration.
    In this iteration we also had to let go of the friend menu because of a lack of time. This menu
was meant for social interaction between the different users and would have emphasized the
social aspect of the news game.






4 Iterations
In the paper iterations we did not link our conclusions one-to-one to our goals. Afterwards we
learned that we should have done this; however, we want to show the data as we used it to draw
our conclusions, so we will not indicate which conclusion matches which goal. In the digital
iterations we did link our conclusions to our goals, and there we always use matching numbers
to indicate which conclusion matches which goal.


4.1 Paper iteration 1
When we made our first paper prototype, the concept of the game was still different from what
it would eventually become. Concerning the interface, however, there are many similarities, so
the results of this iteration were still used in further iterations.


4.1.1 Goals

Due to the experimental nature of paper prototypes, we decided that for a first prototype any
idea that seemed good would be used and tested. As a result, the goals for testing were rather
general. We wanted to see if:

    • those ’first ideas’ were indeed good enough for users to use them in the intended way

    • the concept had any potential.


4.1.2 Instruments, results and conclusions

We tested the prototype with only three classmates, but still managed to get some results. These
are the most important conclusions:

    • about the interface:

           – Navigating around in the game was unclear: people tried many different ways (such
              as drag-and-drop), while we wanted them to click on a mining shaft to move there.

           – The fact that a user could earn money made testers believe they would also be able
              to buy things, which was not the case.

           – The use of tabs confused people; they were looking for a button to close windows.

    • about the concept:

           – This was met with a lot of negative criticism. People found it unclear what the goal
              and motivation for playing were. The concept also neglected any “value” of news
              items, which made it look like it shouldn’t include news at all.






4.2 Paper iteration 2
For our second paper prototype, we revisited our concept and arrived at our final concept.
Figure 6 shows what this paper prototype looked like. The new concept solved a few of the
problems with the previous interface by itself:

      • We made the mining more interactive, giving the user several different options to navigate
        around the mine.

      • We abandoned the scoring system, so the money problem was gone.

      • The issue with the tabs remained. We decided not to change this yet, because we felt it
        might be an issue only due to the nature of paper prototypes.




                                Fig. 6: Second paper prototype



4.2.1 Goals

Once again, we didn’t have very specific goals. We mainly wanted to:

      • check if the new concept was more attractive than the old one

      • see which problems users experienced with the updated interface






4.2.2 Instruments, results and conclusions

We performed tests with five computer science students in their early twenties. Although this is
not our target audience, they were the easiest group for us to access for testing. These were the
results:

      • About the concept:

           – The concept was still vague when people first started.

           – People wanted to be able to exclude friends from being taken into account when
              calculating the value of articles.

      • About the interface:

           – It was not clear what the button to change settings of the statistics page meant.

           – It was unclear what rating an article did; the user got redirected to the game without
              any further information.

           – We provided an option to change the avatar of the miner, but people did not easily
              find it.






4.3 Paper iteration 3
With the results of the previous iteration in mind, we made the following changes:

      • We added an introduction screen, to be displayed the first time a user starts the game, with
        a brief explanation.

      • We added a button in the friends list to be able to turn friends on and off.

      • We changed the settings button to have a traditional settings icon.

      • We added a screen that, after rating an article, showed its impact on the statistics.

      • We did not change the avatar option, but rather decided to keep it as a bonus for more
        experienced players.

Additionally, we abandoned the tabs and made close buttons for all but the main application
screen. The resulting prototype can be seen in figure 7.




                                  Fig. 7: Third paper prototype



4.3.1 Goals

We wanted to test if:

      • the performed changes had had their intended effect

      • there were any other, as yet unknown, problems






4.3.2 Instruments, results and conclusions

This iteration was tested with housemates of the team members, which gave us our first input
from non-computer scientists. A total of 8 people tested the prototype, with ages ranging from
18 to 23 and with mixed computer and Facebook experience. These were the most important
conclusions:

      • changes:

            – The text in the introduction screen was not clear enough.

            – The button to indicate which friends are taken into account for the value of articles
              – a green/red “stoplight” – was causing confusion. It had to be explicitly explained
              before people understood its purpose.

            – One person thought the closing button for a sub-screen would close the entire ap-
              plication.

      • further issues:

            – The button to rate an article neutral was not clear.

            – Using the terms like/dislike for rating articles caused wrong usage: interesting arti-
              cles about bad events were rated negatively, while for our system they should be
              rated positively.

            – Some people wanted more specific statistics available.






4.4 Digital iteration 1: 09/04 - 24/04
In this first digital iteration we stripped our application down to the bare essentials: the mining
and rating of articles. The point was to release as soon as possible, a baptism by fire. More
specifically, we used dummy sprites to represent the miner, left out the sidebar and used a fixed
set of dummy articles instead of real ones. This way we were able to test the absolute core of
our application. Unfortunately we have no screenshots of how the application looked in this
first iteration.


4.4.1 Goals

In this iteration we expected to find out:
      1. whether the user could easily navigate the miner (by clicking somewhere on the map, or
         using the keyboard);
      2. whether users mined and rated articles (How many articles does a user mine? Do they
         mine articles at all, or just avoid the minerals? Do users just click dislike all the time?);
      3. what the overall feeling of the users was when using our application (Is it fun? Were
         they appealed by the idea, or rather appalled?).


4.4.2 Instruments

We first tested our application offline on 6 users, randomly chosen people we know. Afterwards
we released the application on Facebook. None of the users had to go through a scenario: since
we had no control over who tested our application, we did not put constraints on this. We
benchmarked our goals as follows (numbers coincide with the goal numbers):

      1. We expected users to navigate between articles using the mouse and/or keyboard. To
         track this, we measured the average number of mouse clicks and keystrokes between
         articles. We expected an average of two mouse clicks between articles and an average
         of 25 keystrokes. We chose these numbers because it is possible to reach an article in 1
         mouse click and the average distance between articles is 15 spaces. We give users a 70%
         error margin and round up.
      2. We expected an even distribution among the three possible ratings. To measure this we
         kept track of how many times a user clicked on like/meh/dislike after mining an article.
      3. To test the overall feeling the users had when using the application, we used a question-
         naire based on the CSUQ test. Scores on the questionnaire range from 1 to 5. We want
         50% of the users to rate 4 or 5 on questions related to whether they like the application.
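Benchmarks 2 and 3 above can be made concrete as small checks. A sketch (function names and the exact form of the criteria are our own assumptions):

```python
from collections import Counter

def rating_shares(ratings):
    """Goal 2: the share of each rating among all ratings given after mining."""
    counts = Counter(ratings)
    total = len(ratings)
    return {r: counts[r] / total for r in ("like", "meh", "dislike")}

def satisfaction_met(scores, threshold=0.5):
    """Goal 3: True if at least `threshold` of users rate 4 or 5 out of 5."""
    return sum(1 for s in scores if s >= 4) / len(scores) >= threshold
```

For instance, questionnaire scores of [5, 4, 2, 1] give a satisfied share of exactly 50%, which just meets the goal-3 threshold.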


4.4.3 Results and observations

Results for the goals are listed below, with numbers coinciding with the previous sections:
      1. Results show an average of 2.45 mouse clicks, as is evident from the graph in figure 8.
         The keyboard was severely underused, so it is hard to speak of an average number of
         keystrokes.




      2. The number of times users clicked one of the buttons displayed when rating an article
         (like/meh/dislike) is shown by the graphs in figure 9. The ratings seem to be evenly
         spread. We did notice that more users added the application on Facebook than mined at
         least one article, so not all users mined an article.

      3. General feedback and the questionnaire showed us that:
         - The users weren’t pleased with the graphics.
         - Users asked for a legend of the different minerals.
         + Most users were moderately satisfied with the application and hope to see more soon.
         +/- The opinions about the concept are still divided.
         We would like to refer to appendix 7.1 for a view on how the application scored on all the
         different questions and a more detailed view of the user input.

    Although this is not linked to one of our goals, we did notice something strange: some users
allowed access to our application but didn’t use it afterwards. Later we found out this was a bug
in our tracking system: the Safari browser didn’t submit data to our database. As this only
affects a small group of users, fixing it isn’t one of our priorities.




        Fig. 8: Amount of clicks and keystrokes each user needs to reach an article




                       (a) Absolute amounts per user                (b) Overview of the totals

                       Fig. 9: What users selected after reading an article






4.4.4 Conclusion

We had a small number of test subjects, so we will continue tracking what we described in
section 4.4.2. We will only mention these metrics in the following iterations if the findings
there vary from our conclusions here. The conclusions for each observation can be found below
(numbers coincide with previous sections):

      1. We feel an average of 2.45 mouse clicks is acceptable. The fact that the keyboard was
         underused isn’t really problematic; we won’t remove the keyboard controls because we
         still consider them a nice extra.

      2. The ratings seem rather evenly distributed, but no hard conclusions can be drawn with
         only 9 unique users on Facebook. Since users are able to rate, and thus mine, articles, we
         think it isn’t necessary to change the interface.

      3. People were generally displeased with the look of the application, which didn’t really
         come as a surprise as we hadn’t put effort into the graphics yet. We will improve them.






4.5 Digital iteration 2: 25/04 - 03/05
In this iteration we released new functionality and improved the looks of the application to try
to solve the problems we saw in iteration 1. Unfortunately we have no picture of how the
application looked in this iteration.


4.5.1 Goals
      1. The number of users in the previous iteration was problematic. We will try to attract
         more users by improving the graphics (replacing the dummy sprite, adding better graphics
         for minerals, etc.) and by adding a badge reward system. The combination of both will
         hopefully attract more users.

      2. Some users commented in the previous iteration that they didn’t understand which
         mineral stands for which article. We will try to solve this by adding a legend.

4.5.2 Instruments
We again released on Facebook. We hope that improving the application will get us enough
users, so we won’t use additional ways of gathering users. We benchmarked our goals as follows
(numbers coincide with the goal numbers):

      1. We will simply measure success by counting the number of users that used our applica-
         tion. We would like at least 25 users for this iteration.

      2. We will monitor whether or not there is improvement by using the same questionnaire
         from the previous iteration. We want 50% of the users to rate 4 or 5 out of 5 on how
         easy it is to find information.

4.5.3 Results and observations
Results for the goals are listed below with numbers coinciding with the previous sections:

      1. For this iteration we again had a total of 9 unique users.

      2. 40% of the users rated 4 or 5 out of 5 for how easy it is to find information.




Fig. 10: Comparison between how easy users found information in iterations 1 and 2






4.5.4 Conclusion
The numbers of our conclusions coincide with previous sections:

      1. Nine users indicates no improvement whatsoever. We will have to find a better way to
         attract more users.

      2. The achieved 40% is not the 50% we wanted, but it is good enough. The other users
         rated the help function very low; in other words, opinions about our new help function
         are divided between extremes. We would like to know the concerns of the people who
         gave it a low score. In any case, as indicated by the people who did like the new help
         function, we managed to help at least a good portion of our users, and in that respect we
         consider it an improvement. Further improving the help function is a possibility for a
         next iteration, but certainly not a top priority; for now we will leave it as it is.






4.6 Digital iteration 3: 04/05 - 09/05
In this iteration we again tried to attract more users by expanding on the existing application.
Figure 11 shows how the application looked.




                      Fig. 11: How the application looked in iteration 3



4.6.1 Goals

      1. We will try to attract more users by further improving the graphics and by giving users
         insight into how we provide their articles, showing them statistics. This idea came from
         the results of Section 4.3.2.

      2. We will try to make users stay longer. The improvements for goal 1 should help here as well.


4.6.2 Instruments

We will release another version of our application on Facebook. To help find users, we also
asked professor Erik Duval to tweet about our application, in addition to the other ways of
recruiting people we were already using (word-of-mouth advertising, posting on Facebook, etc.).
We benchmarked our goals as follows (numbers coincide with the goal numbers):

      1. We will measure success by measuring the number of users. We expect at least 25 users.

      2. We will use Google Analytics to monitor how long users stay on our site. We want them
         to stay 2 minutes on average on our application, which we feel is the time it takes to read
         approximately 2 articles.
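The average-time-on-site figure itself came from Google Analytics, but the same number can be
sketched from raw visit logs. A minimal illustration, assuming we log a start and an end
timestamp per visit (the log layout and timestamps below are invented for the example, not our
actual data):

```python
from datetime import datetime

# Hypothetical visit log: (session start, session end) per visit.
visits = [
    ("2011-05-04 10:00:00", "2011-05-04 10:03:30"),
    ("2011-05-05 14:20:00", "2011-05-05 14:21:10"),
    ("2011-05-06 09:00:00", "2011-05-06 09:12:00"),
]

FMT = "%Y-%m-%d %H:%M:%S"

def average_minutes(log):
    """Average visit duration in minutes across all logged sessions."""
    seconds = [
        (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()
        for start, end in log
    ]
    return sum(seconds) / len(seconds) / 60

print(round(average_minutes(visits), 2))
```

The 2-minute benchmark would then simply be `average_minutes(visits) >= 2`.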






4.6.3 Results and observations

Results for the goals are listed below with numbers coinciding with the previous sections:

      1. For this iteration we had a total of 7 unique users.

      2. Users stayed an average of 7.50 minutes on our application. This varied a lot per day, as
         seen in figure 12.




                     Fig. 12: Average time users spend on our site per day



4.6.4 Conclusion

Again, the numbers of our conclusions coincide with the numbers above:

      1. 7 users is in fact a decline from our previous iteration. Even though this was a shorter
         iteration, we will have to drastically increase this number in our next iteration. We will
         do this by making it possible for users to invite their friends.

      2. We were very pleased with the result of 7.50 minutes. The users who do use our
         application certainly like it.






4.7 Digital iteration 4: 10/05 - 23/05
In this iteration we tried again to attract more users for our application, this time by letting our
users invite their friends. The layout of our application didn't change compared to the previous
release; we only made the add-friend button work. This can be seen in figure 13.




                      Fig. 13: How the application looked in iteration 4



4.7.1 Goals

      1. We will add functionality to invite friends. This will hopefully provide us with more users.
      2. Due to the low number of users in the previous iteration, we will check whether problems
         occurred with the interface. We want to be certain users can mine and rate articles. A
         problem here could be a cause of the low number of users.


4.7.2 Instruments

Again we release our new version on Facebook. We benchmarked our goals as follows (numbers
coincide with the goal numbers):
      1. We will measure the number of users; we expect at least 25 users.
      2. We will again measure the average number of mouse clicks, the average number of
         keystrokes and the number of times a user clicked like/dislike/meh/close. We will use the
         same benchmark as mentioned in Section 4.4.2 and will monitor whether there is any
         significant change.
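The kind of aggregation behind these measurements can be sketched roughly as follows; the
event names and log layout are invented for the example and do not reflect our actual
instrumentation:

```python
from collections import Counter

# Hypothetical interaction log: one (user, event) pair per logged action.
events = [
    ("alice", "click"), ("alice", "click"), ("alice", "keystroke"),
    ("alice", "like"),
    ("bob", "click"), ("bob", "keystroke"), ("bob", "keystroke"),
    ("bob", "dislike"), ("bob", "meh"),
]

def summarize(log):
    """Per-user counts of clicks/keystrokes plus overall rating totals."""
    per_user = {}
    ratings = Counter()
    for user, event in log:
        if event in ("like", "dislike", "meh", "close"):
            ratings[event] += 1
        else:
            per_user.setdefault(user, Counter())[event] += 1
    return per_user, ratings

per_user, ratings = summarize(events)
print(dict(per_user["alice"]), dict(ratings))
```

Averaging the per-user counts then gives the kind of numbers shown in figures 14 and 15.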


4.7.3 Results and observations

Results for the goals are listed below with numbers coinciding with the previous sections:
      1. We finally managed to achieve our goal. We had 29 users using our application.
      2. The basic controls are working nicely. All users manage to read and rate articles, and the
         ratings are evenly spread. They do use a few more clicks, but all of them manage to read
         articles. These results can be found in figures 14 and 15. The user with a large number of
         clicks is the same one as in iteration 3. He probably just likes walking through the
         underground without reading articles.




 Fig. 14: Average amount of clicks and keystrokes each user needs to reach an article




                      (a) Absolute amounts per user               (b) Overview of the totals

                     Fig. 15: What users selected after reading an article



4.7.4 Conclusion

We are pleased with the results of this iteration. Everything worked as we wanted. Since we
achieved our pre-set goals, no changes are required.
      There do seem to be problems left, but these are not related to our goals. If we look at the
questionnaires, users don't find our application effective for finding articles, and they don't
become more interested in the news. This is partially because the functionality to select articles
isn't completely working as planned. We didn't invest time in this earlier on because there were
more urgent matters to solve. If we had more time, we could also try to improve those parts of
the application in a new iteration.






5 Result
The resulting application somewhat differs from our initial idea. We didn't get to add the friend
functionality in the way we really wanted, or even to implement the actual relation between the
minerals and the ratings given by users, which seemed like “core functionality” at the time.
Instead we added what users wanted, and we are quite pleased with the end result. We ended up
with 29 users in the final iteration who all played our game for a decent amount of time, out of a
total of 49 users over the whole project, excluding ourselves. Some of these people are now
starting to ask for more: more badges, the addition of high scores, sound effects, etc. It didn't
really occur to us that some people might actually enjoy mining articles without an added
agenda. These people seem to enjoy clicking on articles and liking or disliking them even if this
doesn't really contribute to anything (cf. the anecdote mentioned in class about the application
where users were clicking on a rubber ducky to make it take a dive). They seem to like the
simple pleasure of earning badges and chopping away at nothing in particular. Even though we
might not have achieved our ultimate goal (making people more interested in the news), we did
succeed in creating a small game that offered these people a simple (and short) respite from their
daily lives. Between all the statistics and questionnaires, and after all is said and done, that
seems like a victory in itself.


6 Conclusion
We are quite satisfied with our Dataminer application. A continuous problem was the low
number of users: during all of our iterations (except the last) we had just a small number of
users, and we should have tried more aggressive techniques to attract users from the start.
Working on Dataminer has taught us the importance of using an iterative process when
developing a user interface. Starting with the bare-bones essentials and adding features based on
user requests, and not just on what you have planned, seems like the ideal way to work on an
application (however time-consuming it may be). This course has also taught us the importance
of a decent user interface (in which “decent” is judged by the users, not the creators) and how
small changes (like adding a button to invite friends) can have a major impact, while big changes
(adding graphs and badges) might not. One of our fellow students commented that by blogging
about our graphics we might have set expectations a little too high. This seems like a valid point.






7 Appendix

7.1 Answers to questionnaires
Below are the users' answers to our questionnaires. In the first iteration, 10 people answered the
questionnaire; in the second iteration, 5 people filled it in. In the third iteration only 2 people
answered, and in the fourth iteration 11 people answered. We didn't draw any conclusions from
the questionnaire in the third iteration, and since not much changed between the third and fourth
iteration, we will take these 13 people together as one group.
    In the graphs we want to show the evolution of what people think about our program. To be
able to compare the different iterations, we have chosen to show what percentage of the users
gave a certain answer, instead of absolute numbers. Each graph shows how many users
answered 1 (worst rating), 2, 3, 4 or 5 (best rating) on our questions. Iteration 1 is shown in red,
iteration 2 in green and iteration 3 in blue. The graphs can be found in figures 16 and 17.
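The normalization step described above (converting answer counts to percentages so that
iterations with different numbers of respondents can be compared) can be sketched as follows;
the counts are illustrative, not the actual survey data:

```python
# Hypothetical answer counts per iteration: how many users gave
# each rating 1-5 (illustrative numbers, not the real questionnaires).
counts = {
    "iteration 1": [1, 2, 3, 3, 1],    # 10 respondents
    "iteration 2": [0, 1, 2, 1, 1],    # 5 respondents
    "iterations 3+4": [2, 2, 4, 3, 2], # 13 respondents
}

def to_percentages(answer_counts):
    """Convert absolute answer counts to percentages of all respondents,
    so groups of different sizes become comparable."""
    total = sum(answer_counts)
    return [round(100 * c / total, 1) for c in answer_counts]

for name, c in counts.items():
    print(name, to_percentages(c))
```

Each printed list sums to (approximately) 100, regardless of how many people answered.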


7.2 Usage statistics
We have monitored how many clicks and keystrokes each user needs to mine an article. We also
monitored which option they selected after reading the article. In the figures below you can find
this for each iteration.
    In iteration 2 we left out a user who constantly clicked dislike. He read lots of articles, but
disliked all of them. This made the percentage of dislikes seem very high, even though it was
caused by only one user.
    All the graphs can be found in figures 18, 19, 20, 21, 22, 23, 24 and 25.












                     Fig. 16: Answers to our questionnaires part 1








                     Fig. 17: Answers to our questionnaires part 2




Fig. 18: Amount of clicks and keystrokes users need to reach an article in iteration 1




                   (a) Absolute amounts per user            (b) Overview of the totals


           Fig. 19: What users selected after reading an article in iteration 1




Fig. 20: Amount of clicks and keystrokes users need to reach an article in iteration 2








                   (a) Absolute amounts per user            (b) Overview of the totals


           Fig. 21: What users selected after reading an article in iteration 2




Fig. 22: Amount of clicks and keystrokes users need to reach an article in iteration 3




                   (a) Absolute amounts per user            (b) Overview of the totals


           Fig. 23: What users selected after reading an article in iteration 3








Fig. 24: Amount of clicks and keystrokes users need to reach an article in iteration 4




                   (a) Absolute amounts per user            (b) Overview of the totals


           Fig. 25: What users selected after reading an article in iteration 4






7.3 Amount of viewers per day
Since 4 May, we have been tracking how many unique users visit our application per day. This
can be seen in Figure 26.




                      Fig. 26: Amount of unique users per day from 04/05 till 23/05
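The per-day unique-user counts behind Figure 26 came from our own tracking. As a rough
sketch of the underlying computation (the raw log layout here is invented for illustration),
deduplicating users per date is all that is needed:

```python
# Hypothetical raw visit log: (date, user) per page load. A user who
# opens the application several times on one day counts only once.
visits = [
    ("2011-05-04", "alice"), ("2011-05-04", "alice"), ("2011-05-04", "bob"),
    ("2011-05-05", "bob"),
    ("2011-05-06", "alice"), ("2011-05-06", "carol"), ("2011-05-06", "carol"),
]

def unique_users_per_day(log):
    """Map each date to the number of distinct users seen that day."""
    per_day = {}
    for date, user in log:
        per_day.setdefault(date, set()).add(user)
    return {date: len(users) for date, users in sorted(per_day.items())}

print(unique_users_per_day(visits))
```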



7.4 Time spent

                                    Sam Agten   Bart Gerard   Tanguy Monheim   Steven Vermeeren   Total
 Refining concept                          20            20               20                 20      80
 Storyboard                                10            15               10                 10      45
 Screen-transition-diagram                  0             2                2                  0       4
 Implementation paper prototypes            5             5               12                  5      27
 Evaluation paper prototypes                5             4                2                  5      16
 Implementation digital iteration 1         1             5               20                 35      61
 Evaluation digital iteration 1             1             2                2                  1       6
 Implementation digital iteration 2        20            10               40                 15      85
 Evaluation digital iteration 2             1             2                2                  2       7
 Implementation digital iteration 3         5            10               10                  5      30
 Evaluation digital iteration 3             1             2                2                  2       7
 Implementation digital iteration 4         1             2                1                  1       5
 Evaluation digital iteration 4             1             1                2                  1       5
 Writing Reports                           10            10               15                  8      43
 Listening to presentations                20            20               20                 20      80
 Total                                    101           110              160                130     501

                                          Tab. 1: Time investment





Más contenido relacionado

La actualidad más candente

Security concepts
Security conceptsSecurity concepts
Security concepts
Deepak Raj
 
bkremer-report-final
bkremer-report-finalbkremer-report-final
bkremer-report-final
Ben Kremer
 
Data Protection Iin The EU
Data Protection Iin The EUData Protection Iin The EU
Data Protection Iin The EU
Thomas Müller
 
A design and implementation guide for tivoli decision support sg245499
A design and implementation guide for tivoli decision support sg245499A design and implementation guide for tivoli decision support sg245499
A design and implementation guide for tivoli decision support sg245499
Banking at Ho Chi Minh city
 

La actualidad más candente (17)

Manual Civil 3d Ingles
Manual Civil 3d InglesManual Civil 3d Ingles
Manual Civil 3d Ingles
 
Arcanoid 3D
Arcanoid 3DArcanoid 3D
Arcanoid 3D
 
Mmi summary of hci book by aln dix
Mmi summary of hci book by aln dixMmi summary of hci book by aln dix
Mmi summary of hci book by aln dix
 
Security concepts
Security conceptsSecurity concepts
Security concepts
 
Kids media-the-new-millennium-report
Kids media-the-new-millennium-reportKids media-the-new-millennium-report
Kids media-the-new-millennium-report
 
Group Violence Intervention: Implementation Guide
Group Violence Intervention: Implementation GuideGroup Violence Intervention: Implementation Guide
Group Violence Intervention: Implementation Guide
 
Thesis_Report
Thesis_ReportThesis_Report
Thesis_Report
 
MSc_Thesis
MSc_ThesisMSc_Thesis
MSc_Thesis
 
Making Better Decisions Using IBM WebSphere Operational Decision Management
Making Better Decisions Using IBM WebSphere Operational Decision ManagementMaking Better Decisions Using IBM WebSphere Operational Decision Management
Making Better Decisions Using IBM WebSphere Operational Decision Management
 
bkremer-report-final
bkremer-report-finalbkremer-report-final
bkremer-report-final
 
Recommender Engines Seminar Paper
Recommender Engines Seminar PaperRecommender Engines Seminar Paper
Recommender Engines Seminar Paper
 
Real-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow RenderingReal-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow Rendering
 
tutorial poser
tutorial posertutorial poser
tutorial poser
 
Green Asset Management Toolkit: for Multifamily Housing
Green Asset Management Toolkit: for Multifamily HousingGreen Asset Management Toolkit: for Multifamily Housing
Green Asset Management Toolkit: for Multifamily Housing
 
Data Protection Iin The EU
Data Protection Iin The EUData Protection Iin The EU
Data Protection Iin The EU
 
A design and implementation guide for tivoli decision support sg245499
A design and implementation guide for tivoli decision support sg245499A design and implementation guide for tivoli decision support sg245499
A design and implementation guide for tivoli decision support sg245499
 
Samsung Smart TV Cinema Application UX Guideline
Samsung Smart TV Cinema Application UX GuidelineSamsung Smart TV Cinema Application UX Guideline
Samsung Smart TV Cinema Application UX Guideline
 

Similar a Anarchi report

Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)
Priyanka Kapoor
 
QBD_1464843125535 - Copy
QBD_1464843125535 - CopyQBD_1464843125535 - Copy
QBD_1464843125535 - Copy
Bhavesh Jangale
 
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Jason Cheung
 

Similar a Anarchi report (20)

Where tonight mobile application.pdf
Where tonight  mobile application.pdfWhere tonight  mobile application.pdf
Where tonight mobile application.pdf
 
Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)
 
Master_Thesis
Master_ThesisMaster_Thesis
Master_Thesis
 
QBD_1464843125535 - Copy
QBD_1464843125535 - CopyQBD_1464843125535 - Copy
QBD_1464843125535 - Copy
 
SW605F15_DeployManageGiraf
SW605F15_DeployManageGirafSW605F15_DeployManageGiraf
SW605F15_DeployManageGiraf
 
DM_DanielDias_2020_MEI.pdf
DM_DanielDias_2020_MEI.pdfDM_DanielDias_2020_MEI.pdf
DM_DanielDias_2020_MEI.pdf
 
Work Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel BelaskerWork Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel Belasker
 
A.R.C. Usability Evaluation
A.R.C. Usability EvaluationA.R.C. Usability Evaluation
A.R.C. Usability Evaluation
 
report
reportreport
report
 
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
 
Fraser_William
Fraser_WilliamFraser_William
Fraser_William
 
main
mainmain
main
 
Design and implementation of a Virtual Reality application for Computational ...
Design and implementation of a Virtual Reality application for Computational ...Design and implementation of a Virtual Reality application for Computational ...
Design and implementation of a Virtual Reality application for Computational ...
 
Live chat srs
Live chat srsLive chat srs
Live chat srs
 
Vekony & Korneliussen (2016)
Vekony & Korneliussen (2016)Vekony & Korneliussen (2016)
Vekony & Korneliussen (2016)
 
Data over dab
Data over dabData over dab
Data over dab
 
document
documentdocument
document
 
digiinfo website project report
digiinfo website project reportdigiinfo website project report
digiinfo website project report
 
E.M._Poot
E.M._PootE.M._Poot
E.M._Poot
 
Specification of the Linked Media Layer
Specification of the Linked Media LayerSpecification of the Linked Media Layer
Specification of the Linked Media Layer
 

Último

Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
MateoGardella
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
heathfieldcps1
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
SanaAli374401
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
QucHHunhnh
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
PECB
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 

Último (20)

microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 

Anarchi report

  • 1. Human-Computer Interaction: final report Group 1: AnarCHI http://anarchi11.blogspot.com/ http://apps.facebook.com/datamineralpha/ Sam Agten, 1st master computer sciences Bart Gerard, 1st master computer sciences Tanguy Monheim, 1st master computer sciences Steven Vermeeren, 1st master computer sciences Abstract This is the final report for the course of human-computer interaction by group 1. We have developed an application about social news called “Dataminer”. The most important challenges were finding a decent concept, attracting enough users and creating a GUI that is easy to use. From this project we have learned the importance of using an iterative process while developing a user interface and the importance of listening to user feedback rather than just pursuing our own ideas.
  • 2. Contents 2 Contents 1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2 Storyboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.1 Final Storyboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.3 Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.4 Thoughts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3 Screen-transition-diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.1 Final screen-transition-diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2 Evolution and Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.1 Paper iteration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.1.2 Instruments, results and conclusions . . . . . . . . . . . . . . . . . . . . 8 4.2 Paper iteration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 4.2.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 4.2.2 Instruments, results and conclusions . . . . . . . . . . . . . . . . . . . . 10 4.3 Paper iteration 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 4.3.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 4.3.2 Instruments, results and conclusions . . . . . . . . . . . . . . . . . . . . 12 4.4 Digital iteration 1: 09/04 - 24/04 . . . . . . . . . . . . . . . . . . . . . . . . . . 13 4.4.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 4.4.2 Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 13 4.4.3 Results and observations . . . . . . . . . . . . . . . . . . . . . . . . . . 13 4.4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 4.5 Digital iteration 2: 25/04 - 03/05 . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.5.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.5.2 Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.5.3 Results and observations . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4.5.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 4.6 Digital iteration 3: 04/05 - 09/05 . . . . . . . . . . . . . . . . . . . . . . . . . . 18 4.6.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 4.6.2 Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 4.6.3 Results and observations . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4.6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4.7 Digital iteration 4: 10/05 - 23/05 . . . . . . . . . . . . . . . . . . . . . . . . . . 20 4.7.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 4.7.2 Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 4.7.3 Results and observations . . . . . . . . . . . . . . . . . . . . . . . . . . 20 4.7.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 5 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2 of 29
  • 3. Contents 3 6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 7.1 Answers to questionnaires . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 7.2 Usage statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 7.3 Amount of viewers per day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 7.4 Time spend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3 of 29
  • 4. 4 1 Concept What is dataminer? We have developed an application called “Dataminer”. Dataminer is a facebook application in which the user is represented by an avatar and can mine minerals containing news articles. The hardness of the mineral represents the value of the news article behind it (e.g. A diamond mineral contains a more valuable news article than a talc mineral). Every news articles is grouped in one or several categories (e.g. sport, politics, etc.) The value of a news article is determined by the amount of friends who liked articles from the categories the article hails from. This means that a player has the ability to rate an article (like/dislike/meh) after mining it. Why dataminer? We think Dataminer offers a fun and new way to explore the news. The user can choose to mine articles liked by his friends (hard minerals) or has the possibility to deviate and mine articles disliked by his friends. Dataminer also offers a reward system (badges and high scores) to keep the user interested. How does dataminer relate to other applications? Dataminer could be compared to other mining games such as for example Dig It 1. But the only application we found that even comes close to our idea is the application HearSay 2 . This application is still being developed. This application will also let people rate articles and show them which articles their friends liked. It will also use a badge system. The difference is that HearSay does not use a “mining” concept. Which alternatives have we considered? We also considered making a more generic social news application, which would allow users to rate and comment on news articles gathered from different sources. We decided to go with Dataminer instead because we believed it to be more exciting to work on and more original. Strong suites/weak suites? Dataminer’s biggest flaw is also it’s greatest strength. It is an original idea and by extension, a bit of a gamble. 
Whether people like digging for their news or would rather just read it without all the hassle, remains to be seen. 1 http://appadvice.com/appnn/2009/08/review-i-dig-it/ 2 http://www.newsgaming.de/2010/10/a-social-news-game/ 4 of 29
  • 5. 5 2 Storyboard 2.1 Final Storyboard Fig. 1: Game screen Fig. 2: Article screen To play DataMiner, the user has to login on Facebook and select the DataMiner application. The user enters the game screen (Figure 1). He can move across the different layers in search of new minerals. When a mineral is found, an article from a category that is as valuable as the mineral is shown (Figure 2). The user can like, dislike or meh the article. This will result in a different value for this category. Fig. 3: Statistic screen Fig. 4: Badges screen The value of the different articles can be viewed when opening the statistics screen (Figure 3). The user sees how much he likes the different categories and also the combined interests of the user and his friends are shown. By pressing on a category, the user is able to view more refined statistics of a certain category. To motivate the different users to return again and again to 5 of 29
• 6. 2.2 Alternatives 6 our application, we built a reward system. When selecting the badge icon from the menu, the user gets an overview of all his achievements so far (Figure 4). By hovering over a badge, the user gets information about this reward. 2.2 Alternatives The friend menu from an earlier storyboard did not make the final release because of our lack of time. When a user selects the friend symbol, he now gets a Facebook invitation screen instead of a friend overview page. 2.3 Evolution Through the different iterations the mini game has changed quite a lot, from an initial Super Mario idea to the current game. Early changes to a simplified concept were made for fear of implementation difficulties, but later on we changed back to a less safe concept to broaden our horizons. The older storyboards were rough and unrefined, intended just to get a quick idea across. More recent storyboards added detail to fill in the final gaps. 2.4 Thoughts A storyboard is a very powerful tool to visualize the different screens of an application. It makes an idea more tangible for the different members of the group. Initial differences of opinion surface quickly, and team members can realign on the same vision early on. Still, there are some disadvantages. Once a storyboard is made, it becomes really difficult to deviate from it. This can be bad if the placement of certain components isn’t that good and the developer does not want to return to the drawing board. Another disadvantage is the fact that not everything can be expressed with a storyboard. For example, the workflow between the different screens is really difficult to capture without the use of additional text or a screen-transition-diagram. It would also be difficult to simulate a highly interactive application using a storyboard (such as a fully fledged game). 6 of 29
• 7. 7 3 Screen-transition-diagram 3.1 Final screen-transition-diagram [Diagram drawn in Visual Paradigm for UML, showing the states Game, Article, Menu, Help, Badges, Statistics, Detailed Statistics and Add Friends, and the transitions between them] Fig. 5: Screen-transition-diagram Figure 5 shows the screen-transition diagram of our last iteration. Central to our application is the game element. When starting up the application, the user enters this state. He is able to move around either with a single mouse click or with the arrow keys. When pushing the refresh button, the underground is regenerated and the user can start fresh. If the miner touches a mineral, the user enters the article state. Here he can read the abstract of an article and, if desired, the full article. Now the user can rate the article or close the article screen. At all times the user is able to use the side-menu. When pushing the help button, the user gets an explanation of the application and a legend of the minerals. When desired, the user can check his badges by selecting the badge button. Also some valuable statistics about the user’s interests can be checked. By selecting a certain category, the user is guided to detailed statistics about its subcategories. From the menu, it is also possible to select the friend button. A default Facebook screen is shown. After selecting the friends he wants to invite, the user returns to the previously visited page. At all times the user can close a screen, returning him to the game screen. 3.2 Evolution and Alternatives In the previous diagram we wanted to show an explanation pop-up with the adjustments in statistics every time an article was rated or closed. Because this could become rather annoying, it was dropped from the final iteration. 
In this iteration we also had to let go of the friend menu because of a lack of time. This menu was meant for the social interaction between the different users and would have put the “social” in our social news game. 7 of 29
• 8. 8 4 Iterations In the paper iterations we didn’t couple our conclusions one-to-one to our goals. Afterwards we learned we should have done this. However, we want to show the data as we used it to make our conclusions. For this reason we won’t indicate which conclusion matches which goal. In the digital iterations we did link our conclusions to our goals. There we will always use matching numbers to indicate which conclusion matches which goal. 4.1 Paper iteration 1 When we made our first paper prototype, the concept of the game was still different from what it would end up being. Concerning the interface however, there are many similarities, so the results of the iteration have still been used in further iterations. 4.1.1 Goals Due to the experimental nature of paper prototypes, we decided that for a first prototype any idea that seemed good would be used and tested. As a result, the goals for testing were rather general. We wanted to see if: • those ’first ideas’ were indeed good enough for users to use them in the intended way • the concept had any potential. 4.1.2 Instruments, results and conclusions We tested the prototype with only three classmates, but managed to get some results. The following are the most important conclusions: • about the interface: – Navigating around in the game was unclear: people tried many different ways (such as drag-and-drop), while we wanted them to click on a mining shaft to move there. – The fact that a user could earn money made them believe they would also be able to buy things, which was not the case. – The use of tabs confused people; they were looking for a button to close windows. • about the concept: – This was met with a lot of criticism. People found it unclear what the goal and motivation for playing were. The concept also neglected any “value” of news items, which made it look like it shouldn’t include news at all. 8 of 29
• 9. 4.2 Paper iteration 2 9 4.2 Paper iteration 2 For our second paper prototype, we revisited our concept and came up with the final one. Figure 6 shows what this paper prototype looked like. This new concept solved a few of the problems with the previous interface by itself: • We made the mining more interactive, giving the user several different options to navigate around the mine. • We abandoned the scoring system, so the money problem was gone. • The issue with the tabs remained. We decided not to change this yet, because we felt it might be an issue only due to the nature of paper prototypes. Fig. 6: Second paper prototype 4.2.1 Goals Once again, we didn’t have very specific goals. We mainly wanted to: • check if the new concept was more attractive than the old one • see which problems users experienced with the updated interface 9 of 29
• 10. 4.2 Paper iteration 2 10 4.2.2 Instruments, results and conclusions We performed tests with five computer science students in their early twenties. Although this is not our target audience, they were the easiest group for us to get access to for performing the tests. These were the results: • About the concept: – The concept was still vague when people first started. – People wanted to be able to exclude friends from being taken into account when calculating the value of articles. • About the interface: – It was not clear what the button to change the settings of the statistics page meant. – It was unclear what rating an article did; the user got redirected to the game without any further information. – We provided an option to change the avatar of the miner, but people did not easily find it. 10 of 29
• 11. 4.3 Paper iteration 3 11 4.3 Paper iteration 3 With the results of the previous iteration in mind, we made the following changes: • We added an introduction screen, to be displayed the first time a user starts the game, with a brief explanation. • We added a button in the friends list to be able to turn friends on and off. • We changed the settings button to have a traditional settings icon. • We added a screen that, after rating an article, showed its impact on the statistics. • We did not change the avatar option, but rather decided to keep it as a bonus for more experienced players. Additionally, we abandoned the tabs and made close buttons for all but the main application screen. The resulting prototype can be seen in figure 7. Fig. 7: Third paper prototype 4.3.1 Goals We wanted to test if: • the performed changes had had their intended effect • there were any other, as yet unknown problems 11 of 29
• 12. 4.3 Paper iteration 3 12 4.3.2 Instruments, results and conclusions This iteration was tested with housemates of the team members, which gave us our first input from non-computer scientists. A total of 8 people tested the prototype, with ages ranging from 18 to 23 and with mixed computer and Facebook experience. These were the most important conclusions: • changes: – The text in the introduction screen was not clear enough. – The button to indicate which friends are taken into account for the value of articles – a green/red “stoplight” – was causing confusion. It had to be explicitly explained before people understood its purpose. – One person thought the closing button for a sub-screen would close the entire application. • further issues: – The button to rate an article as neutral was not clear. – Using the terms like/dislike for rating articles caused wrong usage: interesting articles about bad events were being rated negatively, while for our system they should be rated positively. – Some people wanted more specific statistics available. 12 of 29
• 13. 4.4 Digital iteration 1: 09/04 - 24/04 13 4.4 Digital iteration 1: 09/04 - 24/04 In this first digital iteration we stripped down our application to the bare-bones essentials: the mining and rating of articles. The point was to release as soon as possible. A baptism of fire. More specifically, we used dummy sprites to represent the miner, left out the sidebar and used a set amount of dummy articles instead of real-life ones. This way we were able to test the absolute core of our application. Unfortunately we don’t have screenshots of how our application looked in the first iteration. 4.4.1 Goals This iteration we expected to find out: 1. Whether the user could easily navigate the miner (by clicking somewhere on the map, or using the keyboard). 2. Whether the user mined and rated articles (How many articles does a user mine? Do they mine articles at all, or just avoid the minerals? Do users just click dislike all the time? etc.). 3. What the overall feeling was that users had when using our application. Is it fun? Did the idea appeal to them, or rather appal them? 4.4.2 Instruments We first tested our application offline with 6 users, randomly chosen from people we know. Afterwards we released the application on Facebook. None of the users had to go through a scenario. Since we didn’t have control over who tested our application, we didn’t put constraints on this. We benchmarked our goals as follows (numbers coincide with the goal numbers): 1. We expected users to navigate between articles using the mouse and/or keyboard. To track this, we measured the average number of mouse clicks and keystrokes between articles. We expected an average of two mouse clicks between articles and an average of 25 keystrokes. We chose these numbers because it is possible to reach an article in 1 mouse click and the average distance between articles is 15 spaces. We give users a 70% error margin and round up. 2. 
We expected an even distribution among the three possible ratings. To measure this we kept track of how many times a user clicked on like/meh/dislike after mining an article. 3. To test the overall feeling the users had when using the application we used a questionnaire based on the CSUQ test. Scores on the questionnaire range from 1 to 5. We want 50% of the users to rate 4 or 5 on questions related to whether they like the application. 4.4.3 Results and observations Results for the goals are listed below, with numbers coinciding with the previous sections: 1. Results show an average of 2.45 mouse clicks, as evident from the graph below. The keyboard was severely underused, so it is hard to speak of an average number. This is visible in figure 8. 13 of 29
• 14. 4.4 Digital iteration 1: 09/04 - 24/04 14 2. The number of times users clicked on one of the four buttons displayed when rating an article (like/meh/dislike/close) is shown by the graphs in figure 9. These seem to be evenly spread. We did notice that we had more users adding the application on Facebook than users mining at least one article. This means not all users mined an article. Later on we found this was because Safari didn’t submit the results to our database; it wasn’t a problem visible to the users. 3. General feedback and the questionnaire showed us that: - The users weren’t pleased with the graphics. - Users asked for a legend of the different minerals. + Most users were moderately satisfied with the application and hope to see more soon. +/- The opinions about the concept are still divided. We would like to refer to appendix 7.1 for a view on how the application scored on all the different questions and a more detailed view of the user input. Although this is not linked to one of our goals, we did notice something strange: some users allowed access to our application, but didn’t seem to use it afterwards. Later on we found out this was the same bug in our tracking system: the Safari browser didn’t submit data to our database. As this only affects a small group of users, fixing it isn’t one of our priorities. Fig. 8: Amount of clicks and keystrokes each user needs to reach an article (a) Absolute amounts per user (b) Overview of the totals Fig. 9: What users selected after reading an article 14 of 29
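The even-spread check behind figure 9 amounts to a simple tally over the logged button presses. A hedged sketch of that tally follows; the event log and the event names are invented for illustration, while the real tracking wrote each click to our database.

```python
from collections import Counter

RATINGS = ("like", "meh", "dislike")  # "close" is logged too, but is not a rating

def rating_shares(events):
    """Return each rating's percentage share among rating clicks only."""
    counts = Counter(e for e in events if e in RATINGS)
    total = sum(counts.values())
    return {r: 100.0 * counts[r] / total for r in RATINGS} if total else {}

# Invented log: one entry per button press after mining an article.
log = ["like", "meh", "dislike", "close", "like", "dislike", "meh"]
print(rating_shares(log))  # each rating ends up at roughly 33%
```

Filtering out the "close" presses first matters: counting them as ratings would make the distribution look more skewed than it is.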
• 15. 4.4 Digital iteration 1: 09/04 - 24/04 15 4.4.4 Conclusion We had a small number of test subjects. Therefore we will continue tracking what we described in section 4.4.2. We will only mention these in the following iterations in case the findings there vary from our conclusions here. The conclusions on each observation can be found below (numbers coincide with previous sections): 1. We feel an average of 2.45 mouse clicks is acceptable. The fact that the keyboard was underused isn’t really problematic. We won’t remove the keyboard controls because we still think of them as a nice extra. 2. It seems the ratings are rather evenly distributed, but no hard conclusions can be made with only 9 unique users on Facebook. Since it seems that users are able to rate and thus mine articles, we think it isn’t necessary to change the interface. 3. People were generally displeased with the look of the application, which didn’t really come as a surprise as we hadn’t put effort into the graphics yet. We will improve these. 15 of 29
• 16. 4.5 Digital iteration 2: 25/04 - 03/05 16 4.5 Digital iteration 2: 25/04 - 03/05 In this iteration we released new functionality and improved the looks of the application to try to solve the problems we saw in iteration 1. Unfortunately we don’t have a picture of how the application looked in this iteration. 4.5.1 Goals 1. The number of users in the previous iteration was problematic. We will try to attract more users by improving the graphics (replacing the dummy sprite; adding better graphics for minerals, etc.) and by adding a badge reward system. The combination of both will hopefully attract more users. 2. Some users commented in the previous iteration that they didn’t really understand which mineral stands for which article. We will try to solve this by adding a legend. 4.5.2 Instruments Again we released on Facebook. We hope that improving the application will get us enough users, so we won’t use additional ways of gathering users. We benchmarked our goals as follows (numbers coincide with the goal numbers): 1. We will simply measure success by counting the number of users that used our application. We would like at least 25 users for this iteration. 2. We will monitor whether or not there is improvement by using the same questionnaire from the previous iteration and seeing if we made any progress. We want 50% of the users to rate 4 or 5 out of 5 on how easy it is to find information. 4.5.3 Results and observations Results for the goals are listed below with numbers coinciding with the previous sections: 1. For this iteration we again had a total of 9 unique users. 2. 40% of the users rated 4 or 5 out of 5 for how easy it is to find information. Fig. 10: Comparison between how easy users found information in iterations 1 and 2 16 of 29
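The 50% benchmark used in these instruments boils down to the proportion of respondents answering 4 or 5 on the 1-to-5 scale. A minimal sketch, with an invented answer list for illustration (the real numbers came from the questionnaire):

```python
def pct_high(answers, threshold=4):
    """Percentage of respondents rating `threshold` or higher on a 1-5 scale."""
    if not answers:
        return 0.0
    high = sum(1 for a in answers if a >= threshold)
    return 100.0 * high / len(answers)

# Five invented answers to "how easy is it to find information?":
# two of five are a 4 or 5, so the benchmark comes out at 40%.
print(pct_high([5, 4, 2, 3, 1]))  # 40.0
```

With only a handful of respondents each answer shifts the percentage by a large step (20 points for five respondents), which is one reason hard conclusions were difficult to draw.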
• 17. 4.5 Digital iteration 2: 25/04 - 03/05 17 4.5.4 Conclusion The numbers of our conclusions coincide with previous sections: 1. Nine users indicates no improvement whatsoever. We will have to find a better way to attract more users. 2. The achieved 40% is not the 50% we wanted, but it is good enough. The other users rated the help function very low. In other words, the opinions about our new help function are divided between extremes. We would like to know the concerns of the people who gave us a low score for the help function. In any case, as indicated by the people who did like the new help function, we did manage to help at least a good portion of our users. In that respect we consider the new help function an improvement. Further improvement of the help function is a possibility for a next iteration, but certainly not a top priority. For now we will leave the help function as it is. 17 of 29
• 18. 4.6 Digital iteration 3: 04/05 - 09/05 18 4.6 Digital iteration 3: 04/05 - 09/05 In this iteration we tried again to attract more users for our application by expanding on the existing application. You can see how the application looked in figure 11. Fig. 11: How the application looked in iteration 3 4.6.1 Goals 1. We will try to attract more users. We will do this by further improving the graphics and giving the users insight into how we provide their articles by showing them statistics. This idea came from the results of Section 4.3.2. 2. We will try to make users stay longer. The improvements of 1. should also improve this. 4.6.2 Instruments We will release another version of our application on Facebook. To help with finding users, we also asked Professor Erik Duval to tweet about our application, besides the other ways of recruiting people we were already using (word-of-mouth advertising, posting on Facebook, etc.). We benchmarked our goals as follows (numbers coincide with the goal numbers): 1. We will measure success by measuring the number of users. We expect at least 25 users. 2. We will use Google Analytics to monitor how long users stay on our site. We want them to stay 2 minutes on average on our application. We feel this is the time it takes to read approximately 2 articles. 18 of 29
• 19. 4.6 Digital iteration 3: 04/05 - 09/05 19 4.6.3 Results and observations Results for the goals are listed below with numbers coinciding with the previous sections: 1. For this iteration we had a total of 7 unique users. 2. Users stayed an average of 7.50 minutes on our application. This varied a lot per day, as can be seen in figure 12. Fig. 12: Average time users spend on our site per day 4.6.4 Conclusion Again, the numbers of our conclusions coincide with the numbers above: 1. 7 users is in fact a decline from our previous iteration. Even though this was a shorter iteration, we will have to drastically increase this number in our next iteration. We will do this by making it possible for users to invite their friends. 2. We were very pleased with the result of 7.50 minutes. The users that do use our application certainly like it. 19 of 29
• 20. 4.7 Digital iteration 4: 10/05 - 23/05 20 4.7 Digital iteration 4: 10/05 - 23/05 In this iteration we tried again to attract more users for our application. We will do this by letting our users invite more users. The layout of our application didn’t change compared to the previous release. We only made the add-friend button work. This can be seen in figure 13. Fig. 13: How the application looked in iteration 4 4.7.1 Goals 1. We will add functionality to invite friends. This will hopefully provide us with more users. 2. Due to the low number of users in the previous iteration, we will check if problems occurred with the interface. We want to be certain users can mine and rate articles. If there is a problem here, this could be a cause of the low number of users. 4.7.2 Instruments Again we release our new version on Facebook. We benchmarked our goals as follows (numbers coincide with the goal numbers): 1. We will measure the number of users; we expect at least 25 users. 2. We will again measure the average number of mouse clicks, the average number of keystrokes and the number of times a user clicked like/dislike/meh/close. We will use the same benchmark as mentioned in Section 4.4.2 and will monitor if there is any significant change. 4.7.3 Results and observations Results for the goals are listed below with numbers coinciding with the previous sections: 1. We finally managed to achieve our goal. We had 29 users using our application. 2. The basic controls are working nicely. All users manage to read and rate articles. The ratings are evenly spread. They do use a few more clicks, but all of them manage to read articles. These results can be found in figures 14 and 15. The user with lots of clicks is 20 of 29
• 21. 4.7 Digital iteration 4: 10/05 - 23/05 21 the same one as in iteration 3. He probably just likes walking through the underground without reading articles. Fig. 14: Average amount of clicks and keystrokes each user needs to reach an article (a) Absolute amounts per user (b) Overview of the totals Fig. 15: What users selected after reading an article 4.7.4 Conclusion We are pleased with the results of this iteration. Everything worked as we wanted. Since we achieved our pre-set goals, no changes are required. There do seem to be problems left, but these are not related to our goals. If we look at the questionnaires, users don’t find our application effective for finding articles and they don’t become more interested in the news. This is partially because the functionality to select articles isn’t completely working as planned. We didn’t invest time in this earlier on, because there were more urgent matters to solve. If we had more time, we could also try to improve those parts of the application in a new iteration. 21 of 29
• 22. 22 5 Result The resulting application somewhat differs from our initial idea. We didn’t get to add the friend functionality in the way we really wanted, or even implement the actual relation between the minerals and the ratings given by users, which seemed like “core functionality” at the time. Instead we added what users wanted. We are quite pleased with the end result. We ended up with 29 users in the final iteration, who all played our game for a decent amount of time. We had a total of 49 users, excluding ourselves. Some of these people are now starting to ask for more: more badges, the addition of high scores, sound effects, etc. It didn’t really occur to us that some people might actually enjoy mining articles without an added agenda. These people seem to enjoy clicking on articles and liking or disliking them even if this doesn’t really contribute to anything (cf. the anecdote mentioned in class about the application where users were clicking on a rubber ducky to make it take a dive). They seem to like the simple pleasure of earning badges and chopping away at nothing in particular. Even though we might not have achieved our ultimate goal (making people more interested in news), we did succeed in creating a small game that offered these people a simple (and short) respite from their daily lives. Between all the statistics and questionnaires, and after all is said and done, that seems like a victory in itself. 6 Conclusion All in all, we are quite satisfied with our Dataminer application. A continuous problem was the low number of users. During all of our iterations (except the last) we had just a small number of users. We should have tried more aggressive techniques to attract more users from the start. Working on Dataminer has taught us the importance of using an iterative process when developing a user interface. 
Starting with the bare-bones essentials and adding features based on user requests, and not just on what you have planned, seems like the ideal way to work on an application (however time-consuming it may be). This course has also taught us the importance of a decent user interface (in which “decent” is judged by the users, not the creators) and how small changes (like adding a button to invite friends) can have a major impact, while big changes (adding graphs and badges) might not. One of our fellow students commented that by blogging about our graphics we might have set expectations a little too high. This seems like a valid point. 22 of 29
• 23. 23 7 Appendix 7.1 Answers to questionnaires Below is what the users answered on our questionnaires. In the first iteration there were 10 people who answered the questionnaire. In the second iteration 5 people filled in the questionnaire. In the third iteration only 2 people answered and in the fourth iteration 11 people answered. We didn’t make any conclusions based on the questionnaire in the third iteration, and since not that much changed between the third and fourth iteration, we will take these 13 people together as one group. In the graphs we want to show the evolution of what people think about our program. To be able to compare the different iterations, we have chosen to show what percentage of the users gave a certain answer, instead of absolute numbers. Each graph shows how many users answered 1 (worst rating), 2, 3, 4 or 5 (best rating) on our questions. Iteration 1 is shown in red, iteration 2 in green and iteration 3 in blue. The graphs can be found in figures 16 and 17. 7.2 Usage statistics We have monitored how many clicks and keystrokes each user needs to mine an article. We also monitored which option they select after reading the article. In the figures below you can find this for each iteration. In iteration 2 we left out a user that constantly clicked dislike. He read lots of articles, but disliked all of them. This made the percentage of dislikes seem very high, but it was only caused by one user. All the graphs can be found in figures 18, 19, 20, 21, 22, 23, 24 and 25. 23 of 29
  • 24. 7.2 Usage statistics 24 24 of 29 Fig. 16: Answers to our questionnaires part 1
  • 25. 7.2 Usage statistics 25 25 of 29 Fig. 17: Answers to our questionnaires part 2
  • 26. 7.2 Usage statistics 26 Fig. 18: Amount of clicks and keystrokes users need to reach an article in iteration 1 (a) Absolute amounts per user (b) Overview of the totals Fig. 19: What users selected after reading an article in iteration 1 Fig. 20: Amount of clicks and keystrokes users need to reach an article in iteration 2 26 of 29
  • 27. 7.2 Usage statistics 27 (a) Absolute amounts per user (b) Overview of the totals Fig. 21: What users selected after reading an article in iteration 2 Fig. 22: Amount of clicks and keystrokes users need to reach an article in iteration 3 (a) Absolute amounts per user (b) Overview of the totals Fig. 23: What users selected after reading an article in iteration 3 27 of 29
  • 28. 7.2 Usage statistics 28 Fig. 24: Amount of clicks and keystrokes users need to reach an article in iteration 4 (a) Absolute amounts per user (b) Overview of the totals Fig. 25: What users selected after reading an article in iteration 4 28 of 29
• 29. 7.3 Amount of viewers per day 29 7.3 Amount of viewers per day Since 4 May, we have been tracking how many unique users visit our application per day. This can be seen in Figure 26. Fig. 26: Amount of unique users per day from 04/05 till 23/05 7.4 Time spent

                                     Sam Agten   Bart Gerard   Tanguy Monheim   Steven Vermeeren   Total
Refining concept                         20          20             20               20              80
Storyboard                               10          15             10               10              45
Screen-transition-diagram                 0           2              2                0               4
Implementation paper prototypes           5           5             12                5              27
Evaluation paper prototypes               5           4              2                5              16
Implementation digital iteration 1        1           5             20               35              61
Evaluation digital iteration 1            1           2              2                1               6
Implementation digital iteration 2       20          10             40               15              85
Evaluation digital iteration 2            1           2              2                2               7
Implementation digital iteration 3        5          10             10                5              30
Evaluation digital iteration 3            1           2              2                2               7
Implementation digital iteration 4        1           2              1                1               5
Evaluation digital iteration 4            1           1              2                1               5
Writing Reports                          10          10             15                8              43
Listening to presentations               20          20             20               20              80
Total                                   101         110            160              130             501

Tab. 1: Time investment 29 of 29