Evaluating the Usability of GrantFinder
Majella Haverty, John Kiely, Satish Narodey & Kunal Kalra
School of Information and Communication Studies
University College Dublin
This Capstone is submitted to University College Dublin in part fulfilment of the
requirements for the degree of Masters of Library and Communication and Masters
of Science in Information Systems, August 2016.
Supervisor: Dr. Judith Wusteman
Acknowledgements
We would like to extend our sincere gratitude to our supervisor Dr. Judith Wusteman
for all her advice, help and guidance throughout this Capstone project.
The team would also like to take this opportunity to thank all the participants who contributed to the Capstone project and took the time to participate in the usability tests.
Contents
I. Executive Summary
1. Introduction
1.1. GrantFinder
1.2. GrantFinder Website
1.3. Research Aims & Objectives
1.4. Research Approach
1.5. Changing Nature of Goals
1.6. Capstone Project Stakeholders & Participants
1.7. Capstone Team Roles & Tasks
2. Literature Review
2.1. Grant Finding
2.2. Search Functionality
2.3. Usability & Usability Testing
3. Methodology
3.1. Research Approach
3.2. Alternative Approaches
3.3. Heuristic Evaluation
3.4. Personas
3.5. Usability Testing
3.5.1. Think Aloud Approach
3.6. Test Design
3.6.1. Introduction
3.6.2. Pre-Test Questionnaire
3.6.3. Scenarios
3.6.4. Post Test Questionnaire
3.6.5. Script
3.7. Team Review
3.8. Pilot Review
3.9. Participants
3.10. Ethical Considerations
3.11. Data Analysis
3.11.1 Coding
3.12. Limitations
3.13. Reliability
4. Results
4.1. Heuristic Evaluation Findings
4.2. Pre-Test Questionnaire Findings
4.3. Usability Test Findings
4.3.1. Search
4.3.2. Forms & Data Entry
4.3.3. Help, Feedback & Error Tolerance
4.3.4. Homepage
4.3.5. Page Layout & Visual Design
4.3.6. Writing & Content Quality
4.3.7. User Perceptions
4.4. Post-Test Questionnaire Findings
4.5. Triangulation
5. Discussion
5.1. Usability of GrantFinder
5.2. Future Directions
5.2.1. Hot Jar
5.2.2. URL/Host
6. Recommendations
6.1. Forms & Data Entry
6.1.1. Search
6.1.2. Score
6.1.3. Standard Words
6.2. Functionality
6.2.1. Real-Time Results
6.2.2. Personalisation/User Profile
6.2.3. Homepage
6.3. Help, Feedback & Error Tolerance
6.3.1. Tutorial
6.3.2. Email Validation/Error Notification
6.3.3. FAQ Section
6.4. Writing & Content Quality
6.4.1. Language
6.4.2. Ordering of Lists
6.4.3. Glossary
6.5. Layout & Design
6.5.1. Homepage
6.5.2. Results Page
7. Conclusion
8. References
Appendices
Appendix 1: Heuristic Evaluation Checklist Example: Trust & Credibility
Appendix 2: Personas
Appendix 3: Test Protocol
Appendix 4: Consent Form
Appendix 5: Pre-Test Questionnaire
Appendix 6: Scenarios
Appendix 7: Post-Test Questionnaire
Appendix 8: Usability Test Script
Appendix 9: Email to Participants
Appendix 10: Information Leaflet
Appendix 11: Usability Testing Timetable
Appendix 12: Ethics Form
Appendix 13: Ethical Exemption Approval
Appendix 14: Themes & Codes
Appendix 15: Coding Hierarchy Chart
Appendix 16: Coding Comparison Query Example
Appendix 17: Thank You Email
Appendix 18: Pre-Test Questionnaire Findings
Appendix 19: Post-Test Questionnaire Findings
Appendix 20: Gantt Chart
Appendix 21: Group Reflection
Table of Figures
Figure 1: Homepage
Figure 2: Tutorial Page
Figure 3: Three Field Inputs
Figure 4: Results Page
Figure 5: Job Submission Page
Figure 6: Contact Page
Figure 7: Scenarios
Figure 8: Heuristic Results Radar Chart
Figure 9: Add Button Placement
Figure 10: Current Tutorial Page
Figure 11: Current FAQ Page
Figure 12: Current Results Page
Figure 13: Homepage Image
Figure 14: Results Page
Figure 15: Search Page: Standard Words
Figure 16: Tutorial Page: Spelling Mistakes
Figure 17: Hotjar: Heatmap
Figure 18: Hotjar: Recording
Figure 19: Hotjar: Funnel
Figure 20: Hotjar: Form Analytics
Figure 21: Standard Words Recommendation
Figure 22: Search Page Wireframe
Figure 23: Current Job Submission Message
Figure 24: Example Sign Up Form for a Personal Account
Figure 25: Main Grants on Homepage
Figure 26: Tutorial Page Wireframe
Figure 27: Inline Form Error Notifications
Figure 28: Combination Error
Figure 29: Suggested FAQ
Figure 30: Alphabetically Ordered Clickable Standard Words
Figure 31: Speciality: Alphabetically Ordered Drop Down Menu
Figure 32: Glossary of Keywords Example
Figure 33: Homepage Wireframe
Figure 34: Results Page Wireframe
I. Executive Summary
The primary aim of this project was to evaluate the usability of GrantFinder and to
identify areas of the software that users had difficulty with.
The usability of the tool was evaluated using a mixed methods approach. Firstly, the
researchers carried out a heuristic evaluation. Following this, 15 participants recruited
by the researchers performed usability testing. The testing consisted of think-aloud
observations while performing pre-defined tasks as well as questionnaires. The
participants were recruited based on personas developed by the researchers in order
to identify the target user groups for GrantFinder.
A series of recommendations was derived from this data. These recommendations
aim to improve the overall usability of the tool.
Findings
The participants’ reactions were varied. While the concept behind the development
was sometimes praised, users did not find GrantFinder to be intuitive or easy to use.
Other usability issues included:
• Layout and text wrapping on the results page
• Reason for and inclusion of standard words and score
• Empty FAQ section
• Use of jargon
• Inclusion of foreign language words
Recommendations
• Inclusion of a video tutorial and sample search for new users
• Real-time results
• Filtered and free-text searching
• Clickable standard words
• An error notification when a valid email address is not entered
• Improved resolution of images and logos on the site
• The inclusion of well-known grant providers on the homepage
• Glossary of jargon used in the tool
• Allow users to set up profiles and personalize their usage
1. Introduction
This report describes the usability testing of the GrantFinder tool. The following
groups participated in the tests: academics, administrators, PhD students and
librarians from University College Dublin.
The participants carried out a series of six pre-defined tasks. The activities and
comments of the participants were observed, recorded and examined and usability
issues were identified. Other issues related to the website’s performance were also
highlighted through a heuristic evaluation. A number of recommendations were
proposed.
1.1. GrantFinder
GrantFinder is a web-based tool for searching research-based grant calls. It was
developed and is hosted by the Structural Bioinformatics and High Performance Computing Research Group, based at the Universidad Católica San Antonio de Murcia, Spain (http://bio-hpc.ucam.edu/GrantFinder/web/Contact/Contact.php).
GrantFinder aims to provide a single point of access to all information on funding
for a particular research field. The results are ranked according to score, in descending order, as well as by the number of word clusters discovered. Each result includes a link to the website of the grant call (Pena-Gracia, Den-hann, Caballero, Vicente-Contreras and Perez-Sanchez, 2016).
1.2. GrantFinder Website
GrantFinder is a unique search engine tool that gives users access to grant calls in a different way. It works in two search phases: first, it looks for all the grants containing the predefined words specified by the user in their search; a clustering technique is then used to rank the grants based on the number of times the selected words are repeated.
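Purely as an illustration, and not the published GrantFinder implementation (the clustering step is reduced here to a simple word-frequency count), the two-phase idea can be sketched in JavaScript, the front-end language the site already uses:

// Simplified sketch of the two-phase search described above. Phase 1 keeps
// only grants that contain every mandatory word; phase 2 ranks the survivors
// by how often those words occur. The real tool's clustering is more involved.
function rankGrantCalls(grantCalls, mandatoryWords) {
  var words = mandatoryWords.map(function (w) { return w.toLowerCase(); });
  return grantCalls
    .filter(function (grant) {
      var text = grant.text.toLowerCase();
      return words.every(function (word) { return text.indexOf(word) !== -1; });
    })
    .map(function (grant) {
      var text = grant.text.toLowerCase();
      var occurrences = words.reduce(function (total, word) {
        return total + (text.split(word).length - 1); // count repetitions
      }, 0);
      return { title: grant.title, url: grant.url, occurrences: occurrences };
    })
    .sort(function (a, b) { return b.occurrences - a.occurrences; }); // descending order
}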
The website is deployed on an Apache Tomcat web server and runs on SUSE Linux Enterprise Server 11. The user interface and web design are implemented using a combination of front-end technologies, including JavaScript, jQuery, PHP and HTML (Pérez-Sánchez et al., 2016).
GrantFinder works by listing a set of predefined standard words, each with a corresponding score. To run a search, three input fields can be used. The first is the speciality field, which is divided into five categories: drugs, medicine, architecture, nutrition and sports. The second input option is mandatory words: users are given five input fields that they can fill according to their grant request, mandatory words can be chosen from the predefined standard words, and users are required to enter a score with each mandatory word. The third input is excluded words, a non-mandatory field in which users can define words that they wish to exclude from the search.
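Purely for illustration, a single search built from these three inputs can be pictured as the following structure (the property names are our own shorthand, not GrantFinder's actual field names):

// Hypothetical shape of one search request built from the three inputs above;
// the property names are assumptions for illustration only.
var exampleSearch = {
  speciality: "medicine",                 // one of the five categories
  mandatoryWords: [                       // up to five words, each with a score
    { word: "deadline", score: 750 },
    { word: "eligibility", score: 120 }
  ],
  excludedWords: ["architecture"]         // optional
};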
The Home page of GrantFinder, as illustrated in Figure 1, welcomes its users and provides links (top right) to the rest of the site.
Figure 1: Homepage
The Tutorial page of GrantFinder contains step-by-step instructions for the user (see Figure 2).
Figure 2: Tutorial Page
On the Search page, users are required to fill in all the data fields in order to get their desired results (see Figure 3).
Figure 3. Three Field Inputs
In the results section of the website, users can access the results and discover how many clusters were found (see Figure 4).
Figure 4: Results page
The results page is not initially visible on the website; it becomes available only after a search query has been submitted. On submitting a query, the user is redirected to the job submission page, where they are provided with their respective job_id (see Figure 5).
Figure 5: Job Submission Page
Figure 6: Contact Page
1.3. Research Aims and Objectives
The primary research aim of our capstone project is to evaluate the usability of the GrantFinder tool, to identify the most prominent usability issues and to develop a series of recommendations to solve these issues.
In order to achieve this, the research objectives are as follows:
1. Create a test structure that evaluates full functionality of the website.
2. Run a series of usability tests of the website among future user groups
3. Analyse the issues that are highlighted during the testing procedure
4. Make appropriate recommendations.
1.4. Research Approach
Usability testing was the primary source of data collection, with participants asked to perform the tasks using a ‘thinking out loud’ approach. A modified version of David Travis’ (2014) comprehensive heuristic evaluation method, which pertains chiefly to website usability, was also applied to GrantFinder.
1.5. Changing Nature of Goals
Initially, the project title was ‘Evaluating the Usability of the Nation, Genre and Gender Case Studies’; however, the case studies were not ready within the time frame and the project was thus deemed unrealistic. The Structural Bioinformatics and High Performance Computing Research Group were interested in completing usability testing on their site, GrantFinder. While the subject of our usability testing changed, the objective of usability evaluation and the scope remain the same.
1.6. Capstone Project Stakeholders and Participants
This Capstone project was undertaken by four Master’s students under the supervision
of Dr. Judith Wusteman, all from University College Dublin’s School of Information and
Communication Studies. The research was conducted with the approval of the
Structural Bioinformatics and High Performance Computing Research Group, as well
as the website developer team including Horacio Pérez-Sánchez. The participants of
the usability tests were post-doctoral students, academic staff, administrative staff,
librarians and PhD students from University College Dublin.
1.7. Capstone Team Roles and Tasks
Kunal Kalra: Ethical Approval, Data Collection & Recruitment
Satish Narodey: Ethical Approval, Data Collection, Data Analysis & Technical Support
Majella Haverty: Ethical Approval, Data Collection, Data Analysis & Administrative Duties
John Kiely: Ethical Approval, Data Collection & Literature Review
2. Literature Review
2.1. Grant Finding
Herther (2009) states that “Behind every significant research innovation, each key
social program, or improved scientific coverage in history lies some form of financial
support”. Finding financial support in the age of the internet has never been more
complicated. Not only can one apply for academic or federal grants, but popular crowdfunding organisations such as gofundme.org and Kickstarter allow the general public to offer money to causes that catch their interest, from niche technological advances to a squirrel census (Emiko, 2012).
Search engines such as Grant Watch (https://www.grantwatch.com/grant-search.php)
allow users the opportunity to find financial support across a range of different
disciplines, from academia to not-for-profit organizations. The resource allows users
to select an area of interest, potential funding bodies, and a geographical location in
order to tailor their results.
It is worth noting, however, that seeking an appropriate funding institution or
organisation requires extensive web browsing and searching, as performing these
searches using a popular search engine such as Google or Bing can be generic and
will not provide comprehensive results without multiple attempts. This is very time
consuming and does not optimize the search process. Pérez-Sánchez et al. (2015, p.1),
in their introductory article on GrantFinder, highlight the need to create a service that
not only finds and displays grant calls, but produces them conveniently with the most
relevant information prioritised.
2.2. Search
Many of the popular search engines available today allow for exploratory search. By
way of using keywords or natural language questions, one can receive an answer to
almost any query. We can take from this that there are two forces at work in online
searching; the querist and the search engine. A carefully designed search engine will
allow users to find information quickly and easily, but most users are not skilled
searchers and are not interested in search itself. They are more invested in the results
aspect of searching (Elliott et al, 2002). Indeed, the same study showed that browsing
results rather than straightforward results-seeking elicited a more positive response
from the user, depending on what they were looking for (Elliott et al, 2002).
The importance of search result retrieval cannot be overstated. It is often necessary
to search more than once in order to find pertinent information through traditional
search engines. Ray & Ray (1999, p569) discuss the ways in which internet users
expect search engines to operate as they want them to, rather than as they were
designed to. Due to this misunderstanding between man and machine, it is often the
case that users are not satisfied by the results that they receive. There is, of course,
the option of advanced search. This option can, however, be daunting within a
traditional search engine, given the reliance on simple, keyword searches. In fact,
many users may not understand what is required for an advanced search, given that
they have little experience using this function. Duarte et al (2015) provide a set of
guidelines for Search User Interface, including a guideline on the prominence of the
advanced search feature within a search engine. Its inclusion on the main search
page, as well as the subsequent results page, allows users to refine the information
that they see in order to tailor the vast range of information available online to suit their
needs. This does not only allow them to broaden and to narrow their searches, it also
battles a common problem with online searching: that of search fatigue. Bealle (2007, p.47) notes that users experience this issue due to synonyms, something that a search engine will not recognize, meaning that a
significant number of results are omitted. Standardizing metadata eliminates this
problem to an extent.
2.3. Usability and Usability Testing
The Nielsen Norman Group defines usability as “a quality attribute that assesses how easy user interfaces are to use” (Nielsen, 2012). In order to evaluate the usability of a website or search engine, one must first understand what it is that makes a resource ‘usable’. The following guidelines, taken from Nielsen (1993), help developers and testers ensure that their website is doing its best for the target user.
• Visibility of system status
• Match between system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation
Although usability is a multidimensional concept and no one key definition exists, usability testing appears to follow a similar pattern across the literature: a number of prospective users perform pre-identified tasks on a particular product or site (Alshamari & Mayhew, 2009; Becker, 2011; Dumas & Redish, 1999; Krug, 2006; Sonderegger, Schmutz & Sauer, 2015). A vast amount of the literature also cautions against confusing focus groups with usability testing (Becker, 2011; Dumas & Redish, 1999; Krug, 2006). Focus groups are used early in the process of designing websites and reveal very little about how users actually behave with the site, which is why the two should not be confused.
According to the ISO 9241-11: Guidance on Usability (1998) definition, usability is determined by effectiveness, efficiency and satisfaction in a specific context of use. Considering such usability measures, Sonderegger et al. (2016) refer to effectiveness as the degree to which a task is successfully carried out, that is, the task completion rate, accuracy, quality of outcome and other such factors, whilst efficiency concerns the ease with which the task is carried out, i.e. the task completion time, error rate, learning time, etc. Hornbæk (2006), reviewing current practices in usability testing and measurement, encounters a similar pattern and extends it by summarizing satisfaction measures as preferences, standard questionnaires, satisfaction with the interface and others. It is noteworthy, however, that Hornbæk recognizes the limitations of his review: he analysed only research studies, not usability testing within the software industry, and the review does not account for how different tasks and domains affect the choice of usability measures. Yet he makes an important point, stating that ‘the conclusions of some usability studies are weakened by their choices of usability measures and by the way they use measures of usability to reason about quality-in-use.’ It is essential to consider which usability measures, such as those mentioned above, should be included and measured during usability testing.
While examining the effects of age in usability testing, Sonderegger et al. ‘point to the importance of considering differences between age groups and type of performance measures when comparing results across studies.’ Their testing found a strong correlation between perceived usability and effectiveness, although, as expected, not between perceived usability and efficiency amongst older users; no such difference was found amongst younger users. This may be an important factor to consider when planning usability testing and choosing measures for grant-finding software, as grant finding can transcend a wide variety of ages and disciplines. It may indicate that a combination of usability measures should be used, as suggested by Hornbæk’s review of recent research.
3. Methodology
3.1. Research Approach
A mixed methods approach was used to avoid the weaknesses of either approach alone (Blake, 1989; Greene, Caracelli and Graham, 1989; Rossman and Wilson, 1991). Three data collection techniques were used: a heuristic evaluation, pre- and post-test questionnaires and usability tests.
The principal method of research was usability testing:
• The researcher read out a task to the participant and the participant performed that task.
• The participant vocalized his/her thoughts as they performed the task.
• Tests were audio recorded.
Qualitative research was the dominant model throughout the study. Nonetheless, the team completed a second data collection technique, a heuristic evaluation, prior to the usability tests in order to triangulate results and add a quantitative dimension to the research.
3.2. Alternative Approaches
Alternative approaches and techniques were considered for the evaluation of the
usability of GrantFinder. Focus groups, surveys and beta tests were all considered.
Focus groups generally consist of 8 to 10 people and reveal users' opinions, attitudes and preferences; however, a focus group does not usually let you see how users actually behave with the tool (Dumas & Redish, 1999, p.24). Thus, a focus group was deemed unsuitable for evaluating or assessing the usability of this tool. Surveys were also deemed unsuitable for the same reason.
Despite beta tests allowing people to use the product in real environments to do real tasks, the technique was also rejected: beta tests are considered ‘too little, too unsystematic and much too late to be the primary test of usability’ (Dumas & Redish, 1999, p.24).
Questionnaires and think-aloud scenarios were the techniques chosen by the research team. The ‘think aloud’ approach was chosen because it is cost-effective, flexible, robust and expressive (Nielsen, 2012). It allows users to verbalize their thoughts, enabling researchers to comprehend how they view the tool.
3.3. Heuristic Evaluation
The term heuristic evaluation essentially ‘describes a method in which a small set of evaluators examine a user interface and look for problems that violate some of the general principles of good user interface design’ (Dumas and Redish, 1999, p.65). These principles can be described as the ‘heuristics’, which are generally broad rules of thumb rather than specific usability guidelines. Nielsen and Molich (1990) created nine basic usability principles to evaluate the usability of a product:
1. Use simple and natural language
2. Speak the user’s language
3. Minimize user memory load
4. Be consistent
5. Provide feedback
6. Provide clearly marked exits
7. Provide shortcuts
8. Provide good error messages
9. Prevent errors
In 1995, Nielsen developed and expanded these guidelines further. Despite the extensiveness of Nielsen's revised heuristics, Travis (2014) developed a comprehensive checklist that applies chiefly to the usability of websites, and Travis' heuristic checklist (see Appendix 1) was employed for this reason. GrantFinder was evaluated in the following areas: Home Page, Task Orientation, Navigation & IA, Forms & Data Entry, Trust & Credibility, Writing & Content Quality, Page Layout & Visual Design, Search, and Help, Feedback & Error Tolerance. Each section contains a number of statements. These statements were interpreted in context and rated on a scale of -1 to +1: -1 indicates that the website does not comply with the guideline, +1 that it complies, and 0 that it partially complies. The research team assumed the role of evaluators and rated the website using Travis' checklist. Each member completed the list individually in an attempt not to skew the results.
3.4. Personas
Having identified potential user groups for GrantFinder, the team created four personas based on user research (see Appendix 2). Personas are representations of audience members and provide a rich description of your audience (Spencer, 2010, p.88). The purpose of these personas is to create reliable and accurate representations of the key target audiences for reference (U.S. Department of Health & Human Services, 2016). Using the personas, the ideal users and their goals, behaviours, needs and interests were illustrated.
The four user groups presented in the personas are as follows:
1. PhD Students
Due to the nature of their degree and programme, PhD students
are often on the search for research grants.
2. Administrative Staff
Selected as they would be seeking grants on behalf of both
students and staff within their department.
3. Librarians
Given the changing nature of the role of the librarian, many
librarians are moving towards the area of research and for this
reason they were chosen.
4. Academic Staff
Chosen as they are also constantly on the lookout for grants for
their own research.
3.5. Usability Testing
Usability testing was the primary method employed. Usability tests consist of five
characteristics:
1. The chief goal is to improve the usability of a product/tool.
2. The participants represent actual users.
3. The participants perform real tasks.
4. The research team observes and records the participant’s actions.
5. The research team analyses the data collected, identifies real issues and makes appropriate recommendations.
(Dumas & Redish, 1999, p.22)
3.5.1. Think Aloud Approach
A thinking aloud approach was selected. Participants complete a set of predefined
tasks/scenarios in which they are required to verbalize whatever comes to mind as
they perform the task. This gives the research team an insight into the cognitive
processes of the participants as they express their feelings, thoughts and perceptions.
The think-aloud approach has survived nineteen years as the number one usability tool and, according to Nielsen (2012), it may be the single most valuable usability method, a testament to the longevity of the technique.
Whilst disadvantages exist with the think aloud approach, these are outweighed by the
advantages.
Some advantages include:
• Verbalizing thoughts enables researchers to comprehend how participants view the tool
• Can collect a vast amount of qualitative data from a small number of
participants (Nielsen, 1994, p.195.)
• Cost effective as no special equipment is required.
• Robust as even with small prompts to participants, meaningful data is collected.
• Participants avoid later rationalizations as they think aloud.
(Nielsen, 2012)
Some disadvantages include:
• Verbalizing thoughts may slow participants down.
• Verbalizing all thoughts places participants into an unnatural environment.
• Participants may be so concerned with how their opinions are perceived that they filter and alter their thoughts.
(Nielsen, 2012)
3.6. Test Design
The test design followed a standard usability test format:
1. Introduction
2. Consent Form
3. Reassurances
4. Pre-Test Questionnaire
5. Think Aloud Scenarios
6. Post-Test Questionnaire
The research team divided into two teams of two in order to conduct the usability tests. Each team was assigned a facilitator and an observer/note-taker, and these roles interchanged between the two conducting teams.
Testing was completed in the participant's office or in the Innovation Lab of the School
of Information and Communication Studies, UCD, depending on the preference of the
participant. The tests were conducted on a laptop of the participant's choice: either a
Mac or a Windows laptop. Because the location and operating system was chosen by
the participant in each case, a screen recorder was not used. Comments and actions
were recorded by one member of each team.
3.6.1. Introduction
An introduction was prepared for the test in order to explain the purpose of the test,
the testing process and also to reassure participants of their involvement in the testing.
Permission was requested for audio recording the test.
A consent form was completed by the participants, indicating that they understood their role and involvement in the study and that they consented to the session being audio-recorded (see Appendix 4).
3.6.2. Pre-Test Questionnaire
After the consent form and reassurances, participants were asked a total of five open-ended questions. According to the Oxford Dictionary of Psychology (2015), an open-ended question is framed in such a way as to encourage a full expression of an opinion rather than a constrained response. Participants can, therefore, respond to questions exactly as they would like to answer them.
The questionnaire was created to cater for the varying participants and their backgrounds (see Appendix 5). It included two pathways, for those who answered ‘yes’ and those who answered ‘no’ regarding grant-finding experience, and subsequent questions were then based on the participant's grant-finding experience.
3.6.3. Scenarios
Six think-aloud scenarios were created (see Figure 7 and Appendix 6). Participants
first carried out a practice task/scenario in order to familiarise themselves with the
process and the think aloud approach. They then carried out six further tasks that
tested the key usability concerns, as identified by the heuristic evaluation (See Section
4.1 Heuristic Evaluation Findings).
Figure 7: Scenarios
3.6.4. Post Test Questionnaire
The post-test questionnaire allows researchers to gather further impressions and information about the participants' understanding of the website's strengths and weaknesses (Dumas & Redish, 1999; Rubin & Chisnell, 2008). A post-test questionnaire of seven open-ended questions was composed for this reason (see Appendix 7).
3.6.5. Script
A script was created by the team, based on a design provided by Steve Krug (2010), in order to create fair testing conditions in which each participant was subject to the exact same test, creating consistency and reliability amongst the testing teams.
The script included an introductory section, informing the participant of the purpose of
the research, the process of the test as well as reassuring the participant about their
involvement (See Appendix 8.).
3.7. Team Review
Following the creation of the usability test, the team met to review it. A test protocol was then drawn up in order to avoid any errors (see Appendix 3).
Prior to the pilot review and testing with participants, the group conducted a dry run: a test performed to check the efficiency, performance and stability of a particular product or piece of software. This was done to identify any other possible failings that could affect our testing.
3.8. Pilot Review
A pilot test was conducted with a contact of one of the researchers. The main objective
of a pilot test is to ‘debug’ the equipment, materials, and procedures you will use for
the test (Dumas & Redish, 1999, p.264). The pilot test illustrated that minor adjustments needed to be made to the script and that the test would take approximately forty minutes.
3.9. Participants
The ideal number of participants for any usability test is five (Nielsen, 2012). The initial aim was to test 4-5 participants from each user group; usually, 3-4 users per group can suffice because the user experience may overlap slightly between groups (Nielsen, 2012). Due to difficulty in recruitment, the user groups were imbalanced and consisted of two academic staff, four administrative staff, three librarians, five PhD students and one postdoctoral researcher.
Participants were de-identified using anonymising codes. Each group had one code.
Librarians were described using ‘L’, academics using ‘A’, administrative staff using
‘AD’, PhD students using ‘P’ and finally postdoctoral students using ‘PD’.
3.9.1. Recruitment
Participants were recruited from the University College Dublin population using multistage and snowball sampling methods.
Multistage sampling refers to sampling plans where the sampling is carried out in stages, using smaller and smaller sampling units at each stage (ResearchGate). The first stage involved choosing an initial cluster group, in this case UCD. The second stage involved using our personas to narrow the participant population down to sub-clusters that shared similar characteristics and belonged to the specific user groups. Our supervisor, Dr Judith Wusteman, individually emailed a limited number of UCD academic, administrative and librarian staff known to her from across a variety of UCD schools. There was no general "mailshot".
A snowball sampling approach was also applied. In its simplest form, snowball sampling consists of identifying participants who then refer the researchers on to other potential participants (Atkinson & Flint, 2001). As the testing took place outside of term, this was the most effective method for recruiting PhD students as participants. Once participants had been identified, their respective heads of school were contacted for permission.
All participants were then contacted via email and provided with an information leaflet (see Appendix 9 and Appendix 10). The leaflet outlined the scope of the project and their role within the testing process.
3.10. Ethical Considerations
Ethical exemption was granted by the university's Human Research Ethics Committee (see Appendix 12 and Appendix 13).
3.11. Data Analysis
The results from the heuristic evaluation provided the group with essential information about the usability and functioning of the tool, from which usability issues and a priori themes could be identified.
The results of the heuristic evaluation were used in the development of a priori codes which were then used, along with emergent codes, in the coding of the usability test transcripts. The a priori coding categories ultimately formed the themes for our data analysis. The team checked the reliability of the coding using an NVivo coding comparison query.
Triangulation was employed to allow each data collection technique to complement
each other. The concept of triangulation refers to the combination of two or more
research methods in a study used for cross verification (Bogdan & Biklen, 2006).
Quantitative data from the heuristic evaluation and qualitative data from the usability
tests were used for cross verification. This ensured that issues identified from the
researchers were also recognized by the test participants, helping to eliminate bias
and solidify issues identified.
3.11.1. Coding
A priori and emergent codes were applied to the transcripts (see Appendix 14). The transcripts were coded by two members of the research team; both coded them independently, using NVivo 11, and the coding results were then merged.
A hierarchy chart was created (see Appendix 15) illustrating the most common codes extracted and the dominant themes.
Intercoder consistency, or reliability, is the extent to which independent coders evaluate a characteristic of the data and reach the same conclusion (Lombard, 2010). High levels of disagreement among coders can suggest weaknesses in the research method, the categories or coder training (Kolbe & Burnett, 1991, p.248). This was avoided: using NVivo, the coding was merged and a coding comparison query was then run (see Appendix 16). All coding reached a level of 86.16% agreement or higher, ensuring consistency and reliability amongst coders.
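NVivo calculates this comparison automatically; purely to illustrate the percentage-agreement measure reported above, a minimal sketch of the underlying idea is:

// Minimal sketch of percentage agreement between two coders: the share of
// text units that both coders treated the same way for a given code.
function percentageAgreement(coderA, coderB) {
  var matches = 0;
  for (var i = 0; i < coderA.length; i++) {
    if (coderA[i] === coderB[i]) {
      matches++;
    }
  }
  return (matches / coderA.length) * 100;
}

// Example: four text units, three coded identically by both coders -> 75%.
// percentageAgreement([true, true, false, true], [true, false, false, true]);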
3.12. Limitations
Two major limitations emerged: time constraints and the absence of a screen recorder.
Test design and recruitment had to be completed within a short time span. As each member of the team had other important commitments, the short time span made it difficult at times to complete tasks on schedule. Recruitment can be a challenging aspect of usability testing, and the usability testing also had to be completed during the month of June, outside of term for the university, which contributed to recruitment issues.
A second limitation was the absence of a screen recorder. As tests were conducted in the location of the participant's choice and on the computer or laptop of their choice, it was not practical to install a screen recorder on each machine used. However, the team recognize the impact a screen recorder might have made and the information it could have contributed to the results in terms of users' mouse movements and patterns.
3.13. Reliability
In order to ensure reliable results (Hughes, 2011) and provide valid recommendations, the team created a test protocol to be adhered to in each test. The team also developed a script to produce fair test conditions, as well as a standard coding scheme that both coders abided by.
4. Results
4.1. Heuristic Evaluation Results
The heuristic data was analysed using Microsoft Excel. Each section of the evaluation received a percentage grade out of a possible 100. As the team members conducted the heuristic evaluation independently, the average result for each section was calculated. A score above 50% indicated average compliance for that area of the website, and a score above 75% indicated above-average compliance with the guidelines.
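For illustration, assuming each statement's -1/0/+1 rating is averaged across evaluators and statements and then rescaled to 0-100%, the calculation can be sketched as follows (the exact spreadsheet formula may differ):

// Assumed rescaling of -1/0/+1 heuristic ratings to a 0-100% section score.
function sectionPercentage(ratings) {
  var sum = ratings.reduce(function (total, r) { return total + r; }, 0);
  var mean = sum / ratings.length;   // between -1 (no compliance) and +1
  return ((mean + 1) / 2) * 100;     // rescaled to 0-100%
}

// Example: sectionPercentage([1, 0, -1, 0]) returns 50, i.e. partial compliance.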
The results indicated that there were at least three areas of major concern:
• Search
• Forms & Data Entry
• Help, Feedback & Error Tolerance.
However, although three principal areas of concern were identified, no area scored exceptionally highly, with only four areas scoring above the 50% average. Each area had various usability issues and concerns.
The results are as follows:
Search: 21%
Forms & Data Entry: 41%
Help, Feedback & Error Tolerance: 44%
Trust & Credibility: 47%
Home Page: 48%
Task Orientation: 50%
Writing & Content Quality: 61%
Navigation & IA: 65%
Page Layout & Visual Design: 72%
A radar chart was created to illustrate these results more clearly (See Figure 8.)
Figure 8. Heuristic Results Radar Chart
4.2. Pre-Test Questionnaire Findings
The majority of participants identified as having previous experience of grant-
finding. Research Professional was the most commonly used grant-finding
website. Participants emphasized the importance of having an intuitive, easy to
use and professional interface for a grant finding website. Participants also
mentioned the importance of an advanced search function. None of the
participants had used or heard about GrantFinder prior to the testing (See
Appendix 18.)
4.3. Usability Test Findings
4.3.1. Search
Search is one of the most important aspects of GrantFinder: it is primarily a search engine tool, albeit one with a very specific function. The website has no internal search function for finding information within the site itself, so the only aspect of searching to be considered is the search page. Given that our test participants were, for the most
part, skilled at finding either grants or resources, a number of them testified to the
importance of the search interface, with one stating that search functionality should
“…be easy to broaden and narrow.” (AD2)
The predetermined specialities allow users to choose from five different disciplines in
order to filter their searches. Several of the users praised this aspect of the search
page. When asked what the positive aspects of the site, one participant stated, “What
was helpful was the specialities” (A2). However, while this was a helpful function, other
users felt that the choice was too broad and that it did little to aid them with their
queries, for example: “I would like a little more breakdown to categories and
subcategories.” (L1)
The use of standard metadata should allow users to broaden and narrow their searches. However, the bearing of the scores on these standard words is not explained at any point on the website's tutorial page or in the FAQs, which made the search interface difficult for the user.
‘Either show the scores’ meaning or don’t show them at all…it does add an element
of confusion that I would deem to be unnecessary.’ (AD2)
4.3.2. Forms and Data Entry
The participants had difficulty understanding the reason for the standard metadata
used by GrantFinder. Given that most search engines (grant seeking sites included)
use free-text searching, this style of search engine is alien to most users. So, too, is the
score attached to all of the standard words. This created some confusion in the testing,
with some users rejecting it outright.
“I would not assign any score as I don’t understand what they mean.” (AD3)
Similarly, many of the participants felt that the standard words and specialities were
too broad and did not allow them to properly search for grant calls. Several mentioned
that the standard metadata provided on the search page were common to a significant
number of grant calls, and that this would be reflected in the number of results given.
“These words could be for any grant proposal, any good grant proposal that
universities would want to get involved in.” (A2)
Another issue that arose in relation to data entry was that the standard words are not clickable and must be entered manually, along with their scores. In addition, although there is a scrolling control for the input of the scores, it increases only in increments of one. This is not practical given that some words, such as “deadline” and “research grant”, have scores in the hundreds and even thousands; in these cases the score must be entered manually. This is time consuming for the user and does not help to facilitate their searches.
“I thought looking at the tutorial that I could click the standard words but there’s no
link attached…” (AD4)
Relating to clickable areas and data entry was the placement of the plus buttons,
highlighted in Figure.9. It was unclear to some users what these buttons related to.
This was most apparent in the case of the second button, which allows users to add
excluded words.
“It is confusing whether this add button, this plus sign, relates to mandatory words or
excluded words. And the fact that it is closer to excluded words confuses me more.”
(P2)
Figure 9: Add Button Placement on the Current Search Page
4.3.3. Help, Feedback & Error Tolerance
Tutorial Page
Figure 10 shows part of the current tutorial page for GrantFinder. The tutorial page
caused confusion among most participants. Much of this confusion stemmed from the
explanation of the following terms: standard words, mandatory words, job ID and
score.
Figure 10: Current Tutorial Page: Information Overload
Users also expressed concern over the quality and quantity of the writing on the tutorial
page, which could potentially lead to information overload.
“And tutorial there is just too much text. And I would probably never understand this,
if I was to read it 100 times. So instead of having the sections in bold. I would
shorten the sentences so that there is 15 words max, or 10 words max.” (P2)
FAQ
When clicking through to the FAQ from the menu header, the FAQ page is empty (see Figure 11). This concerned participants, as it did not give them any information about GrantFinder.
“There’s no FAQ, so I assume this is a very early web site or something that is being for
experiment or first iteration and it could be improved.” (L1)
Figure 11. Current FAQ page: Empty
Error Tolerance
The final piece of information necessary to carry out a search through GrantFinder is a valid email address. The results of the search query are accessed using the user's email address and the Job ID received when the query is complete (see Figure 12). However, when some participants did not enter their email address, the software still navigated to the job confirmation page.
In addition, when checking the results for a given Job ID, there is no email verification: the page will display the results for any email address submitted. No error notification is shown when an incorrect email address is entered, and the results are still displayed with the wrong combination of information, illustrating a lack of security.
Figure 12. Current Results Page – Email Validation
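For illustration, a minimal client-side check of the kind recommended in Section 6.3.2 might look like the following; the element IDs are assumptions rather than GrantFinder's actual markup:

// Sketch of a client-side email check before the job is submitted
// (element IDs are assumed; jQuery is already part of the site's stack).
$('#searchForm').on('submit', function (event) {
  var email = ($('#email').val() || '').trim();
  var looksValid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email); // basic pattern
  if (!looksValid) {
    event.preventDefault(); // block the job submission page
    $('#emailError').text('Please enter a valid email address.').show();
  }
});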
4.3.4. Homepage
Findings indicated that there are several problems with the homepage. Users found the homepage to be empty and lacking in information. They observed that the supporting image present on the homepage does not relate directly to the service GrantFinder provides and does not illustrate anything of use to the user.
“Doesn’t give too much information. The homepage is almost blank. There is not
much information. I believe it would be better if some more information could be
added to it.” (P1)
Figure 13. Homepage Image: Irrelevant Image
4.3.5. Page Layout and Visual Design
A key difficulty encountered by users was understanding the results from GrantFinder.
Participants expected a small number of relevant potential funding opportunities and reported that they did not understand the complexities of the cluster section. They also found the layout of the results page confusing: the text does not wrap correctly, making parts of the page difficult to read and use.
“What’s quite pertinent is that the text does not wrap around. The layout is incorrect
and that needs to be immediately assessed. It should be readable.” (AD4)
Figure 14: Results Page – Visual Design
Concerns were also raised about the layout and design of the tutorial page. The layout
is unclear, with several users commenting negatively on the visual design and layout
of the page.
4.3.6. Writing & Content Quality
Findings revealed that a mix of languages (English and Spanish) is used, particularly in the standard words on the search page. The standard words are not in any order or sequence, and they also include a number of repeated words with different scores, which did not make sense to users.
“I’m not really sure what the difference between ‘elgib’ and ‘eligibility’ and why they
are two different words. And so similarly, there is another word that is ‘eligib.’ And as
well there are two similar words like applicant and applicant. What’s the difference
between the two?” (P1)
Figure 15. Search Page – Standard Words
A number of spelling mistakes and typos were noticed by the participants, indicating that the site has not been adequately proof-read.
Figure 16. Tutorial Page: Spelling Mistakes
4.3.7. User Perceptions
Users found the website very complicated, mainly due to the unclear instructions and scrambled structure. Many users were also frustrated by the lack of explanation provided. The website requires users to go through a three-stage data entry structure for grant finding, which was labelled ‘old fashioned’ by most of the participants. Users particularly struggled with the concept of applying a score to their mandatory words and were unsure how it would affect their search; this confusion soon led to frustration for many participants. Users also repeatedly complained about the lack of clickable links and drop-down menus.
4.4. Post-Test Questionnaire Findings
Reviewing the post-test questionnaires confirmed that the majority of the participants found the website to be complex and unfriendly (see Appendix 19). Participants complained that searching for grants based on the poor search criteria was difficult and that the lack of real-time results made the tool less efficient. Again, participants found the use of scores difficult to comprehend because there was no indication of the units used: the scores could mean a percentage, a number of occurrences, etc., but this was not clear. Participants also recommended using a simpler, more user-friendly interface and using only one language for the tool. Most participants stated that they would not use GrantFinder again and said they would be more likely to use an alternative website in order to find grants in the future, as GrantFinder did not meet their expectations.
4.5. Triangulation
Results from the heuristic evaluation and the usability testing were triangulated. The findings, summarised below, show that similar results were found through the two methods.
Function: Search
Heuristic evaluation results:
• Lack of real-time results.
• Difficult to know whether a search was effective.
Usability test results:
• Users acknowledged the lack of real-time results.
• Users found the search function difficult to use and were confused as to whether their search was effective or not.

Function: Forms & Data Entry
Heuristic evaluation results:
• Users must enter information using basic text entry fields.
• Speciality drop-down menu is restrictive.
• Lack of use of drop-down menus, check boxes and clickable areas.
• No clear distinction between required and optional entry fields.
Usability test results:
• Entry of information is outdated.
• Speciality fields are narrow.
• Preference for a more filtered search containing clickable areas and filtered search options.
• Users were unsure of the required data inputs.

Function: Help, Feedback & Error Tolerance
Heuristic evaluation results:
• Tutorial page may cause confusion.
• Tutorial page contains information overload.
• FAQ page contains no information.
• Options between field inputs are not obvious to users.
Usability test results:
• Users had to reread the tutorial page several times.
• Users experienced information overload.
• Users were frustrated by the lack of content on the FAQ page.
• Users were confused by the mandatory word input and the use of standard words.

Function: Homepage
Heuristic evaluation results:
• Homepage contains a blurb but it appears irrelevant.
• Irrelevant image included.
Usability test results:
• Homepage contains very little information about the site and what it does.
• An unrelated image is included.

Function: Page Layout & Visual Design
Heuristic evaluation results:
• Simple user interface.
• Little use of images.
• Concerns raised about the layout of the results page.
Usability test results:
• Users admired the simple design.
• Design needs to include more images and screenshots, particularly on the tutorial page.
• Users criticised the layout of the results page.

Function: Task Orientation
Heuristic evaluation results:
• Users have to re-enter their email address with each search.
Usability test results:
• Users commented negatively on the re-entry of the email address.
5. Discussion
5.1. Usability of GrantFinder
The usability of a website can be described as the effectiveness and efficiency of the website (Guidance on Usability, 2015). The key principles on which the usability of a website depends are availability, accessibility, learnability, efficiency, memorability, clarity and satisfaction (Idler, 2013). These key principles are of the utmost importance when discussing GrantFinder's usability, particularly in terms of the website's layout, navigation, interface, written content and functionality. The findings of this report identify the key issues involving GrantFinder and its accessibility to users.
Sonderegger et al. (2016) define the effectiveness of a website as the degree to which a task is successfully carried out, i.e. the task completion rate, accuracy, quality of outcome and task completion time. GrantFinder seems to adhere to most of these qualities except the time taken to receive a search result: the website takes a few hours to produce results, which proved a problem for many of the participants. Participants commented that many other websites, such as Research Professional, provide real-time results, and many said that if they have to wait for results they would probably search elsewhere.
Shneiderman (2005) argues that the design of a website should be balanced with its functionality. High complexity and clutter of information, such as the tutorial page that users described as "text-heavy", can lead to user frustration. Shneiderman stresses the need for a site to really understand its users in order to achieve an adequate balance effectively. Decluttering the tutorial page, providing information in the FAQs and correcting all the typos would create a more user-friendly interface.
Jain (2013) emphasizes the important factors to consider when building a website, placing particular emphasis on the home page. The author suggests that the images used on a website should represent what the website offers. In this regard, GrantFinder does not achieve its objective, as the image shown on the homepage is not relevant to what the website offers.
“An impractical or a confused website can break a company’s success” (Wahmarticles, n.d.). It is important not only to make a resourceful website, but one that is easily navigated, because it is the user for whom the website has been designed. GrantFinder can be used, but it may not be ready for use: several participants who were confused by the website's functionality questioned what stage of development GrantFinder was at and whether it was ready for use. Questions such as these imply that the user is not satisfied with the website as currently presented to them.
5.2. Future Directions
The future direction of GrantFinder may be unclear due to the number of adjustments needed to improve the site and its current design. A possible approach would be to begin a complete rework of the software and then move forward with qualitative analysis.
5.2.1. Hot Jar
A direction in which GrantFinder could move in the future is qualitative analysis of user behaviour. User online behaviour can be revealed via tools such as Hotjar, which combines analytics and feedback. GrantFinder's analysts would be able to improve the user experience by observing and measuring user behaviour. Installing Hotjar within GrantFinder is straightforward and simply requires registration; once registration is successful, a tracking script is generated and the developer needs to add this script to GrantFinder's main template page (https://www.hotjar.com/).
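At the time of writing, the generated snippet followed roughly the pattern below; the exact version, including the real site ID, should be copied from the Hotjar dashboard rather than from this sketch. It is placed inside a script element in the shared template so that it loads on every page.

// Approximation of the Hotjar loader (copy the real version, with the actual
// site ID, from the Hotjar dashboard; the ID below is only a placeholder).
(function (h, o, t, j, a, r) {
  h.hj = h.hj || function () { (h.hj.q = h.hj.q || []).push(arguments); };
  h._hjSettings = { hjid: 0, hjsv: 6 };            // 0 = placeholder site ID
  a = o.getElementsByTagName('head')[0];
  r = o.createElement('script');
  r.async = 1;
  r.src = t + h._hjSettings.hjid + j + h._hjSettings.hjsv;
  a.appendChild(r);
})(window, document, 'https://static.hotjar.com/c/hotjar-', '.js?sv=');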
Heatmap
Heatmaps help in understanding user needs by visually representing users' clicks, scrolls and taps. Heatmaps reveal user motivation and desire.
Figure 17. Hotjar: Heatmap
Recording
The recording tool will store information from visitor sessions about their interactions with GrantFinder. Using a WebSocket connection, the script will send information such as keystrokes and mouse clicks to the Hotjar server, and the stored recordings will help to monitor user actions.
Figure 18. Hotjar: Recording
Funnel
With Funnels, GrantFinder's analysts will be able to identify the pages from which users are leaving the website and to trace GrantFinder's bounce rate.
Figure 19. Hotjar: Funnel
Form Analytics
Form Analytics will help to improve the search page's submission form by identifying the fields that take users the longest to complete. It will also help to reduce the number of visitors who abandon the form.
Figure 20. Hotjar: Form Analytics
5.2.2. URL/Host
When users try to find grants through search engines such as Google, it is unlikely that they will reach the URL http://bio-hpc.ucam.edu/GrantFinder/. The site is hosted by BIO-HPC at the Universidad Católica San Antonio de Murcia, so users must go through the university's site to gain access to the tool. A direction in which GrantFinder could move in the future would be to host the software on an independent site, making it easier for users to access. To encourage usage of GrantFinder, the URL could then be restructured using a different domain name.
6. Recommendations
6.1. Forms and Data Entry
6.1.1. Search
As users found the use of five mandatory words limiting, it is recommended that the developer allow an unlimited number of search fields. This would allow users to narrow down their searches and increase their specificity. The developer could also incorporate free-text searching, which would give users the freedom to specify exactly what they require in a search.
6.1.2. Score
The concern raised most frequently by participants was the score given to the standard input words: participants had no idea how to interpret the scores or how to incorporate them into a search. To overcome this problem, the site could provide a detailed explanation of what a score means, how it is used and how it relates to the results that the user obtains. If the developer of the website does not wish to reveal the workings of the search engine, the score section should be removed from the website.
6.1.3. Standard Words
The participants felt that the standard words were too broad in their search criteria and did not give specific search results. The developer could incorporate speciality-specific words and a free-text search option through which users could enter specific words that are of relevance (see Figure 21). The standard words are listed as a table with corresponding scores on the search page, and many participants mistook these words for links. This problem can be fixed by providing the words as a drop-down menu or as clickable links that are entered automatically into the mandatory-words search box of the user's choice, as sketched below.
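A minimal sketch of this behaviour, using the jQuery already in the site's stack (the class names and selectors are assumptions, not GrantFinder's actual markup):

// Sketch: clicking a standard word copies it, and its score, into the first
// empty mandatory-word field.
$('.standard-word').on('click', function () {
  var word = $(this).data('word');
  var score = $(this).data('score');
  var emptyField = $('input.mandatory-word').filter(function () {
    return $(this).val() === '';
  }).first();
  if (emptyField.length) {
    emptyField.val(word);
    emptyField.siblings('input.score').val(score); // fill the paired score box
  }
});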
Figure 21. Standard Words Recommendation
In Figure 21, the user is presented with all the specialities, each with a radio button; as soon as the user chooses one option, all other options are disabled. Using the drop-down menu, users can see the list of predefined words for the selected speciality and choose accordingly.
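A minimal sketch of this behaviour is given below. The element IDs, classes and word lists are assumptions made for illustration and do not correspond to the current GrantFinder markup.

    // Hypothetical IDs/classes and example word lists, for illustration only.
    const wordsBySpeciality: Record<string, string[]> = {
      Architecture: ['application deadline', 'application requirement', 'application guide'],
      Nutrition: ['eligibility', 'non-U.S. citizens'],
      // ... one entry per speciality
    };

    const dropdown = document.getElementById('standard-words') as HTMLSelectElement;

    // When a speciality radio button is chosen, repopulate the dropdown with its predefined words.
    document.querySelectorAll<HTMLInputElement>('input[name="speciality"]').forEach(radio => {
      radio.addEventListener('change', () => {
        dropdown.innerHTML = '';
        (wordsBySpeciality[radio.value] || []).forEach(word => {
          dropdown.add(new Option(word, word));
        });
      });
    });

    // When a word is picked, copy it into the first empty mandatory-word box.
    dropdown.addEventListener('change', () => {
      const boxes = document.querySelectorAll<HTMLInputElement>('input.mandatory-word');
      const empty = Array.from(boxes).find(box => box.value === '');
      if (empty) { empty.value = dropdown.value; }
    });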
Taking these recommendations into account, the wireframe shown in Figure 22 has
been developed to indicate the suggested revisions to the search page.
Figure 22. Search Page Wireframe
6.2. Functionality
6.2.1. Real-Time Results
It is recommended that users receive results as soon as they click the search button.
If this is not yet achievable, the wording of the job submission message should be changed, as its current tone is casual and not appropriate for the software.
Figure 23. Current Job Submission Message
6.2.2. Personalisation/User Profile
To optimise ease of use and reduce the user's workload, users should be able to create their own profiles. Users should be able to save essential details such as their email address, previous searches, previous results, and favourite keywords and grants. They should also be able to save their search results and share these via email and other social media. In this way, users would not have to enter their email address for each search query and would be aware of their own search history. Figure 24 is an example of the entry point for users to begin creating their own account.
Figure 24. Example Sign Up Form for a Personal Account
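As a rough sketch of the data such a profile would need to hold (the field names below are illustrative assumptions rather than an existing GrantFinder schema):

    // Illustrative only: a possible shape for a saved GrantFinder user profile.
    interface SavedSearch {
      jobId: string;            // ID returned by the job submission page
      speciality: string;
      mandatoryWords: string[];
      excludedWords: string[];
      submittedAt: string;      // ISO date, e.g. '2016-08-01'
    }

    interface UserProfile {
      email: string;                  // entered once at sign-up, reused for every search
      favouriteKeywords: string[];
      favouriteGrants: string[];      // links to grant calls the user has starred
      searchHistory: SavedSearch[];   // previous searches and their results
    }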
6.2.3. Main Grants on Homepage
A user's location can easily be identified from their IP address. It is recommended that, using this IP address, the homepage highlight the main grants published in the user's location/country. These grants could be presented on the homepage in a news feed or carousel format. Implementing this would provide the user with grants without their having to search first, and would also give users an idea of the type of grant calls they may receive in a search result. Figure 25 is an example of the main grants displayed on the homepage.
Figure 25. Main Grants on Homepage
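A rough sketch of the idea follows. The '/api/geoip' and '/api/grants' endpoints and the 'main-grants' element are hypothetical placeholders for whatever server-side lookup and markup the developers choose; GrantFinder does not currently expose such services.

    // Hypothetical endpoints and element ID, for illustration only.
    async function loadLocalGrants(): Promise<void> {
      // Server-side lookup of the visitor's country from their IP address.
      const geo = (await fetch('/api/geoip').then(res => res.json())) as { country: string };

      // Fetch recent grant calls published for that country.
      const grants = (await fetch('/api/grants?country=' + encodeURIComponent(geo.country))
        .then(res => res.json())) as { title: string; url: string }[];

      // Render them into a simple news-feed style list on the homepage.
      // (Simplified: a real implementation should escape the strings before insertion.)
      const feed = document.getElementById('main-grants');
      if (!feed) { return; }
      feed.innerHTML = grants
        .map(g => '<li><a href="' + g.url + '">' + g.title + '</a></li>')
        .join('');
    }

    loadLocalGrants();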
6.3. Help, Feedback & Error Tolerance
6.3.1. Tutorial
The tutorial page needs to reduce the number of words used to explain the search process. The process should be explained in three to five distinct steps, presented in a logical order, and sentences should be clear and concise. Highlighting and emphasising keywords is encouraged, and hyperlinks should be used where possible.
The information provided in this section should be reduced to only what is essential. The results access section of the tutorial page should also be displayed in a user-friendly way; bullet points are advised. Screenshots and images of the website are encouraged where appropriate, and any images included should be of high resolution so that users can clearly see and interpret them.
A video tutorial should be included, in which a user performs a sample search. Each step of the search process should be clearly defined, and the video should last between 45 and 90 seconds. As with all pages of GrantFinder, the Tutorial page should be proof-read.
The following wireframe shows the recommended format for the Tutorial page, based
on the feedback from the participants (See Figure 26.)
Figure 26. Tutorial Page Wireframe
6.3.2. Email Validation/Error Notification
An issue detected during the testing was that users could complete a search without entering their email address and a successful job submission message was still displayed. Users may be misled by this message and assume that they have completed a search correctly. It is recommended that the successful job submission page should not appear if an email address has not been entered. Instead, an error notification or reminder should appear on the mandatory email address field, which must be completed before proceeding (see Figure 27). The website needs to ensure form validation, and inline validation is recommended here.
Figure 27. Inline Form Error Notifications
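A minimal sketch of such inline validation is shown below; the element IDs ('search-form', 'email', 'email-error') are assumptions for illustration and would need to match the real markup.

    // Illustrative inline validation; element IDs are hypothetical.
    const form = document.getElementById('search-form') as HTMLFormElement;
    const emailInput = document.getElementById('email') as HTMLInputElement;
    const emailError = document.getElementById('email-error') as HTMLElement;

    function emailLooksValid(value: string): boolean {
      // Deliberately simple check: something@something.something
      return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
    }

    // Validate as the user leaves the field, not only on submit.
    emailInput.addEventListener('blur', () => {
      emailError.textContent = emailLooksValid(emailInput.value)
        ? ''
        : 'Please enter a valid email address.';
    });

    // Block submission (and the misleading success page) until the email is valid.
    form.addEventListener('submit', event => {
      if (!emailLooksValid(emailInput.value)) {
        event.preventDefault();
        emailError.textContent = 'An email address is required to submit a search.';
        emailInput.focus();
      }
    });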
A similar issue may occur on the results page: any Job ID can be entered with any email address. This breaches confidentiality as well as user trust and site credibility. It is important that user information and user searches remain confidential and available only to the correct user. It is suggested that this section also require a password to improve security. If an incorrect combination is entered, an appropriate error message should be displayed, similar to the one shown when an incorrect email address and password are entered (see Figure 28).
Figure 28. Combination Error. A similar approach to Google's should be utilized on the Results page.
6.3.3. FAQ Section
Currently, the FAQ page is completely blank. In order to reduce user confusion and increase website use, it is recommended that the FAQ section be completed with relevant questions and answers.
The FAQ page should avoid jargon and should be written in simple, user-friendly language that potential users can understand. A suggested layout is illustrated in Figure 29. It is recommended that GrantFinder use keywords and phrases relevant to the website in the FAQ section. This can help to boost the site's SEO: Google and other search engines can then understand what type of business or tool the website is and rank it accordingly for relevant searches.
Figure 29: Suggested FAQ
6.4. Writing & Content Quality
6.4.1. Language
GrantFinder contains a mix of languages across the site, particularly in the standard words section of the search page. Standard words are duplicated in more than one language; these should be standardized to English. A dropdown menu allowing users to select the language in which they wish to continue before completing a search query is recommended (see Figure 21). The default language should be English.
6.4.2. Ordering of Lists
The standard words and the speciality field should be arranged in alphabetical order (see Figures 30 & 31). This will make it easier and quicker for users to find what they are seeking.
Figure 30. Alphabetically Ordered Clickable Standard Words
Figure 31. Speciality: Alphabetically Ordered Dropdown Menu
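This is a small change to implement; a sketch, assuming the word and speciality lists are available as plain arrays, is:

    // Sort the speciality list (and, likewise, the standard words) alphabetically before rendering.
    // localeCompare gives a case- and accent-aware ordering.
    const specialities = ['Drugs', 'Medicine', 'Architecture', 'Nutrition', 'Sports'];
    const sortedSpecialities = [...specialities].sort((a, b) => a.localeCompare(b));
    // -> ['Architecture', 'Drugs', 'Medicine', 'Nutrition', 'Sports']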
6.4.3. Glossary
Users should be able to find the meaning or definition of keywords used by GrantFinder. Providing a keyword glossary on the tutorial page would achieve this (see Figure 32).
Figure 32. Glossary of Keywords Example
6.5. Layout and Design
6.5.1. Home Page
We recommend that the developers design a logo and display it on the homepage. This will allow users to register immediately that they have found the service they are looking for. It is also important to include information on the tool itself for new users. This will give users an opportunity to gain some insight into how the tool works and whether or not it is the right service for them. Images not directly related to the software should be removed.
Findings from the testing indicated that it would be useful to incorporate some aspects of the tutorial page into the homepage. As such, a video tutorial is included on the homepage wireframe, along with links to the tutorial page and to a sample search page in the same area. This will allow users to practise engaging with the software.
As previously mentioned, a newsfeed or carousel of familiar funding bodies should be present, so that users will see a familiar name and be more likely to trust GrantFinder. Recommendations for alterations to the layout of the homepage are illustrated in Figure 33.
Figure 33. Home Page Wireframe
6.5.2. Results Page
As with the search page, simple but important alterations are required in order to
improve the appearance and usability of the results page.
In Figure 34, the display of the actual word clusters has been removed; participants in the testing regarded this as confusing and cumbersome. Instead, the results page shows the title of the grant, the funding body, and a hyperlink that takes the user directly to the webpage of the grant call. The number of word clusters and the score still appear, but are clearly labelled so that the user can identify them.
Figure 34. Results Page Wireframe
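A sketch of how each row of the redesigned results table could be structured is shown below; the field names are assumptions for illustration rather than the current GrantFinder output format.

    // Illustrative shape for one row of the redesigned results table.
    interface GrantResult {
      title: string;         // title of the grant call
      fundingBody: string;
      url: string;           // direct link to the grant call's webpage
      wordClusters: number;  // labelled explicitly, rather than listing the clusters themselves
      score: number;
    }

    function renderResultRow(result: GrantResult): string {
      // Simplified: a real implementation should escape the strings before insertion.
      return '<tr>'
        + '<td><a href="' + result.url + '">' + result.title + '</a></td>'
        + '<td>' + result.fundingBody + '</td>'
        + '<td>Word clusters: ' + result.wordClusters + '</td>'
        + '<td>Score: ' + result.score + '</td>'
        + '</tr>';
    }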
7. Conclusion
The Capstone project was completed through the work of four master’s students from
the School of Information and Communication Studies over a nine-month period (See
Appendix 20). The project's aims of identifying the most prominent usability issues with GrantFinder, developing a series of recommendations and ultimately evaluating the usability of GrantFinder itself were achieved.
The usability issues identified are clearly outlined in this report. Ultimately, the data collected through the heuristic evaluation, usability tests and questionnaires enabled the team to evaluate the usability of GrantFinder. Through the triangulation of these data collection techniques, the team was able to make appropriate and valid recommendations that will improve the use and usability of the site. These recommendations will be communicated to the developers of the site.
References
Alshamari, M., & Mayhew, P. (2009). Technical review: Current issues of usability
testing. IETE Technical Review, 26(6), 402-406. doi:10.4103/0256-4602.57825
Atkinson, R. & Flint, J. (2001). Accessing hidden and hard-to-reach populations:
snowball research strategies. Social Research Update 33. Retrieved from
http://sru.soc.surrey.ac.uk/SRU33.pdf
Beall, J. (2007). Search fatigue: Finding a cure for the database blues. American Libraries, 38(7), 46-50.
Becker, D. (2011). Usability testing on a shoestring: Test-driving your website. Wilton: Online Inc.
Blake, R. (1989) Integrating Quantitative and Qualitative Methods in Family
Research. Families Systems and Health 7:411–427
Bogdan, R. C. & Biklen, S. K. (2006). Qualitative research in education: An
introduction to theory and methods. Allyn & Bacon.
Caracelli, V. J., & Greene, J. (1993) Data Analysis Strategies for Mixed-Method
Evaluation Designs. Educational Evaluation and Policy Analysis 15(2): 195-207
Duarte, E., Oliveira, Edson, J., Côgo, F., & Pereira, R. (2015). Dico: A conceptual
search model to support the design and evaluation of advanced search features for
exploratory search. Human Computer Interaction, 9299 87-104.
Dumas, J. S., & Redish, J. (1999). A practical guide to usability testing (Rev. ed.).
Exeter: Intellect.
Elliott, A., Hearst, M., English, J., Sinha, R., Swearingen, K., & Yee, K. (2002). Finding the flow of web site search. Communications of the ACM, 45(9), 42-49. Retrieved from http://bailando.sims.berkeley.edu/papers/cacm02-final.html
Emiko, D. (2012). Eight crazy Kickstarter campaigns that should never have succeeded. The Guardian. Retrieved from https://www.theguardian.com/technology/2014/jul/08/eight-crazy-kickstarter-campaigns-that-should-never-have-succeeded
Guarte, J. M., & Barrios, E. B. (2006). Estimation under purposive sampling.
Communications in Statistics - Simulation and Computation, 35(2), 277-284. Doi:
10.1080/03610910600591610
Haney, W., Russell, M., Gulek, C., & Fierros, E. (Jan-Feb, 1998). Drawing on
education: Using student drawings to promote middle school improvement. Schools
in the Middle, 7(3), 38- 43.
Herther, N. (2009). Grantsmanship: Information resources to help researchers get funding. University of Minnesota.
Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability
studies and research. International Journal of Human - Computer Studies, 64(2), 79-
102. doi:10.1016/j.ijhcs.2005.06.002
Hughes, M. (2011). Reliability and Dependability in Usability Testing. Retrieved from
http://www.uxmatters.com/mt/archives/2011/06/reliability-and-dependability-in-
usability-testing.php
Idler, S. (2016, August 16). 5 key principles of good website usability. Retrieved
from
https://blog.crazyegg.com/2013/03/26/principles-website-usability
ISO/IEC. (1998). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability.
Jain, M. (2016, August 16). Important factor to consider when building a website.
Retrieved from
http://blog.clicktraffic.com/building-website-factors/
Kolbe, R. H. and Burnett, M. S. (1991). Content-analysis research: An examination
of applications with directives for improving research reliability and objectivity.
Journal of Consumer Research, 18, 243-250.
Krug, S. (2005). Don't make me think: A common sense approach to web usability
(2nd ed.). Indianapolis, Ind; London; New Riders.
Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and
fixing usability problems. Berkeley, CA: New Riders.
Lewis, C. H. (1982). Using the "Thinking Aloud" Method In Cognitive Interface
Design (Technical report). IBM.
Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2010). Intercoder reliability.
Malchanov, D. (2014). 22 Principles of good website navigation and usability.
Retrieved from
http://swimbi.com/blog/22-principles-of-good-web-navigation-and-maximum-usability/
Nielsen, J. & Molich, R. (1990). Heuristic evaluation of user interfaces. Proceedings
of the ACM CHI’90, pp.249-256
Nielsen, J. (1993). 10 usability heuristics for user interface design.
Nielsen, J. (1994). Usability engineering. Elsevier.
Nielsen, J. (1995). 10 usability heuristics for user interface design. Retrieved from http://www.nngroup.com/articles/ten-usability-heuristics/
Nielsen, J. (2012). How many test users in a usability study? Retrieved from
https://www.nngroup.com/articles/how-many-test-users/
Nielsen, J. (2012). Thinking aloud: the #1 usability tool. Retrieved from
https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/
Pena-Gracia, J., Den-hann, H., Caballero, A., Imbernon, B., Ceron-Carrason, J.P., Vicente-Contreras, A., & Perez-Sanchez, H. (2016). GrantFinder, a web based tool for search of research grant calls.
Porter, R. (2009). Can we talk? Contacting grant program officers. Research Management Review, 17(1), 2-5.
Ray, D.S. & Ray, E.J. (1999). Matching internet resources to information needs: an
approach to improving internet search results. Technical Communication, 46(4), 569-
574.
Rossman, G., & Wilson, B. (1991). Numbers and Words Revisited: Being
“Shamelessly Eclectic.” Evaluation Review 9(5):627–643.
Rubin, J. & Chisnell, D. (2008). Handbook of Usability Testing: How to Plan, Design
and Conduct Effective Tests (2nd ed.). Retrieved from
http://www.ac4d.com/classes/301/HandbookofUsabilityTestingHowtoPlanDesignand
CondChapter8PrepareTestMateri.pdf
Shneiderman, B. (2005). Designing the user interface (4th ed.). New York: Pearson Addison-Wesley.
Sonderegger, A., Schmutz, S., & Sauer, J. (2016). The influence of age in usability
testing. Applied Ergonomics, 52, 291-300. doi:10.1016/j.apergo.2015.06.012
Stemler, S. (2001). An overview of content analysis. Practical Assessment, Research & Evaluation, 7(17). Retrieved from http://PAREonline.net/getvn.asp?v=7&n=17
Spencer, D. (2010). A practical guide to information architecture. Penarth: Five
Simple Steps.
Travis, D. (2014). 247 Web Usability Guidelines. Retrieved from
http://www.userfocus.co.uk/resources/guidelines.html
U.S. Department of Health & Human Services, (2016, 11th July). Personas.
Retrieved from https://www.usability.gov/how-to-and-tools/methods/personas.html
Usability Architect (2016, August 15). The components of usability. Retrieved from http://www.usability-architects.com/compnts.htm
Wahm Articles (2016, August 15). The importance of usability in website design. Retrieved from http://www.wahm.com/articles/the-importance-of-usability-in-website-design.html
Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA: Sage.
W3C Web Accessibility Initiative (2016, 16 August). Accessibility evaluation resources. Retrieved from https://www.w3.org/standards/webdesign/accessibility
open-ended question. (2015). (4th ed.). Oxford University Press.
Multistage sampling (Ch13) - Researchgate.
Appendices
Appendix 1: Heuristic Evaluation Checklist Example: Trust & Credibility
Appendix 2: Personas
Example 1. Librarian Persona
Example 2. PhD Persona
Example 3. Academic Persona
Example 4. Administrative Staff Persona
Appendix 3: Test Protocol
Testing protocol for GrantFinder Usability Testing
This document contains the testing protocol for the usability of GrantFinder.
Aims of testing:
• Discovering usability issues faced by participants
• Identifying possible solutions to issues discovered.
The GrantFinder software test aims to recruit 15 participants from the following groups: PhD students, postdoctoral students, and administrative and academic staff from University College Dublin. The usability testing of GrantFinder will take place within a two-week period, with each individual test lasting no more than one hour.
Test Requirements:
1. Material needed:
• Consent Form
• Notebook
• Pens
• Printed version of six scenarios
• Laptops
• Audio Recorder
2. Two team members are required for each test:
• Test Facilitator: One team member will be the test facilitator. They will facilitate
the test and read the script as required.
• Note Taker: The second team member will take notes during the testing. They will take care of all the external material used during the interview, such as the audio recorder, and will observe the mouse movements of the participant.
Appendix 4: Consent Form
GrantFinder Software Usability Test
I _____________ understand my role as a participant for the usability testing of the
(Print Name)
GrantFinder web software. I have been given the opportunity to ask questions in relation to
the research methods, as well as my role as a research participant. Thus, I hereby agree to
participate in a usability test that will be audio recorded. I understand that these subsequent
recordings will be listened to by the members of the research team and their supervisor. I
understand and give consent that my comments and interactions will be used in the final
research report and subsequent publications using anonymising codes. I am aware that
interview information will remain confidential at all times and that I am entitled to withdraw from
the research process at any stage.
Signed: ________________________
Date: ________________________
Appendix 5. Pre-Test Questionnaire
PRE TEST QUESTIONNAIRE
1. Do you have any experience in seeking or applying for grants?
IF ANSWER YES:
• How frequently do you seek or apply for grants?
• Do you use any particular websites?
• What aspects of those websites do you particularly like?
IF ANSWER NO:
• In terms of discovering resources, how skilled would you describe yourself?
• What websites or search tools would you use to discover resources?
• What do you consider to be the important aspects of a search engine tool?
2. Have you ever used GrantFinder before? If answer yes, was your search successful?
Appendix 6. Scenarios
Scenario: Tutorial
Task:
You are unsure how to perform a search using GrantFinder. Where would you find more
information about how to use this site?
Steps:
1. Start on homepage: http://193.147.26.44/GrantFinder/
2. Click on Tutorials
3. Scroll down to ‘3. Results Access’
Assumption: The participant will see the word Tutorial in the menu buttons at the top of the
page. We assume the participant will then read the tutorial information.
Scenario 1: Search: Mandatory Words
Task:
You are interested in conducting research on Georgian Buildings in Dublin. You are
particularly concerned about the closing dates and necessary criteria for funding applications
in this field. Conduct a search that takes this into account.
Steps:
1. Start on homepage:
2. Click on Search
3. Select Architecture from the drop down Speciality menu.
4. Enter up to 5 mandatory words and their scores including “application deadline”
“application requirement” “application guide” etc.
Assumption: The participant will understand the correlations between the information given
to them and the standard words available to them. We assume the participant will choose
“application deadline”, “application requirement” and “application guide”.
Scenario 2: Search: Mandatory/Excluded Words
Task:
You are a European citizen seeking funding for a research project concerning the general field
of health and nutrition. Your main priority is to discover whether or not you are suitable for the
relevant grant.
You are not concerned with who the investigator might be or whether the funding is a
fellowship.
Conduct a search taking all of these issues into account.
Steps:
1. Start on homepage:
2. Click on Search
3. Select Nutrition from the drop down Speciality menu.
4. Enter up to 5 mandatory words and their relevant scores, such as 'non-U.S. citizens', 'eligibility', 'eligib'.
5. Click green plus icon for excluded words.
6. Enter up to 3 excluded words such as ‘principal investigator’, ‘investigator’ and
‘fellowship’.
Assumption: Participants will use mandatory words such as ‘eligible’ and ‘non-U.S. citizens’.
However, there are four variations of the word ‘eligible’ and its score. This may confuse
some participants. Participants will also exclude words based on information provided.
Scenario 3: Search: Mandatory Words
Task:
You are seeking funding for a new research project in the field of pharmaceutical compounds.
In this instance you are interested in accessing funding from international bodies.
Steps:
1. Start on homepage:
2. Click on Search
3. Select Drugs from the drop down Speciality menu.
4. Enter up to 5 mandatory words and their relevant scores such as ‘new proposal’, ‘any
country.’ and ‘international funding’.
Assumptions: Users are expected to consider the choice of country and funding bodies
Scenario 4: Search: Open Speciality
Task:
You are interested in conducting research on the treatment of minor injuries. You wish to carry
this research out in the country where you are currently residing and would prefer to do so in
conjunction with an institute of higher learning.
Steps:
1. Start on homepage
2. Click on Search
3. Select either Medicine or Sports from the drop down Speciality menu.
4. Exclude standard words related to foreign funding bodies.
5. Use any mandatory words they feel would be relevant.
Assumptions: It is assumed that the participant will have to decide between specialities
‘Medicine’ and ‘Sports’.
It is assumed that the participant will exclude words related to foreign funding.
Scenario 5: Results Access
Task:
Earlier today, you conducted a search using GrantFinder. You have just received the following
email:
“You have received this email because you submitted a job to the GrantFinder server. A
summary of the results can be obtained via the Results page on GrantFinder.
Results will expire in two weeks.
In case of questions please reply directly to this email.”
Access your search results using the Experiment ID: 237 and email address:
majella.haverty@ucdconnect.ie.
Steps:
1. Start on Homepage
2. Click on Results
3. Enter experiment ID in experiment field.
4. Enter email address in email field.
5. Click Submit Data.
Assumptions: Users will be able to access results using relevant email and experiment ID.
Scenario 6: Results Evaluation
Task:
Using the search results you have just accessed, choose the highest-scoring result link that
contains 9 word clusters. Retrieve this information based on the results section of the tutorial
page.
Steps:
1. Click Results.
2. Enter experiment ID and email address (from previous task).
3. Click Submit Data
4. Select the link _______________________.
Assumption: It is assumed that the user will select the link with the highest score and the suggested number of word clusters. Participants may choose to refer back to the tutorial page.
Appendix 7: Post-Test Questionnaire
POST TEST QUESTIONNAIRE
1. What is your overall opinion of the website?
**If they have already answered them in Q1, don't ask Q2 & Q3.
2. Is there anything you like about the website?
3. Is there anything you dislike about the website?
4. What one major improvement would you make to the website?
5. What did you think of the use of standard words provided by the website?
6. Would you use this website again, if you had to search for grants? Why/why not?
7. Do you have any other comments about the website?
Appendix 8. Usability Test Script
Test Script
**Each participant must be read the exact same script in order to create fair testing conditions.
• Introduction
i. First, we would like to thank you for agreeing to take part in our usability test. The research
will be conducted by postgraduate students and supervised by Dr Judith Wusteman, all from the School of Information and Communication Studies at UCD.
ii. The purpose of this Capstone project is to evaluate the usability of the GrantFinder tool.
GrantFinder is a web-based tool that aims to help researchers find research grant calls.
iii. The aim of this test is to assess how easy the website is to use. The test will consist of a pre-test questionnaire, after which you will be given a series of predefined tasks to perform using GrantFinder. You will be asked to comment aloud while completing the tasks. The test will conclude with another brief questionnaire. The whole session will take less than an hour.
• Explain the Process
i. In terms of the test, you will complete 6 pre-defined tasks and one warm up task. We ask you
to use a think aloud approach, in which you will be asked to verbalise any opinions or thoughts while
you are completing the task at hand.
ii. The thoughts you verbalise simply allow us to know what you are thinking, why you are making
such a decision or how you feel about the website. These thoughts will be the primary source of our
data collection and our ability to evaluate the usability of GrantFinder.
iii. Following the pre-test questionnaire, we will ask you to complete a warm up task, in which I
will help and guide you through the process. For the remainder of the test, I may use verbal prompts
to elicit more thoughts and responses from you in relation to the website and tasks.
iv. If you have any questions, please do not hesitate to ask.
• Consent Process
i. Your participation is voluntary and you can withdraw from the research at any
time. With your permission, your comments and your interaction with the computer
during the session will be recorded. The recording will only be viewed by members of
the team and their supervisor.
ii. My research partner will also be taking some notes whilst you are completing
the tasks. Along with the recordings, the notes won’t be seen by anyone except the
people working on the project.
iii. Your identity will not be disclosed to anyone outside the student team and
their supervisor. In the project report and any subsequent publications, anonymising
codes (e.g. ‘Librarian’, ‘academic’, ‘administrator’, etc.) will be used.
**Ask Participant to read and sign the consent form.
• Reassurance
i. Before we begin, I want to assure you that it is the website we are testing and not you. This test does not reflect on you; there are no right or wrong answers.
ii. We encourage you to be as honest as possible, so please don't worry about causing offence when describing any difficulties or issues you face whilst using the website. Our aim is to use the information you provide to give the developer recommendations that can help improve the site.
iii. Again, we are using a think aloud approach, so the more you can talk about what you are
thinking when using the website, the better.
iv. Feel free to ask questions during the task. However due to the nature of the test, we may not
answer them straight away as we wish to see how you complete the tasks without any aid. Once the
test is completed, we will try to answer any questions you have.
• Pre Test Questionnaire
We will first begin with a few questions:
PRE TEST QUESTIONNAIRE
1. Do you have any experience in seeking or applying for grants?
IF ANSWER YES:
• How frequently do you seek or apply for grants?
• Do you use any particular websites?
• What aspects of those websites do you particularly like?
IF ANSWER NO:
• In terms of discovering resources, how skilled would you describe yourself?
• What websites or search tools would you use to discover resources?
• What do you consider to be the important aspects of a search engine tool?
Have you ever used GrantFinder before? If answer yes, was your search successful?
• Initial Reaction
**Prior to beginning the tasks, open the GrantFinder Homepage and ask participant the following
question:
• What are your first impressions of the website?
• Demonstration Task
We will start with a warm-up task, in which we encourage you to ask any questions about the
think aloud process. We may prompt you and encourage you to verbalise your thoughts during
this task.
Scenario: Tutorial
Task:
You have arrived on the GrantFinder website for the first time. You are unsure how to
potentially view your search results. Find more information.
You are unsure how to perform a search using GrantFinder. Where would you find
more information about how to use this site?
• Test Tasks
**For this stage, participants should complete tasks without any assistance from either member of the
research team.
** Researcher may prompt participant using non-leading questions such as:
What are you thinking? Why did you take that step?
We will now begin the usability test. For this part, we will not be able to help you complete any of the tasks. We may encourage you from time to time to think aloud.
Scenario 1: Search: Mandatory Words
Task:
You are interested in conducting research on Georgian Buildings in Dublin. You are
particularly concerned about the closing dates and necessary criteria for funding applications
in this field. Conduct a search that takes this into account.
Scenario 2: Search: Mandatory/Excluded Words
Task:
You are a European citizen seeking funding for a research project concerning the general field
of health and nutrition. Your main priority is to discover whether or not you are suitable for the
relevant grant.
You are not concerned with who the investigator might be or whether the funding is a
fellowship.
Conduct a search taking all of these issues into account.
Scenario 3: Search: Mandatory Words
Task:
You are seeking funding for a new research project in the field of pharmaceutical compounds.
In this instance you are interested in accessing funding from international bodies.
Scenario 4: Search: Open Speciality
Task:
You are interested in conducting research on the treatment of minor injuries. You wish to carry
this research out in the country where you are currently residing and would prefer to do so in
conjunction with an institute of higher learning.
Scenario 5: Results Access
Task:
Earlier today, you conducted a search using GrantFinder. You have just received the following
email:
“You have received this email because you submitted a job to the GrantFinder server.
A summary of the results can be obtained via the Results page on GrantFinder.
Results will expire in two weeks.
In case of questions please reply directly to this email.”
Access your search results using the Experiment ID: 237 and email address:
majella.haverty@ucdconnect.ie.
Scenario 6: Results Evaluation
Task:
Using the search results you have just accessed, choose the highest-scoring result link that
contains 9 word clusters. Retrieve this information based on the results section of the tutorial
page.
• Post-Test Questionnaires
POST TEST QUESTIONNAIRE
1. What is your overall opinion of the website?
**If they have already answered them in Q1, don't ask Q2 & Q3.
2. Is there anything you like about the website?
3. Is there anything you dislike about the website?
4. What one major improvement would you make to the website?
**If they have already answered, do not ask again.
5. What did you think of the use of standard words provided by the website?
6. Would you use this website again, if you had to search for grants? Why/why not?
7. Do you have any other comments about the website?
• Debrief & Thank You
i. We have come to the end of the usability test. We want to thank you for participating in
today’s test and giving us your time.
ii. If you have any questions now or later, we would be happy to try and answer any questions
you may have.
iii. Please feel free to contact us later, at any stage.
Thank you, again.
Appendix 9. Email to Participants
‘Dear XX,
Many thanks for agreeing to help us evaluate the GrantFinder web tool. Attached is some
further information about the testing sessions.
Would Thursday, Xth of June at Xam be convenient for you? The session shouldn't take more
than an hour and we will bring a laptop with us so we just need a quiet space in which to carry
out the testing. Would you like to carry out the test in your own office, a quiet space in your
own building, or would you like to come to the School of Information & Communication
Studies?
Although GrantFinder is publicly available, we would be grateful if you didn't try it out before
the test session, as we are interested in exploring participants' first impressions of the site.
Regards
John Kiely, Majella Haverty, Satish Narodey, Kunal Kalra’
Appendix 10: Information Leaflet
Usability Testing of the GrantFinder tool
Information Leaflet, May 2016
What is GrantFinder?
GrantFinder is a web-based tool that aims to help researchers find research grant calls.
Who is conducting the research?
The research is conducted by four postgraduate students and supervised by Dr Judith Wusteman from the School of Information and Communication Studies at UCD.
What are the aims of the research?
The aim of this Capstone project is to evaluate the usability of the GrantFinder tool.
What will participation in the research involve?
Two members of the student team will meet you, either in your office or in the School of Information
& Communication Studies. After a brief introduction and a questionnaire, you will be given a series of
predefined tasks to perform using GrantFinder. You will be asked to comment aloud while completing
the tasks. The test will conclude with another brief questionnaire. The whole session will take less
than an hour.
With your permission, your comments and your interaction with the computer during the session will
be recorded. The recording will only be viewed by members of the team and their supervisor.
Your identity will not be disclosed to anyone outside the student team and their supervisor. In the
project report and any subsequent publications, anonymising codes (e.g. "librarian", "academic",
"administrator" etc.) will be used.
What if I change my mind about participation in this research?
Your participation is voluntary and you can withdraw from the research at any time.
What if I have questions?
If you have any questions, please email Dr Judith Wusteman judith.wusteman@ucd.ie.
Appendix 11: Usability Testing Timetable
Appendix 12: Ethics Form
Human Subjects Exemption from Full Ethical Review Form
Including Access to UCD Students & University-Wide Surveys
An Exemption from Full Ethical Review is not an exemption from ethical best practice and all
researchers are obliged to ensure that their research is conducted according to HREC Guidelines.
Depending on the nature of the study described below, your study may require a preliminary review by the HREC Chairs and may be subject to further clarification.
Please do not alter the format of this form, and submit it as a Word document.
Section A: General Information
I apply for Exemption from Full Ethical Review of the research protocol summarised below,
on the basis that (select Yes or No):
a. All aspects of the protocol have received ethical approval from an approved body (e.g. Hospitals, hospices, prisons, health authorities): No
b. The research protocol meets one or more of the criteria for exemption from review as detailed in Section 3 of Further Exploration of the Process of Seeking Ethical Approval for Research (HREC Doc 7): Yes
I am also requesting permission to access UCD Students for one of the following (select Yes or No):
a. I am accessing students from one school only and will seek permission from the Head of that school: No
b. I am seeking permission to access UCD Students from more than one school (accessing students in more than one school will require HREC approval): Yes, but PhD students only (max 10), and I will request permission from the head of school and supervisor in all cases.
c. I am seeking permission to conduct a university-wide survey of UCD students (if the research is a campus-wide student survey and involves students from two or more schools, then permission to schedule the survey will be sought from the University Student Survey Board (USSB) on your behalf after this form has been reviewed by a HREC Chair and/or HREC Committee): No
I have also read the following Guidelines (select Yes or No):
(i) HREC Guidelines and Policies specifically Relating to Research Involving
Human Subjects http://www.ucd.ie/researchethics/policies_guidelines/
Yes
(ii) The UCD Data Protection Policy http://www.ucd.ie/dataprotection/policy.htm
Yes
(iii) The Data Protection Guidelines on Research in the health sector, (if
applicable)
https://www.dataprotection.ie/documents/guidance/Health_research.pdf
Yes
For all the latest versions of the HREC Policies and Guidelines please see the research ethics website:
http://www.ucd.ie/researchethics/policies_guidelines/
1. PROJECT DETAILS
a) Project Title: Evaluating the Usability of the GrantFinder Tool
b) Study Start Date (dd/mm/yy): 28/01/16; Study Completion Date (dd/mm/yy): 01/09/16
c) Start Date of Data Collection (dd/mm/yy): 01/05/16; Completion Date of Data Collection (dd/mm/yy): 01/09/16
NOTE: In no case will approval be given if recruitment and/or data collection has already
begun
1. APPLICANT DETAILS
a) Name of Applicant (please include title if applicable): Majella Haverty
UCD Student Number (if applicable): 15200660
b) Applicant's position in UCD (please put 'yes' in relevant space): Staff: No; Postgraduate: Yes; Undergraduate: No
c) Academic/Professional Qualifications: Bachelor of Religious Education with English (Hons)
d) Applicant's UCD Contact Details: UCD Telephone (if applicable): ; UCD Email (bearing reference to applicant's name): majella.haverty@ucdconnect.ie
e) Applicant's UCD Address (UCD school address, NOT home address): School of Information and Communication Studies, Belfield
f) Name of Supervisor (please include title if applicable): Dr. Judith Wusteman
g) Supervisor's UCD Contact Details: UCD Telephone: 7612; UCD Email: judith.wusteman@ucd.ie
h) UCD Investigator(s) and affiliations (name all investigators & co-investigators on project): All Masters students in UCD School of Information and Communication Studies: Majella Haverty, Satish Narodey, Kunal Kalra, John Kiely
i) Funding if applicable: Source: ; Amount:
j. EXTERNAL APPLICANTS ONLY (if study is not associated with any UCD staff member or school)
a. External Investigator(s) if applicable: Yes / No. If YES, please provide name(s):
b. Name of Organization: ; Relationship with External Organization:
c. Address of Organization:
d. External Investigator(s) if applicable:
e. Project Title:
f. Start Date of Data Collection (dd/mm/yy): ; Completion Date of Data Collection (dd/mm/yy):
k. INSURANCE Please note that UCD’s existing insurance policy providing cover in relation to
research work and placements, being undertaken by UCD staff and students, is currently
limited to Public Liability only. Provisions of other types of insurance cover, as listed in the
table below, are the sole responsibility of the researcher.
Please select Yes or No and provide details, where required. Please do not assume that you do not
require insurance.
NOTE: This section is mandatory – your application will not be processed unless this section is
completed.
i. Does this study require medical malpractice or clinical indemnity insurance? No
Is relevant insurance cover already in place? (Yes/No):
Insurance Holder's Name:
ii. Is this study covered by the Clinical Indemnity Scheme (CIS)? No
Healthcare Provider's Name:
iii. Is there any blood sampling involved in this study? No
Who will be taking samples?
Insurance details:
iv. Are there other medical procedures involved in this study? No
Details of Procedures:
v. Does this study involve travelling outside of Ireland? If Yes, please name the country/countries where the researcher will travel in the field below. No
Name country/countries outside of Ireland:
The Office of Research Ethics will liaise with the Insurers and will advise you of any specific requirements, if
necessary.
Section B: Research Design & Methodology
1. RESEARCH PROPOSAL
a. Methods of data collection (please select the appropriate box and provide
brief details)
i standard educational practices No
ii standard educational tests No
iii standard personality tests No
iv standard psychological tests No
v interviews or focus groups Yes
Usability tests. The duration will not exceed one hour per test. Each usability test will comprise tasks that participants will be asked to complete using the website, plus brief pre- and post-test questionnaires.
vi public observations No
vii persons in public office No
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder
Evaluating the Usability of GrantFinder

Más contenido relacionado

Similar a Evaluating the Usability of GrantFinder

ECE695DVisualAnalyticsprojectproposal (2)
ECE695DVisualAnalyticsprojectproposal (2)ECE695DVisualAnalyticsprojectproposal (2)
ECE695DVisualAnalyticsprojectproposal (2)
Shweta Gupte
 
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docxSCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
kenjordan97598
 
Interaction_Design_Project_N00147768
Interaction_Design_Project_N00147768Interaction_Design_Project_N00147768
Interaction_Design_Project_N00147768
Stephen Norman
 
28 09-2011 user-centred development of sakai survey tool
28 09-2011 user-centred development of sakai survey tool28 09-2011 user-centred development of sakai survey tool
28 09-2011 user-centred development of sakai survey tool
0xford
 
Social Work Research Planning a Program EvaluationJoan is a soc.docx
Social Work Research Planning a Program EvaluationJoan is a soc.docxSocial Work Research Planning a Program EvaluationJoan is a soc.docx
Social Work Research Planning a Program EvaluationJoan is a soc.docx
samuel699872
 
Workbook for Designing a Process Evaluation
 Workbook for Designing a Process Evaluation  Workbook for Designing a Process Evaluation
Workbook for Designing a Process Evaluation
MoseStaton39
 

Similar a Evaluating the Usability of GrantFinder (20)

Experience Research Best Practices
Experience Research Best PracticesExperience Research Best Practices
Experience Research Best Practices
 
UX Design Process | Sample Proposal
UX Design Process | Sample Proposal UX Design Process | Sample Proposal
UX Design Process | Sample Proposal
 
R.M Evaluation Program complete research.pptx
R.M Evaluation Program complete research.pptxR.M Evaluation Program complete research.pptx
R.M Evaluation Program complete research.pptx
 
Project evaluation
Project evaluationProject evaluation
Project evaluation
 
Collaborative work 2 Part 1
Collaborative work 2 Part 1Collaborative work 2 Part 1
Collaborative work 2 Part 1
 
Hwap pres__w bri cugelman nov2010
Hwap  pres__w bri cugelman nov2010Hwap  pres__w bri cugelman nov2010
Hwap pres__w bri cugelman nov2010
 
Online survey tools_google-forms_nv_nsh (2) (2)
Online survey tools_google-forms_nv_nsh (2) (2)Online survey tools_google-forms_nv_nsh (2) (2)
Online survey tools_google-forms_nv_nsh (2) (2)
 
Collaborative 2 ingrid margarita and sandra
Collaborative 2 ingrid margarita and sandraCollaborative 2 ingrid margarita and sandra
Collaborative 2 ingrid margarita and sandra
 
Usability study of a methodology based on concepts of ontology design to defi...
Usability study of a methodology based on concepts of ontology design to defi...Usability study of a methodology based on concepts of ontology design to defi...
Usability study of a methodology based on concepts of ontology design to defi...
 
LDA Program Enhancement - Class of 2011-12 LDA Collaborative Project Presenta...
LDA Program Enhancement - Class of 2011-12 LDA Collaborative Project Presenta...LDA Program Enhancement - Class of 2011-12 LDA Collaborative Project Presenta...
LDA Program Enhancement - Class of 2011-12 LDA Collaborative Project Presenta...
 
Part III. Project evaluation
Part III. Project evaluationPart III. Project evaluation
Part III. Project evaluation
 
ECE695DVisualAnalyticsprojectproposal (2)
ECE695DVisualAnalyticsprojectproposal (2)ECE695DVisualAnalyticsprojectproposal (2)
ECE695DVisualAnalyticsprojectproposal (2)
 
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docxSCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
SCIENTIFIC MERIT ACTION RESEARCH TEMPLATE (SMART) FORMa..docx
 
Interaction_Design_Project_N00147768
Interaction_Design_Project_N00147768Interaction_Design_Project_N00147768
Interaction_Design_Project_N00147768
 
28 09-2011 user-centred development of sakai survey tool
28 09-2011 user-centred development of sakai survey tool28 09-2011 user-centred development of sakai survey tool
28 09-2011 user-centred development of sakai survey tool
 
Social Work Research Planning a Program EvaluationJoan is a soc.docx
Social Work Research Planning a Program EvaluationJoan is a soc.docxSocial Work Research Planning a Program EvaluationJoan is a soc.docx
Social Work Research Planning a Program EvaluationJoan is a soc.docx
 
Pt Module 3
 Pt Module 3 Pt Module 3
Pt Module 3
 
SWA-Presentation2
SWA-Presentation2SWA-Presentation2
SWA-Presentation2
 
Organizational Capacity-Building Series - Session 6: Program Evaluation
Organizational Capacity-Building Series - Session 6: Program EvaluationOrganizational Capacity-Building Series - Session 6: Program Evaluation
Organizational Capacity-Building Series - Session 6: Program Evaluation
 
Workbook for Designing a Process Evaluation
 Workbook for Designing a Process Evaluation  Workbook for Designing a Process Evaluation
Workbook for Designing a Process Evaluation
 

Evaluating the Usability of GrantFinder

  • 1. Evaluating the Usability of GrantFinder Majella Haverty, John Kiely, Satish Narodey & Kunal Kalra School of Information and Communication Studies University College of Dublin This Capstone is submitted to University College Dublin in part fulfilment of the requirements for the degree of Masters of Library and Communication and Masters of Science in Information Systems, August 2016. Supervisor: Dr. Judith Wusteman
  • 2. Acknowledgements We would like to extend our sincere gratitude to our supervisor Dr. Judith Wusteman for all her advice, help and guidance throughout this Capstone project. The team would also like to take this opportunity to thank all the participants who contributed to the Capstone project and for taking time to participate in the usability tests.
  • 3. Contents I. Executive Summary 1. Introduction 1.1. GrantFinder 1.2. GrantFinder Website 1.3. Research Aims & Objectives 1.4. Research Approach 1.5. Changing Nature of Goals 1.6. Capstone Project Stakeholders & Participants 1.7. Capstone Team Roles & Tasks 2. Literature Review 2.1. Grant Finding 2.2. Search Functionality 2.3. Usability & Usability Testing 3. Methodology 3.1. Research Approach 3.2. Alternative Approaches 3.3. Heuristic Evaluation 3.4. Personas 3.5. Usability Testing 3.5.1. Think Aloud Approach 3.6. Test Design 3.6.1. Introduction 3.6.2. Pre-Test Questionnaire 3.6.3. Scenarios 3.6.4. Post Test Questionnaire 3.6.5. Script 3.7. Team Review 3.8. Pilot Review 3.9. Participants 3.10. Ethical Considerations 3.11. Data Analysis 3.11.1 Coding 3.12. Limitations 3.13. Reliability 4. Results 4.1. Heuristic Evaluation Findings 4.2. Pre-Test Questionnaire Findings 4.3. Usability Test Findings 4.3.1. Search
  • 4. 4.3.2. Forms & Data Entry 4.3.3. Help, Feedback & Error Tolerance 4.3.4. Homepage 4.3.5. Page Layout & Visual Design 4.3.6. Writing & Content Quality 4.3.7. User Perceptions 4.4. Post-Test Questionnaire Findings 4.5. Triangulation 5. Discussion 5.1. Usability of GrantFinder 5.2. Future Directions 5.2.1. Hot Jar 5.2.2. URL/Host 6. Recommendations 6.1. Forms & Data Entry 6.1.1. Search 6.1.2. Score 6.1.3. Standard Words 6.2. Functionality 6.2.1. Real-Time Results 6.2.2. Personalisation/User Profile 6.2.3. Homepage 6.3. Help, Feedback & Error Tolerance 6.3.1. Tutorial 6.3.2. Email Validation/Error Notification 6.3.2. FAQ Section 6.4. Writing & Content Quality 6.4.1. Language 6.4.2. Ordering of Lists 6.4.3. Glossary 6.5. Layout & Design 6.5.1. Homepage 6.5.2. Results Page 7. Conclusion 8. References
  • 5. Appendices Appendix 1: Heuristic Evaluation Checklist Example: Trust & Credibility Appendix 2: Personas Appendix 3: Test Protocol Appendix 4: Consent Form Appendix 5: Pre-Test Questionnaire Appendix 6: Scenarios Appendix 7: Post-Test Questionnaire Appendix 8: Usability Test Script Appendix 9: Email to Participants Appendix 10: Information Leaflet Appendix 11: Usability Testing Timetable Appendix 12: Ethics Form Appendix 13: Ethical Exemption Approval Appendix 14: Themes & Codes Appendix 15: Coding Hierarchy Chart Appendix 16: Coding Comparison Query Example Appendix 17: Thank You Email Appendix 18: Pre-Test Questionnaire Findings Appendix 19: Post-Test Questionnaire Findings Appendix 20: Gantt Chart Appendix 21: Group Reflection
  • 6. Table of Figures Figure 1: Homepage Figure 2: Tutorial Page Figure 3: Three Field Inputs Figure 4: Results Page Figure 5: Job Submission Page Figure 6: Contact Page Figure 7: Scenarios Figure 8: Heuristic Results Radar Chart Figure 9: Add Button Placement Figure 10: Current Tutorial Page Figure 11: Current FAQ Page Figure 12: Current Results Page Figure 13: Homepage Image Figure 14: Results Page Figure 15: Search Page: Standard Words Figure 16: Tutorial Page: Spelling Mistakes Figure 17: Hotjar: Heatmap Figure 18: Hotjar: Recording Figure 19: Hotjar: Funnel Figure 20: Hotjar: Form Analytics Figure 21: Standard Words Recommendation Figure 22: Search Page Wireframe Figure 23: Current Job Submission Message Figure 24: Example Sign Up Form for a Personal Account Figure 25: Main Grants on Homepage Figure 26: Tutorial Page Wireframe Figure 27: Inline Form Error Notifications Figure 28: Combination Error Figure 29: Suggested FAQ Figure 30: Alphabetically Ordered Clickable Standard Words Figure 31: Speciality: Alphabetically Ordered Drop Down Menu Figure 32: Glossary of Keywords Example Figure 33: Homepage Wireframe Figure 34: Results Page Wireframe
  • 7. i I. Executive Summary The primary aim of this project was to evaluate the usability of GrantFinder and to identify areas of the software that users had difficulty with. The usability of the tool was evaluated using a mixed methods approach. Firstly, the researchers carried out a heuristic evaluation. Following this, 15 participants recruited by the researchers performed usability testing. The testing consisted of think-aloud observations while performing pre-defined tasks as well as questionnaires. The participants were recruited based on personas developed by the researchers in order to identify the target user groups for GrantFinder. A series of recommendations was derived from this data. These recommendations aim to improve the overall usability of the tool. Findings The participants’ reactions were varied. While the concept behind the development was sometimes praised, users did not find GrantFinder to be intuitive or easy to use. Other usability issues included: • Layout and text wrapping on the results page • Reason for and inclusion of standard words and score • Empty FAQ section • Use of jargon • Inclusion of foreign language words Recommendations • Inclusion of a video tutorial and sample search for new users • Real-time results • Filtered and free-text searching • Clickable standard words • An error notification when a valid email address is not entered • Improved resolution of images and logos on the site • The inclusion of well-known grant providers on the homepage
  • 8. ii • Glossary of jargon used in the tool • Allow users to set up profiles and personalize their usage
  • 9. 1 1. Introduction This report describes the usability testing of the GrantFinder tool. The following groups participated in the tests: academics, administrators, PhD students and librarians from University College Dublin. The participants carried out a series of six pre-defined tasks. The activities and comments of the participants were observed, recorded and examined and usability issues were identified. Other issues related to the website’s performance were also highlighted through a heuristic evaluation. A number of recommendations were proposed. 1.1.GrantFinder GrantFinder is a web-based tool for searching research-based grant calls. It was developed and is hosted by the Structural Bioinformatics and High Performance Computing Research group, based in St. Anthony’s Catholic University, Murcia, Spain (http://bio-hpc.ucam.edu/GrantFinder/web/Contact/Contact.php). GrantFinder aims to provide a single point of access to all information on funding for a particular research field. These results are ranked according to score in descending order as well as the number of word clusters discovered. Each result includes a link to the website of the grant call (Pena-Gracia, Den-hann, Caballero, Vicente-Contreras and Perez-Sanchez, 2016). 1.2. GrantFinder Website GrantFinder is a unique search engine tool that allows users to have access to grants calls in a different way. It works in a two searching phases. First, it looks for all the grants with predefined words that are mentioned by the user in their search, after that a clustering technique is used to rank the grants based on the number of times the selected words are repeated. The website is developed on an Apache Tomcat Web server and runs on the SUSE Linux Enterprise Server 11. The user interfaces and web design are implemented
  • 10. 2 by a combination of several front-end technologies including JavaScript, JQuery, PHP and HTML (Peréz-Sánchez et al, 2016). GrantFinder works by listing a set of predefined standard words with a corresponding score. To acquire search results, three input fields can be used. The first input is the speciality field which is divided into five categories; drugs, medicine, architecture, nutrition and sports. The second input option is that of mandatory words. Users are given five input fields that they can fill according to their grant request. Mandatory words can be chosen from the predefined standard words. Users are required to enter a score with each mandatory word. The third input is excluded words, this is a non- mandatory field. Users can choose to define words that he/she wishes to exclude from the search. The Home page of GrantFinder, as illustrated in Figure 1, welcomes its user’s and provides links (top right) to the rest of the site. Figure 1: Homepage
  • 11. 3 Tutorial page of GrantFinder contains step by step instruction for the user (See Figure 2) Figure 2: Tutorial Page In the search page, users are required to fill in all the data columns to order to get their desired results (See Figure 3). Figure 3. Three Field Inputs
  • 12. 4 In the result section of the website, users can access the results and discover how many forms of clusters were found (See Figure 4). Figure 4: Results page This page is not initially visible on the website until a search query has been submitted. The user is redirected to this job submission page, where they are provided with their respective job_id. Figure 5: Job Submission Page
  • 13. 5 Figure 6: Contact Page 1.3. Research Aims and Objectives The primary research aims of our Capstone project are to evaluate the usability of the GrantFinder tool, to identify the most prominent usability issues and to develop a series of recommendations to solve these issues. In order to achieve this, the research objectives are as follows: 1. Create a test structure that evaluates the full functionality of the website. 2. Run a series of usability tests of the website among future user groups. 3. Analyse the issues that are highlighted during the testing procedure. 4. Make appropriate recommendations. 1.4. Research Approach Usability testing was the primary source of data collection, with participants asked to perform the tasks using a ‘think aloud’ approach. A modified version of David Travis’ (2014) comprehensive heuristic evaluation method, which pertains chiefly to website usability, was also applied to GrantFinder. 1.5. Changing Nature of Goals Initially, the project title was ‘Evaluating the Usability of the Nation, Genre and Gender Case Studies’; however, the case studies were not ready within the time frame and
  • 14. 6 thus the project was deemed unrealistic. The Structural Bioinformatics and High Performance Computing Research Group were interested in completing usability testing on their site, GrantFinder. While the subject of our usability testing had changed, the objective of usability evaluation and the scope remained the same. 1.6. Capstone Project Stakeholders and Participants This Capstone project was undertaken by four Master’s students under the supervision of Dr. Judith Wusteman, all from University College Dublin’s School of Information and Communication Studies. The research was conducted with the approval of the Structural Bioinformatics and High Performance Computing Research Group, as well as the website developer team, including Horacio Pérez-Sánchez. The participants of the usability tests were post-doctoral students, academic staff, administrative staff, librarians and PhD students from University College Dublin. 1.7. Capstone Team Roles and Tasks
Team Member: Tasks Assigned
Kunal Kalra: Ethical Approval, Data Collection & Recruitment
Satish Narodey: Ethical Approval, Data Collection, Data Analysis & Technical Support
Majella Haverty: Ethical Approval, Data Collection, Data Analysis & Administrative Duties
John Kiely: Ethical Approval, Data Collection & Literature Review
  • 15. 7 2. Literature Review 2.1. Grant Finding Herther (2009) states that “Behind every significant research innovation, each key social program, or improved scientific coverage in history lies some form of financial support”. Finding financial support in the age of the internet has never been more complicated. Not only can one apply for academic or federal grants, but popular crowdfunding organisations such as gofundme.org and Kick Starter allow the general public to offer money to causes that catch their interest, from niche technological advances to a squirrel census (Emiko, D. 2012). Search engines such as Grant Watch (https://www.grantwatch.com/grant-search.php) allow users the opportunity to find financial support across a range of different disciplines, from academia to not-for-profit organizations. The resource allows users to select an area of interest, potential funding bodies, and a geographical location in order to tailor their results. It is worth noting, however, that seeking an appropriate funding institution or organisation requires extensive web browsing and searching, as performing these searches using a popular search engine such as Google or Bing can be generic and will not provide comprehensive results without multiple attempts. This is very time consuming and does not optimize the search process. Peréz-Sánchez et al (2015, p1), in their introductory article on GrantFinder, highlight the need to create a service that not only finds and displays grant calls, but produces them conveniently with the most relevant information prioritised. 2.2. Search Many of the popular search engines available today allow for exploratory search. By way of using keywords or natural language questions, one can receive an answer to almost any query. We can take from this that there are two forces at work in online searching; the querist and the search engine. A carefully designed search engine will allow users to find information quickly and easily, but most users are not skilled searchers and are not interested in search itself. They are more invested in the results aspect of searching (Elliott et al, 2002). Indeed, the same study showed that browsing
  • 16. 8 results rather than straightforward results-seeking elicited a more positive response from the user, depending on what they were looking for (Elliott et al, 2002). The importance of search result retrieval cannot be overstated. It is often necessary to search more than once in order to find pertinent information through traditional search engines. Ray & Ray (1999, p.569) discuss the ways in which internet users expect search engines to operate as they want them to, rather than as they were designed to. Due to this misunderstanding between man and machine, it is often the case that users are not satisfied by the results that they receive. There is, of course, the option of advanced search. This option can, however, be daunting within a traditional search engine, given the reliance on simple keyword searches. In fact, many users may not understand what is required for an advanced search, given that they have little experience of using this function. Duarte et al (2015) provide a set of guidelines for search user interfaces, including a guideline on the prominence of the advanced search feature within a search engine. Its inclusion on the main search page, as well as on the subsequent results page, allows users to refine the information that they see in order to tailor the vast range of information available online to suit their needs. This not only allows them to broaden and narrow their searches, it also combats a common problem with online searching: search fatigue. In his 2007 article, Beall (2007, p.47) notes that users experience this issue because of synonyms, which a search engine will not recognize, and therefore a significant number of results are omitted. Standardizing metadata eliminates this problem to an extent. 2.3. Usability and Usability Testing The Nielsen Norman Group defines usability as “a quality attribute that assesses how easy user interfaces are to use” (Nielsen, 2012). In order to evaluate the usability of a website or search engine, one must first understand what it is that makes a resource ‘usable’. The following guidelines, taken from Nielsen (1993), aid developers and testers in ensuring that their website is doing its best for the target user. • Visibility of system status • Match between system and the real world • User control and freedom
  • 17. 9 • Consistency and standards • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help users recognize, diagnose, and recover from errors • Help and documentation Although usability is a multidimensional concept and no single key definition exists, descriptions of usability testing follow a similar pattern: in one form or another, they suggest that usability testing requires a number of prospective users to perform pre-identified tasks on a particular product/site (Alshamari & Mayhew, 2009; Becker, 2011; Dumas & Redish, 1999; Krug, 2006; Sonderegger, Schmutz & Sauer, 2015). A vast amount of literature gives precautionary advice not to confuse focus groups with usability testing (Becker, 2011; Dumas & Redish, 1999; Krug, 2006). Focus groups are used early in the process of designing websites and tell very little about how users actually behave with the site, which is why they should not be confused with usability testing. According to the 1998 ISO 9241-11: Guidance on Usability definition, usability is determined by effectiveness, efficiency and satisfaction in a specific context of use. Considering such usability measures, Sonderegger et al. (2016) refer to effectiveness as the degree to which a task is successfully carried out, that is, the task completion rate, accuracy, quality of outcome and other such factors. Efficiency, meanwhile, concerns the ease with which the task is carried out, i.e. the task completion time, error rate, learning time, etc. Hornbæk (2006) reviews current practices in usability testing and measurement; he encounters a similar pattern and extends it by summarizing satisfaction measures as preferences, standard questionnaires, satisfaction with the interface and others. However, it is worth noting that Hornbæk recognizes the limitations of his review, as he has only analysed research studies and not usability testing within the software industry. He also recognizes that the review does not account for how different tasks and domains impact the choice of usability measures. Yet he mentions an important factor, stating that ‘the conclusions of some usability studies are weakened by their choices of usability measures and by
  • 18. 10 the way they use measures of usability to reason about quality-in-use.’ It is essential to consider the usability measures and whether factors such as the ones mentioned above should be considered and measured during usability testing. While examining the effects of age in usability testing, Sonderegger et al. ‘point to the importance of considering differences between age groups and type performance measures when comparing results across studies.’ As a result of their testing, a strong correlation between perceived usability and effectiveness emerges, although not between perceived usability and efficiency amongst older users, as expected. Yet no such difference was found amongst younger users. This may be an important factor to consider when considering usability testing and measures of grant finding software, as the issue of grant finding can often transcend a wide variety of ages and disciplines. This may indicate that a combination of usability measures should be used, as suggested by Hornbæk’s review on recent research.
  • 19. 11 3. Methodology 3.1. Research Approach A mixed methods approach was used to avoid the weaknesses of any single approach (Blake 1989; Greene, Caracelli, and Graham, 1989; Rossman and Wilson 1991). Three data collection techniques were used: heuristic evaluation, pre/post-test questionnaires and a usability test. The principal method of research was usability testing: • The researcher read out a task to the participant and the participant performed that task. • The participant vocalized his/her thoughts while performing the task. • Tests were audio recorded. Qualitative research was the dominant model throughout the study. Nonetheless, the team completed a second data collection technique, a heuristic evaluation, prior to the usability tests in order to triangulate results and add a quantitative dimension to the research. 3.2. Alternative Approaches Alternative approaches and techniques were considered for the evaluation of the usability of GrantFinder. Focus groups, surveys and beta tests were all considered. Focus groups generally consist of 8 to 10 people and reveal users’ opinions, attitudes and preferences; however, a focus group does not usually let you see how users actually behave with the tool (Dumas & Redish, 1999, p.24). Thus, a focus group was deemed unsuitable, as it was inadequate for evaluating or assessing the usability of this tool. Surveys were also deemed unsuitable for the same reason. Despite beta tests allowing people to use the product in real environments to do real tasks, the technique was also rejected. Beta tests are considered ‘too little, too
  • 20. 12 unsystematic and much too late to be the primary test of usability’ (Dumas & Redish, 1999, p.24). Questionnaires and think aloud scenarios were the chosen techniques of the research team. The ‘think aloud’ approach was chosen for the following reasons: it is cost effective, flexible, robust and expressive (Nielsen, 2012). The think aloud approach allows users to verbalize their thoughts and enables researchers to comprehend how they view the tool. 3.3. Heuristic Evaluation The term heuristic evaluation essentially ‘describes a method in which a small set of evaluators examine a user interface and look for problems that violate some of the general principles of good user interface design’ (Dumas & Redish, 1999, p.65). These principles can be described as ‘heuristics’, which are generally broad rules of thumb rather than specific usability guidelines. Nielsen and Molich (1990) created nine basic usability principles to evaluate the usability of a product: 1. Use simple and natural language 2. Speak the user’s language 3. Minimize user memory load 4. Be consistent 5. Provide feedback 6. Provide clearly marked exits 7. Provide shortcuts 8. Provide good error messages 9. Prevent errors In 1995, Nielsen developed and expanded these guidelines further. Despite the extensiveness of Nielsen’s revised heuristics, Travis (2014) developed a comprehensive list that applies chiefly to the usability of a website, and Travis’ heuristic checklist (See Appendix 1.) was employed for this reason. GrantFinder was evaluated in the following areas: Home Page, Task Orientation, Navigation & IA, Forms & Data Entry, Trust & Credibility, Writing & Content Quality, Page Layout & Visual Design, Search, and Help, Feedback & Error Tolerance. Each section contains a
  • 21. 13 number of statements. These statements were interpreted in context and rated on a scale of -1 to +1, with -1 indicating that the website does not comply with the guideline, +1 indicating that it complies with the guideline and 0 indicating that it partially complies. The research team assumed the role of evaluators and rated the website using Travis’ checklist. Each member completed the list individually in an attempt not to skew the results. 3.4. Personas Having identified potential user groups for GrantFinder, the team created four personas based on user research (See Appendix 2.). Personas are representations of audience members and provide a rich description of your audience (Spencer, 2010, p.88). The purpose of these personas is to create reliable and accurate representations of your key audience targets for reference (U.S. Department of Health & Human Services, 2016). Using personas, the ideal users, their goals, behaviours, needs and interests were illustrated. The four user groups presented in the personas are as follows: 1. PhD Students: due to the nature of their degree and programme, PhD students are often on the search for research grants. 2. Administrative Staff: selected as they would be seeking grants on behalf of both students and staff within their department. 3. Librarians: given the changing nature of the role of the librarian, many librarians are moving towards the area of research and for this reason they were chosen. 4. Academic Staff: chosen as they are also constantly on the lookout for grants for their own research.
  • 22. 14 3.5. Usability Testing Usability testing was the primary method employed. Usability tests share five characteristics: 1. The chief goal is to improve the usability of a product/tool. 2. The participants represent actual users. 3. The participants perform real tasks. 4. The research team observes and records the participants’ actions. 5. The research team analyses the data collected, identifies real issues and makes appropriate recommendations. (Dumas & Redish, 1999, p.22) 3.5.1. Think Aloud Approach A thinking aloud approach was selected. Participants complete a set of predefined tasks/scenarios in which they are required to verbalize whatever comes to mind as they perform the task. This gives the research team an insight into the cognitive processes of the participants as they express their feelings, thoughts and perceptions. The think aloud approach has survived nineteen years as the number one usability tool and, according to Nielsen, it may be the single most valuable usability method; this is a testament to the longevity of the technique (Nielsen, 2012). Whilst disadvantages exist with the think aloud approach, these are outweighed by the advantages. Some advantages include: • Verbalizing thoughts enables researchers to comprehend how participants view the tool. • A vast amount of qualitative data can be collected from a small number of participants (Nielsen, 1994, p.195). • It is cost effective, as no special equipment is required. • It is robust: even with small prompts to participants, meaningful data is collected. • Participants avoid later rationalizations as they think aloud. (Nielsen, 2012)
  • 23. 15 Some disadvantages include: • Verbalizing thoughts may slow participants down. • Verbalizing all thoughts places participants in an unnatural environment. • Participants may be so concerned with how their opinions are perceived that they filter and alter their thoughts. (Nielsen, 2012) 3.6. Test Design The test design followed a standard usability test format: 1. Introduction 2. Consent Form 3. Reassurances 4. Pre-Test Questionnaire 5. Think Aloud Scenarios 6. Post-Test Questionnaire The research team divided into two teams of two in order to conduct the usability tests. Each team assigned a facilitator and an observer/note taker, and these roles were interchanged between the two conducting teams. Testing was completed in the participant’s office or in the Innovation Lab of the School of Information and Communication Studies, UCD, depending on the preference of the participant. The tests were conducted on a laptop of the participant’s choice: either a Mac or a Windows laptop. Because the location and operating system were chosen by the participant in each case, a screen recorder was not used. Comments and actions were recorded by one member of each team.
  • 24. 16 A consent form was completed by the participants indicating that they understand their role and involvement with the study and that they consented to the session being audio-recorded. (See Appendix 4.). 3.6.2. Pre-Test Questionnaire After the consent form and reassurances, participants were asked a total of five open ended questions. According to the Oxford Dictionary of Psychology (2015), an open ended question is framed in such a way as to encourage a full expression of an opinion rather than a constrained response. Participants can, therefore, respond to questions exactly as how they would like to answer them. The questionnaire was created to cater for the varying participants and their backgrounds (See Appendix 5.). The questionnaire included two pathways for those who answered ‘yes’ and ‘no’ in terms of grant finding experience. Questions were then established based on the participants grant finding experience. 3.6.3. Scenarios Six think aloud scenarios were created (See Figure 7. & See Appendix 6.). Participants first carried out a practice task/scenario in order to familiarise themselves with the process and the think aloud approach. They then carried out six further tasks that tested the key usability concerns, as identified by the heuristic evaluation (See Section 4.1 Heuristic Evaluation Findings).
  • 25. 17 Figure 7: Scenarios 3.6.4. Post Test Questionnaire The post-test questionnaire allows researchers to gather further impressions and information about the participants’ understanding of the website’s strengths and weaknesses (Dumas & Redish, 1999; Rubin & Chisnell, 2008). A post-test questionnaire of seven open ended questions was composed for this reason (See Appendix 7.).
  • 26. 18 3.6.5. Script A script was created by the team, based on a design provided by Steve Krug (2010), in order to create fair testing conditions in which each participant was subject to the exact same test, creating consistency and reliability across the testing teams. The script included an introductory section informing the participant of the purpose of the research and the process of the test, as well as reassuring the participant about their involvement (See Appendix 8.). 3.7. Team Review Following the creation of the usability test, the team met to review the test. A test protocol was then drawn up in order to avoid any errors (See Appendix 3.). Prior to the pilot review and to conducting the test with participants, the group carried out a dry run. A dry run is performed to check the efficiency, performance and stability of a particular product or piece of software, and was conducted here to identify any other possible failings that could affect our testing. 3.8. Pilot Review A pilot test was conducted with a contact of one of the researchers. The main objective of a pilot test is to ‘debug’ the equipment, materials, and procedures you will use for the test (Dumas & Redish, 1999, p.264). The pilot test illustrated that minor adjustments needed to be made to the script and that the test would take approximately forty minutes. 3.9. Participants The ideal number of participants for any usability test is five (Nielsen, 2012). The initial aim was to test 4-5 participants from each user group. Usually, 3-4
  • 27. 19 users per group can suffice because the user experience may overlap slightly between the two groups (Nielsen, 2012). Due to difficulty in recruitment, the user groups were imbalanced and consisted of two academic staff, four administrative staff, three librarians, five PhD students and one post-doctoral student. Participants were de-identified using anonymising codes, with one code per group: librarians were described using ‘L’, academics using ‘A’, administrative staff using ‘AD’, PhD students using ‘P’ and, finally, postdoctoral students using ‘PD’. 3.9.1. Recruitment Participants were recruited from the University College Dublin population using multistage and snowball sampling methods. Multistage sampling refers to sampling plans where the sampling is carried out in stages using smaller and smaller sampling units at each stage (ResearchGate). The first stage involved choosing an initial cluster group: in this case, UCD. The second stage involved using our personas to narrow the participant population down to sub-clusters that shared similar characteristics and belonged to the specific user groups. Our supervisor, Dr Judith Wusteman, individually emailed a limited number of UCD academic, administrative and librarian staff known to her from across a variety of UCD schools. There was no general "mailshot". A snowball sampling approach was also applied. In its simplest form, snowball sampling consists of identifying respondents who are then used to refer researchers on to other respondents (Atkinson & Flint, 2001). Because the testing took place outside term time, this was the most effective method for recruiting PhD students as participants. Once participants had been identified, their respective heads of school were contacted for permission.
  • 28. 20 All participants were then contacted via email and provided with an information leaflet (see Appendix 9. & Appendix 10.). The leaflet outlined the scope of the project and their role within the testing process. 3.10. Ethical Considerations Ethical exemption was granted by the university’s Human Research Ethics Committee (See Appendix 12 & 13). 3.11. Data Analysis The results from the heuristic evaluation provided the group with essential information about the usability and function of the tool, from which usability issues and a priori themes could be identified. The results of the heuristic evaluation were used in the development of a priori codes, which were then used, along with emergent codes, in the coding of the usability test transcripts. The a priori coding categories ultimately formed the themes for our data analysis. The team checked the reliability of the coding using an NVivo coding comparison query. Triangulation was employed to allow the data collection techniques to complement each other. The concept of triangulation refers to the combination of two or more research methods in a study for the purpose of cross verification (Bogdan & Biklen, 2006). Quantitative data from the heuristic evaluation and qualitative data from the usability tests were used for cross verification. This ensured that issues identified by the researchers were also recognized by the test participants, helping to eliminate bias and corroborate the issues identified. 3.11.1. Coding A priori and emergent codes were applied to the transcripts (See Appendix 14.). The transcripts were coded by two members of the research team. Both members coded the transcripts independently, using NVivo 11, and the coding results were then merged.
  • 29. 21 A hierarchy chart was created (see Appendix 15.) illustrating the most common codes extracted and the dominant themes. Intercoder consistency, or reliability, is the extent to which independent coders evaluate a characteristic of data and reach the same conclusion (Lombard, 2010). High levels of disagreement among coders can suggest weaknesses in research method, categories and coder training (Kolbe & Burnett, 1991, p.248). This was avoided. Using NVivo, the coding was merged and a coding comparison query was then run (See Appendix 16.). All coding reached a level of 86.16% agreement or higher, ensuring consistency and reliability between coders. 3.12. Limitations Two major limitations emerged: time constraints and the absence of a screen recorder. Test design and recruitment had to be completed within a short time span. As each member of the team had other important commitments, the short time span made it difficult at times to complete tasks on schedule. Recruitment can be a challenging aspect of usability testing, and the usability testing also had to be completed during the month of June, outside term time for the university, which contributed to recruitment issues. A second limitation was the absence of a screen recorder. As tests were conducted in the location of the participant’s choice and on the computer/laptop of their choice, it was not practical to install a screen recorder on each machine used. However, the team recognizes the contribution a screen recorder could have made to the results in terms of capturing mouse movements and usage patterns. 3.13. Reliability In order to ensure reliable results (Hughes, 2011) and provide valid recommendations, the team created a test protocol to be adhered to with each test. The team developed a script in order to produce fair test conditions, as well as a standard coding scheme that both coders abided by.
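As an illustration of what the agreement figure represents, simple percentage agreement can be computed as in the sketch below. This is only the basic idea, assuming one code per transcript segment per coder; NVivo's coding comparison query works at the level of coded content per source and code, so its output is more fine-grained.

    // Minimal sketch of simple percentage agreement between two coders.
    function percentAgreement(coderA, coderB) {
      var matches = 0;
      for (var i = 0; i < coderA.length; i++) {
        if (coderA[i] === coderB[i]) { matches++; }
      }
      return (matches / coderA.length) * 100;
    }
    // percentAgreement(['Search', 'Homepage', 'Search'], ['Search', 'Layout', 'Search']) -> 66.7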
  • 30. 22 4. Results 4.1. Heuristic Evaluation Results The heuristic data was analysed using Microsoft Excel. Each section of the evaluation received a percentage grade out of a possible 100. As the team members conducted the heuristic evaluation independently, the average result for each section was calculated. A score above 50% indicated that that area of the website was average, with above 75% indicating above-average compliance with the guidelines. The results indicated that there were at least three areas of major concern: • Search • Forms & Data Entry • Help, Feedback & Error Tolerance. However, despite identifying three principal areas of concern, no area scored exceptionally highly, with only four areas scoring at or above 50%. Each area had various usability issues and concerns. The results are as follows:
Search: 21%
Forms & Data Entry: 41%
Help, Feedback & Error Tolerance: 44%
Trust & Credibility: 47%
Home Page: 48%
Task Orientation: 50%
Writing & Content Quality: 61%
Navigation & IA: 65%
Page Layout & Visual Design: 72%
A radar chart was created to illustrate these results more clearly (See Figure 8.).
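For transparency, the percentages can be read as a normalisation of the -1/0/+1 ratings, where full compliance maps to 100%, full non-compliance to 0% and partial compliance to 50%. The sketch below shows the calculation under that assumption; the team's actual Excel formula may differ in detail.

    // Convert a section's -1 / 0 / +1 ratings into a 0-100% compliance score,
    // then average the scores produced by the four independent evaluators.
    function sectionScore(ratings) {
      var sum = ratings.reduce(function (a, b) { return a + b; }, 0);
      return ((sum + ratings.length) / (2 * ratings.length)) * 100;   // all +1 -> 100, all 0 -> 50, all -1 -> 0
    }
    function averageAcrossEvaluators(ratingsPerEvaluator) {
      var scores = ratingsPerEvaluator.map(sectionScore);
      return scores.reduce(function (a, b) { return a + b; }, 0) / scores.length;
    }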
  • 31. 23 Figure 8. Heuristic Results Radar Chart 4.2. Pre-Test questionnaire Findings The majority of participants identified as having previous experience of grant- finding. Research Professional was the most commonly used grant-finding website. Participants emphasized the importance of having an intuitive, easy to use and professional interface for a grant finding website. Participants also mentioned the importance of an advanced search function. None of the participants had used or heard about GrantFinder prior to the testing (See Appendix 18.)
  • 32. 24 4.3.Usability Test Findings 4.3.1. Search Search is one of the most important aspects of GrantFinder. It is primarily a search engine tool, albeit with a very specific function. The website has no internal search function for finding information within the site itself, so the only aspect of searching to be considered is the search page, rather than also looking at a function that allows users to search the whole website. Given that our test participants were, for the most part, skilled at finding either grants or resources, a number of them testified to the importance of the search interface, with one stating that search functionality should “…be easy to broaden and narrow.” (AD2) The predetermined specialities allow users to choose from five different disciplines in order to filter their searches. Several of the users praised this aspect of the search page. When asked what the positive aspects of the site, one participant stated, “What was helpful was the specialities” (A2). However, while this was a helpful function, other users felt that the choice was too broad and that it did little to aid them with their queries, for example: “I would like a little more breakdown to categories and subcategories.” (L1) The use of standard metadata should allow users to broaden and narrow the searches. However, the bearing of the scores on these standard words is not explained at any point on the website’s tutorial page, or in the FAQs. This caused some difficulty as it made the search interface difficult for the user. ‘Either show the scores’ meaning or don’t show them at all…it does add an element of confusion that I would deem to be unnecessary.’ (AD2) 4.3.2. Forms and Data Entry The participants had difficulty understanding the reason for the standard metadata used by GrantFinder. Given that most search engines (grant seeking sites included)
  • 33. 25 use free-text searching, this style of search engine is alien to most. So, too, is the score attached to all of the standard words. This created some confusion in the testing, with some users rejecting it outright. “I would not assign any score as I don’t understand what they mean.” (AD3) Similarly, many of the participants felt that the standard words and specialities were too broad and did not allow them to properly search for grant calls. Several mentioned that the standard metadata provided on the search page were common to a significant number of grant calls, and that this would be reflected in the number of results given. “These words could be for any grant proposal, any good grant proposal that universities would want to get involved in.” (A2) Another issue that arose in relation to data entry was that the standard words are not clickable and must be entered manually, along with the score. Not only this, but there is a scrolling feature for the input of the scores, but this increases in increments of one. This is not feasible given that some words, such as “deadline” and “research grant” have scores in the hundreds and even thousands. In this instance the score must be entered manually. This is time consuming for the user and does not aid them in facilitating their searches. “I thought looking at the tutorial that I could click the standard words but there’s no link attached…” (AD4) Relating to clickable areas and data entry was the placement of the plus buttons, highlighted in Figure.9. It was unclear to some users what these buttons related to. This was most apparent in the case of the second button, which allows users to add excluded words. “It is confusing whether this add button, this plus sign, relates to mandatory words or excluded words. And the fact that it is closer to excluded words confuses me more.” (P2)
  • 34. 26 Figure. 9: Add Button placement on current search page. 4.3.3. Help, FAQ and Error Tolerance Tutorial Page Figure 10 shows part of the current tutorial page for GrantFinder. The tutorial page caused confusion among most participants. Much of this confusion stemmed from the explanation of the following terms: standard words, mandatory words, job ID and score. Figure. 10. Current Tutorial Page: Information Overload
  • 35. 27 Users also expressed concern over the quality and quantity of the writing on the tutorial page, which could potentially lead to information overload. “And tutorial there is just too much text. And I would probably never understand this, if I was to read it 100 times. So instead of having the sections in bold. I would shorten the sentences so that there is 15 words max, or 10 words max.” (P2) FAQ When clicking through to the FAQ from the menu header, the FAQ page is empty (see Figure 11.). This concerned participants, as it did not give them any information about GrantFinder. “There’s no FAQ, so I assume this is a very early web site or something that is being for experiment or first iteration and it could be improved.” (L1) Figure 11. Current FAQ page: Empty Error Tolerance The final piece of information necessary to carry out a search through GrantFinder is a valid email address. The results of the search query are accessed with the user's email address and the Job ID received when the query is complete (See Figure 12.). Although some participants did not enter their email address, the software still navigated to the job confirmation page. In addition to this, when checking the results for the respective Job ID, there is no email verification. The page will display the results upon any email address submission. No
  • 36. 28 error notification is used for the entry of a wrong email address. The results will still be displayed with the wrong combination of information illustrating a lack of security. Figure 12. Current Results Page – Email Validation 4.3.4. Homepage Findings indicated that there are several problems with the homepage. Users found the homepage to be empty and lacking information. They observed that the supporting image present on the homepage did not relate directly to the service GrantFinder provides, and does not illustrate anything of use to the user. “Doesn’t give too much information. The homepage is almost blank. There is not much information. I believe it would be better if some more information could be added to it.” (P1)
  • 37. 29 Figure 13. Homepage Image: Irrelevant Image 4.3.5. Page Layout and Visual Design A key difficulty encountered by users was understanding the results from GrantFinder. Participants expected a small number of relevant potential funding opportunities and reported that they did not understand the complexities of cluster section. They also found the layout of the results page to be confusing. The text does not correctly wrap making parts of the results page difficult for the reader to read and use. “What’s quite pertinent is that the text does not wrap around. The layout is incorrect and that needs to be immediately assessed. It should be readable.” (AD4) Figure 14: Results Page – Visual Design Concerns were also raised about the layout and design of the tutorial page. The layout is unclear, with several users commenting negatively on the visual design and layout of the page. 4.3.6. Writing & Content Quality Findings revealed that a mix of languages (English and Spanish) are used particularly in the standard words of the search page. The standard words are not in any order or sequence. The standard words also include a number of repeated words with different scores which did not make sense to users.
  • 38. 30 “I’m not really sure what the difference between ‘elgib’ and ‘eligibility’ and why they are two different words. And so similarly, there is another word that is ‘eligib.’ And as well there are two similar words like applicant and applicant. What’s the difference between the two?” (P1) Figure 15. Search Page – Standard Words A number of spelling mistakes and typos were recognized by the participants, indicating that the software lacks proof-reading. Figure 16. Tutorial Page: Spelling Mistakes 4.3.7. Users' Perception Users found the website very complicated. This was mainly due to the unclear instructions and scrambled structure. Many users were also frustrated by the lack of explanation provided. The website required the users to go through a three stage data entry structure for grant finding which was labelled as ‘old fashioned’ by most of the
  • 39. 31 participants. The users particularly struggled with the concept of applying a score to their mandatory words. They were unsure as to how it would affect their search. This confusion soon led to frustration for many participants. It was noted that users also repeatedly complained about the lack of clickable links or drop-down menus. 4.4. Post-Test Questionnaire Findings Reviewing the post-test questionnaires confirmed that the majority of the participants found the website to be complex and unfriendly (See Appendix 19.). Participants complained that searching for grants based on the poor search criteria was difficult and that the lack of real-time results made it less efficient. Again, participants found the use of scores difficult to comprehend because there was no indication of the units used: the scores could mean percentages, numbers of occurrences etc., but this was not clear. Participants also recommended a simpler, more user-friendly interface, as well as the use of a single language throughout the tool. Most participants stated that they would not use GrantFinder again, and most said they would be more likely to use an alternative website in order to find grants in the future, as GrantFinder did not meet their expectations.
  • 40. 32 Function: Search
Heuristic Evaluation Results: • Lack of real-time results. • Difficult to know if a search was effective.
Usability Test Results: • Users acknowledged the lack of real-time results. • Users found the search function difficult to use and were confused as to whether their search was effective or not.
Function: Forms & Data Entry
Heuristic Evaluation Results: • Users must enter information using basic text entry fields. • Speciality drop-down menu restrictive. • Lack of use of drop-down menus, check boxes and clickable areas. • No clear distinction between required and optional entry fields.
Usability Test Results: • Entry of information outdated. • Speciality fields narrow. • Preference for a more filtered search containing clickable areas and filtered search options. • Users unsure of the required data inputs.
Function: Help, Feedback & Error Tolerance
Heuristic Evaluation Results: • Tutorial page may cause confusion. • Tutorial page contains information overload. • FAQ page contains no information. • Options between field inputs are not obvious to users.
Usability Test Results: • Users had to reread the tutorial page several times. • Users experienced information overload. • Users were frustrated by the lack of content on the FAQ page. • Users were confused by the mandatory word input and the use of standard words.
Function: Homepage
Heuristic Evaluation Results: • Homepage does contain a blurb but it appears irrelevant. • Irrelevant image included.
Usability Test Results: • Homepage contains very little information about the site and what it does. • An unrelated image is included.
Function: Page Layout & Visual Design
Heuristic Evaluation Results: • Simple user interface. • Little use of images. • Concerns raised about the layout of the results page.
Usability Test Results: • Users admired the simple design. • Design needs to include more images and screenshots, particularly on the tutorial page. • Users found the layout of the results page confusing.
Function: Task Orientation
Heuristic Evaluation Results: • Users have to re-enter their email address with each search.
Usability Test Results: • Users commented negatively on the re-entry of the email address.
  • 41. 33 5. Discussion 5.1. Usability of GrantFinder The usability of a website can be described as the effectiveness and efficiency of the website (ISO 9241-11: Guidance on Usability, 1998). The key principles on which the usability of a website depends are availability, accessibility, learnability, efficiency, memorability, clarity and ‘satisfaction’ (Idler, 2013). These key principles are of the utmost importance when discussing GrantFinder’s usability, particularly in terms of the website’s layout, navigation, interface, written content and functionality. The findings from this report identify the key issues involving GrantFinder and its user accessibility. Sonderegger et al. (2016) define the effectiveness of a website as the degree to which a task is successfully carried out, i.e. the task completion rate, accuracy, quality of outcome and task completion time. GrantFinder seems to adhere to most of these qualities except in terms of the time taken to receive a search result: the website takes a few hours to produce results, which proved a problem for many of the participants. Participants commented that there are many other websites which provide real-time results, e.g. Research Professional, and many users said that if they have to wait for results, they would probably search elsewhere. Shneiderman (2005) argues that the design of a website should be balanced with its functionality. High complexity and clutter of information, such as on the tutorial page that users described as “text-heavy”, can lead to user frustration. Shneiderman stresses the need for a site to really understand its users in order to achieve an adequate balance. Decluttering the tutorial page, providing information in the FAQs and correcting all the typos would create a more user-friendly interface. Jain (2013) outlines the important factors to consider when building a website and places particular emphasis on the home page. The author suggests that the images used in a website should represent what the website offers. In this regard, GrantFinder does not achieve its objective, as the image shown on the homepage is not relevant to what the website offers.
  • 42. 34 “An impractical or a confused website can break a company’s success” (Wahmarticles, n.d.). It is important not only to make a resourceful website, but one that is easily navigated, because it is the user for whom the website has been designed. GrantFinder can be used; however, it may not be ready for general use. Several participants who were confused by the website’s functionality questioned what stage of development GrantFinder was at and whether it was ready for use. Questions such as these imply that the user is not satisfied with the website currently presented to them. 5.2. Future Directions The future direction of GrantFinder may be unclear due to the amount of adjustment needed to improve the site and its current design. A possible solution would be to begin a complete rework of the software and then move forward with qualitative analysis. 5.2.1. Hotjar A future direction that GrantFinder can take, now or later, is qualitative analysis of user behaviour. User online behaviour can be revealed via tools such as Hotjar, a tool that uses a combination of analysis and feedback. GrantFinder analysts would be able to improve the user experience by observing and measuring user behaviour. The installation process of Hotjar within GrantFinder is straightforward and simply requires registration. Once registration is successful, a script is generated and the developer needs to plug this script into GrantFinder’s main template page (https://www.hotjar.com/). Heatmap Heatmaps help in understanding user needs by visually representing users’ clicks, scrolls and taps, revealing user motivation and intent. Figure 17. Hotjar: Heatmap
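For illustration, the generated script usually takes the form of a small asynchronous loader placed inside a script element in the head of the main template. The snippet below shows the general shape of that loader; the hjid value is a placeholder, and the exact code should always be copied from the Hotjar dashboard rather than from this sketch.

    // Illustrative Hotjar loader - copy the real snippet from the Hotjar dashboard.
    (function (h, o, t, j, a, r) {
      h.hj = h.hj || function () { (h.hj.q = h.hj.q || []).push(arguments); };
      h._hjSettings = { hjid: 1234567, hjsv: 6 };          // hjid: placeholder site ID from registration
      a = o.getElementsByTagName('head')[0];
      r = o.createElement('script'); r.async = 1;
      r.src = t + h._hjSettings.hjid + j + h._hjSettings.hjsv;
      a.appendChild(r);
    })(window, document, 'https://static.hotjar.com/c/hotjar-', '.js?sv=');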
  • 43. 35 Recording The recording tool stores information from visitor sessions about their interactions with GrantFinder. Via a web socket connection, the script sends information such as keystrokes and mouse clicks to the Hotjar server, where the session recordings are stored so that user actions can be reviewed. Figure 18. Hotjar: Recording Funnel With Funnel, GrantFinder analysts will be able to identify the pages from which users are leaving the website and will be able to trace the bounce rate of GrantFinder. Figure 19. Hotjar: Funnel Form Analytics Form Analytics will help to improve the search page submission form by identifying the fields that take the longest to complete from the users’ perspective. It will also help to reduce the number of visitors who abandon the form.
  • 44. 36 Figure 20. Hotjar: Form Analytics 5.2.2. URL/Host When users try to search for grants through search engines such as Google, it is unlikely that they will reach the URL http://bio-hpc.ucam.edu/GrantFinder/. The site is hosted through BIO HPC and Universidad Católica San Antonio de Murcia. To access the tool, users must go through the university to gain access. A direction in which GrantFinder could move in the future would be to move the software to an independent site, making it easier for users to access. To encourage usage of GrantFinder, the URL could then be restructured using a different domain name.
  • 45. 37 6. Recommendations 6.1. Forms and Data Entry 6.1.1. Search As users found the use of five mandatory words limiting, it is recommended that the developer add an unlimited number of search columns. This will allow the user to narrow down their search and increase the specificity of a search. The developer could also incorporate free text searching, which would give the users freedom to specify what they require in the search. 6.1.2. Score According to the participants, the highest raised concern was the score that was given to the standard input words. The participants had no idea how to interpret the scores, or how to incorporate them into the search. To overcome this problem, the site could provide a detailed explanation of what a score means, how it is used, and how it relates to the results that the user obtains. If the developer of the website does not wish to reveal the workings of the search engine, they should remove the score section from the website. 6.1.3. Standard Words The participants felt that the standard words were broad in their search criteria and did not give specific search results. The developer could incorporate speciality-specific words and a free text search option through which users could use specific words that are of relevance (See Figure 21.). The standard words are listed as a table with corresponding scores in the search page; many participants confused these words as links. This problem can be fixed by providing these words as a drop down menu or clickable links that would be entered automatically into the mandatory words search box of the user’s choice.
  • 46. 38 Figure 21. Standard Words Recommendation In Figure 21, the user is provided with all the specialities, each with a radio button option; as soon as the user chooses one option, all other options are disabled. Using the drop-down menu, users can see the list of predefined words given for an individual speciality and can choose accordingly. Taking these recommendations into account, the wireframe shown in Figure 22 has been developed to indicate the suggested revisions to the search page. Figure 22. Search Page Wireframe
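A small amount of client-side scripting would be enough to make the standard words clickable. The jQuery sketch below assumes hypothetical markup in which each standard word is rendered with a class of 'standard-word' (carrying a data-word attribute) and the five mandatory-word inputs share a class of 'mandatory-word'; the actual class names on the site will differ.

    // Clicking a standard word fills the first empty mandatory-word field (hypothetical selectors).
    $('.standard-word').on('click', function () {
      var word = $(this).data('word');
      var emptyField = $('input.mandatory-word').filter(function () {
        return $.trim(this.value) === '';
      }).first();
      if (emptyField.length) {
        emptyField.val(word).trigger('change');
      }
    });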
  • 47. 39 6.2. Functionality 6.2.1. Real-Time Results It is recommended that users receive results as soon as they click the search button. If this is not yet achievable, the language of the message should be changed as it is casual and not appropriate for the software. Figure 23. Current Job Submission Message 6.2.2. Personalization/ User Profile To optimize the ease of use and reduce the user’s workload, users should be able to create their own profiles. Users should be able to save their essential details such as email address, previous searches, previous results, favourite keywords and grants. They should be able to save their search results and share these via email and other social media. In this way, the user does not have to enter his/her email address for each search query and would be aware of their own search history. Figure 24 is an example of the entry point for users to begin to create their own account.
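Even before full user accounts are implemented, a lightweight version of this behaviour could be provided in the browser. The sketch below, which assumes hypothetical element IDs ('email' and 'search-form'), remembers the user's email address and their most recent submissions in localStorage so that they do not have to be re-typed for every search.

    // Remember the email address and recent job submissions locally (hypothetical element IDs).
    var emailField = document.getElementById('email');
    var savedEmail = localStorage.getItem('grantfinder.email');
    if (savedEmail) { emailField.value = savedEmail; }

    document.getElementById('search-form').addEventListener('submit', function () {
      localStorage.setItem('grantfinder.email', emailField.value);
      var history = JSON.parse(localStorage.getItem('grantfinder.history') || '[]');
      history.unshift({ submittedAt: new Date().toISOString(), email: emailField.value });
      localStorage.setItem('grantfinder.history', JSON.stringify(history.slice(0, 10)));  // keep the last 10
    });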
  • 48. 40 Figure 24. Example Sign Up Form for a Personal Account 6.2.3. Main Grants on Homepage User location can easily be identified from the IP address. It is recommended that, using this IP address, the homepage highlights the main grants published in the user’s location/country. These grants can be highlighted on the homepage in a news feed or carousel format. Implementing this will present the user with grants without their having to search first. It also gives users an idea of what type of grant calls they may receive in a search result. Figure 25 is an example of the main grants displayed on the homepage.
  • 49. 41 Figure 25. Main Grants on Homepage 6.3. Help, Feedback & Error Tolerance 6.3.1. Tutorial The tutorial page needs to reduce the number of words it uses to explain the search process. The process should be explained in three to five distinct steps. The steps should be presented in a logical order. Sentences should be clear and concise. The use of highlighting and acknowledging keywords is encouraged. Hyperlinks should be used where possible. The information provided in this section should be reduced to the vital information required only. The results access section of the tutorial page should also be displayed in a user friendly way. Bullet points are advised. Screenshots and images of the website are encouraged where appropriate. It is recommended that any images included should be of high resolution for users to clearly see and interpret. A video tutorial should be included, in which a user performs a sample search. Each step of the search process should be clearly defined. The video should be no more than 45-90 seconds. As with all pages of GrantFinder, the Tutorial page should be proof-read.
  • 50. 42 The following wireframe shows the recommended format for the Tutorial page, based on the feedback from the participants (See Figure 26.). Figure 26. Tutorial Page Wireframe 6.3.2. Email Validation/Error Notification An issue detected during the testing was that users could complete a search without entering their email address and a successful job submission message was still displayed. Users may be misled by this message and assume that they have completed a search correctly. It is recommended that the successful job submission page does not proceed if an email address has not been entered. Instead, an error notification or reminder should appear on the mandatory field, i.e. the email address, which needs to be completed before proceeding (See Figure 27.). The website needs to ensure form validation; it is recommended that inline validation is used here.
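A minimal client-side version of this check is sketched below, assuming hypothetical IDs for the form, the email field and an inline error element; server-side validation would still be needed in addition.

    // Block submission and show an inline message when the email address is missing or malformed.
    $('#search-form').on('submit', function (event) {
      var email = $.trim($('#email').val());
      var looksValid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
      $('#email-error').text(looksValid ? '' : 'Please enter a valid email address before submitting.');
      if (!looksValid) {
        event.preventDefault();   // do not proceed to the job submission page
      }
    });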
  • 51. 43 Figure 27. Inline Form Error Notifications A similar issue may occur on the results page: any Job ID can be entered with any email address. This breaches confidentiality as well as user trust and site credibility. It is important that user information and user searches remain confidential and only available to the correct user. It is suggested that this section also contains a password to improve security. If the incorrect combination is entered, an appropriate error message should be displayed similar to when an incorrect email address and password are entered (See Figure. 28.).
  • 52. 44 Figure 28. Combination Error: a similar approach to Google’s should be used on the Results page. 6.3.3. FAQ Section Currently, the FAQ page remains completely blank. In order to reduce user confusion and increase website use, it is recommended that the FAQ section be completed with relevant questions and answers. The FAQ page should avoid the use of jargon and should be written in simple, user-friendly language that potential users can understand. A suggested layout is illustrated in Figure 29. It is recommended that GrantFinder use keywords and phrases relevant to the website in the FAQ section. This can help to boost the site’s SEO: Google and other search engines can then understand what type of business or tool the website is and rank the website accordingly for relevant searches.
  • 53. 45 Figure 29: Suggested FAQ 6.4. Writing & Content Quality 6.4.1. Language GrantFinder consists of a mix of languages across the site but particularly in the standard words section of the search page. There are duplications of standard words in more than one language which should be standardized to English. Having a dropdown menu for the purpose of selecting which language the user wishes to continue in before completing a search query is recommended (See Figure 21). User default language would be English 6.4.2. Ordering of Lists The standard words and the speciality field should be arranged in alphabetical order (See Figure 30 & 31). This will make it easier for the user to comprehend and access what they are seeking more efficiently.
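Where the standard word list is generated dynamically, this ordering could also be applied in the browser. The fragment below assumes, purely for illustration, that the standard words are rendered as list items inside a container with the ID 'standard-words'; the actual markup on the site will differ.

    // Sort the standard-word list items alphabetically, case-insensitively (hypothetical container ID).
    var items = $('#standard-words li').get();
    items.sort(function (a, b) {
      return $(a).text().localeCompare($(b).text(), 'en', { sensitivity: 'base' });
    });
    $('#standard-words').append(items);   // re-append in sorted order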
  • 54. 46 Figure 30. Alphabetically Ordered Clickable Standard Words Figure 31. Speciality: Alphabetically Ordered Dropdown Menu 6.4.3. Glossary Users should be allowed to understand the meaning or definition of keywords used by GrantFinder. Providing a keyword glossary on the tutorial page can do this (See Figure 32.). Figure 32. Glossary of Keywords Example 6.5. Layout and Design 6.5.1. Home Page We recommend that the developers design a logo and display it on the homepage. This will allow users to automatically register that they have found the service that they are looking for. It is also important to include information on the tool itself for new users.
  • 55. 47 This measure will give users an opportunity to gain some insight into how the tool works and whether or not it is the right service for them. Images not directly relating to the software should be removed. Findings from the testing indicated that it would be useful to incorporate some aspects of the tutorial page into the home page. As such, a video tutorial is included on the proposed homepage design, and links to the tutorial page and to a sample search page are included in the same area of the homepage. This will allow users to practise engaging with the software. As previously mentioned, a news feed or carousel of familiar funding bodies should be present, so that users will see a familiar name and be more likely to trust GrantFinder. Recommendations for alterations to the layout of the home page are illustrated in Figure 33. Figure 33. Home Page Wireframe
  • 56. 48 6.5.2. Results Page As with the search page, simple but important alterations are required in order to improve the appearance and usability of the results page. In Figure 34, the display of actual word clusters has been removed. Participants in the testing regarded this as being confusing and cumbersome. Instead, what is shown on the results page is the title of the grant, the funding body, and a hyperlink that links the user directly to the webpage of the result. The number of word clusters and the score still appear, but are clearly labelled so that the user can identify them. Figure 34. Results Page Wireframe
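The simplified layout could be produced with a small rendering helper along the lines of the sketch below; the field names are assumptions about the data behind each result rather than GrantFinder's actual response format.

    // Render one result as title, funding body and link, with clearly labelled metrics.
    function renderResult(result) {
      // result: { title, funder, url, clusters, score } - assumed field names
      var item = $('<li>');
      $('<a>').attr('href', result.url).text(result.title).appendTo(item);
      $('<span>').addClass('funder').text(result.funder).appendTo(item);
      $('<span>').addClass('meta')
        .text('Word clusters found: ' + result.clusters + ' | Relevance score: ' + result.score)
        .appendTo(item);
      return item;
    }
    // Usage: $('#results-list').append(results.map(renderResult));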
  • 57. 49 7. Conclusion The Capstone project was completed through the work of four master’s students from the School of Information and Communication Studies over a nine-month period (See Appendix 20.). The project’s projected aims of identifying the most prominent usability issues with GrantFinder, developing a series of recommendations and ultimately evaluating the usability of GrantFinder itself were completed. Usability issues were clearly identified and outlined in this report. Ultimately, the data collected through these methods enabled the team to evaluate the usability of GrantFinder. Through the triangulation of data collection techniques, the team were able to make appropriate and valid recommendations that will improve the use and usability of the site. These recommendations will be communicated to the developers of the site.
References
Alshamari, M., & Mayhew, P. (2009). Technical review: Current issues of usability testing. IETE Technical Review, 26(6), 402-406. doi:10.4103/0256-4602.57825
Atkinson, R., & Flint, J. (2001). Accessing hidden and hard-to-reach populations: Snowball research strategies. Social Research Update, 33. Retrieved from http://sru.soc.surrey.ac.uk/SRU33.pdf
Becker, D. (2011). Usability testing on a shoestring: Test-driving your website. Wilton: Online Inc.
Beall, J. (2007). Search fatigue: Finding a cure for the database blues. American Libraries, 38(7), 46-50.
Blake, R. (1989). Integrating quantitative and qualitative methods in family research. Families, Systems and Health, 7, 411-427.
Bogdan, R. C., & Biklen, S. K. (2006). Qualitative research in education: An introduction to theory and methods. Allyn & Bacon.
Caracelli, V. J., & Greene, J. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15(2), 195-207.
Duarte, E., Oliveira, E. J., Côgo, F., & Pereira, R. (2015). Dico: A conceptual search model to support the design and evaluation of advanced search features for exploratory search. Human Computer Interaction, 9299, 87-104.
Dumas, J. S., & Redish, J. (1999). A practical guide to usability testing (Rev. ed.). Exeter: Intellect.
Elliott, A., Hearst, M., English, J., Sinha, R., Swearingen, K., & Yee, K. (2002). Finding the flow of web site search. Communications of the ACM, 45(9), 42-49. Retrieved from http://bailando.sims.berkeley.edu/papers/cacm02-final.html
Emiko, D. (2012). Eight crazy Kickstarter campaigns that should never have succeeded. The Guardian. Retrieved from https://www.theguardian.com/technology/2014/jul/08/eight-crazy-kickstarter-campaigns-that-should-never-have-succeeded
Guarte, J. M., & Barrios, E. B. (2006). Estimation under purposive sampling. Communications in Statistics - Simulation and Computation, 35(2), 277-284. doi:10.1080/03610910600591610
Haney, W., Russell, M., Gulek, C., & Fierros, E. (1998, January-February). Drawing on education: Using student drawings to promote middle school improvement. Schools in the Middle, 7(3), 38-43.
Hearther, N. (2009). Grantsmanship: Information resources to help researchers get funding. University of Minnesota.
Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64(2), 79-102. doi:10.1016/j.ijhcs.2005.06.002
Hughes, M. (2011). Reliability and dependability in usability testing. Retrieved from http://www.uxmatters.com/mt/archives/2011/06/reliability-and-dependability-in-usability-testing.php
Idler, S. (2016, August 16). 5 key principles of good website usability. Retrieved from https://blog.crazyegg.com/2013/03/26/principles-website-usability
ISO/IEC. (1998). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability.
Jain, M. (2016, August 16). Important factors to consider when building a website. Retrieved from http://blog.clicktraffic.com/building-website-factors/
Kolbe, R. H., & Burnett, M. S. (1991). Content-analysis research: An examination of applications with directives for improving research reliability and objectivity. Journal of Consumer Research, 18, 243-250.
Krug, S. (2005). Don't make me think: A common sense approach to web usability (2nd ed.). Indianapolis, IN: New Riders.
Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. Berkeley, CA: New Riders.
Lewis, C. H. (1982). Using the "thinking aloud" method in cognitive interface design (Technical report). IBM.
Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2010). Intercoder reliability.
Malchanov, D. (2014). 22 principles of good website navigation and usability. Retrieved from http://swimbi.com/blog/22-principles-of-good-web-navigation-and-maximum-usability/
Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Proceedings of the ACM CHI'90, 249-256.
Nielsen, J. (1993). 10 usability heuristics for user interface design.
Nielsen, J. (1994). Usability engineering. Elsevier.
Nielsen, J. (1995). 10 usability heuristics for user interface design. Retrieved from http://www.nngroup.com/articles/ten-usability-heuristics/
Nielsen, J. (2012). How many test users in a usability study? Retrieved from https://www.nngroup.com/articles/how-many-test-users/
Nielsen, J. (2012). Thinking aloud: The #1 usability tool. Retrieved from https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/
Porter, R. (2009). Can we talk? Contacting grant program officers. Research Management Review, 17(1), 2-5.
Pena-Garcia, J., Den-Haan, H., Caballero, A., Imbernon, B., Ceron-Carrasco, J. P., Vicente-Contreras, A., & Perez-Sanchez, H. (2016). GrantFinder, a web based tool for the search of research grant calls.
Ray, D. S., & Ray, E. J. (1999). Matching internet resources to information needs: An approach to improving internet search results. Technical Communication, 46(4), 569-574.
Rossman, G., & Wilson, B. (1991). Numbers and words revisited: Being "shamelessly eclectic." Evaluation Review, 9(5), 627-643.
Rubin, J., & Chisnell, D. (2008). Handbook of usability testing: How to plan, design and conduct effective tests (2nd ed.). Retrieved from http://www.ac4d.com/classes/301/HandbookofUsabilityTestingHowtoPlanDesignandCondChapter8PrepareTestMateri.pdf
Shneiderman, B. (2005). Designing the user interface (4th ed.). New York: Pearson Addison-Wesley.
Sonderegger, A., Schmutz, S., & Sauer, J. (2016). The influence of age in usability testing. Applied Ergonomics, 52, 291-300. doi:10.1016/j.apergo.2015.06.012
Stemler, S. (2001). An overview of content analysis. Practical Assessment, Research & Evaluation, 7(17). Retrieved from http://PAREonline.net/getvn.asp?v=7&n=17
Spencer, D. (2010). A practical guide to information architecture. Penarth: Five Simple Steps.
Travis, D. (2014). 247 web usability guidelines. Retrieved from http://www.userfocus.co.uk/resources/guidelines.html
U.S. Department of Health & Human Services. (2016, July 11). Personas. Retrieved from https://www.usability.gov/how-to-and-tools/methods/personas.html
Usability Architects. (2016, August 15). The components of usability. Retrieved from http://www.usability-architects.com/compnts.htm
Wahm Articles. (2016, August 15). The importance of usability in website design. Retrieved from http://www.wahm.com/articles/the-importance-of-usability-in-website-design.html
Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA: Sage.
Open-ended question. (2015). (4th ed.). Oxford University Press.
W3C Web Accessibility Initiative. (2016, August 16). Accessibility evaluation resources. Retrieved from https://www.w3.org/standards/webdesign/accessibility
Multistage sampling (Ch. 13). ResearchGate.
Appendices
Appendix 1: Heuristic Evaluation Checklist Example: Trust & Credibility
Appendix 2: Personas
Example 1. Librarian Persona
Example 2. PhD Persona
Appendix 3: Test Protocol
Testing protocol for GrantFinder Usability Testing
This document contains the testing protocol for the usability testing of GrantFinder.
Aims of testing:
• Discovering usability issues faced by participants
• Identifying possible solutions to issues discovered.
The GrantFinder test aims to recruit 15 participants from the following groups: PhD students, postdoctoral students, and administrative and academic staff from University College Dublin. The usability testing of GrantFinder will take place within a two-week period, with each individual test lasting no more than one hour.
Test Requirements:
1. Material needed: • Consent Form • Notebook • Pens • Printed version of six scenarios • Laptops • Audio Recorder
2. Two team members are required for each test:
• Test Facilitator: One team member will be the test facilitator. They will facilitate the test and read the script as required.
• Note Taker: The second team member will take notes during the testing. They will take care of all the external material used during the test, such as the audio recorder. They will observe the mouse movements of the participant.
Appendix 4: Consent Form
GrantFinder Software Usability Test
I _____________ (Print Name) understand my role as a participant for the usability testing of the GrantFinder web software. I have been given the opportunity to ask questions in relation to the research methods, as well as my role as a research participant. Thus, I hereby agree to participate in a usability test that will be audio recorded. I understand that these subsequent recordings will be listened to by the members of the research team and their supervisor. I understand and give consent that my comments and interactions will be used in the final research report and subsequent publications using anonymising codes. I am aware that interview information will remain confidential at all times and that I am entitled to withdraw from the research process at any stage.
Signed: ________________________ Date: ________________________
Appendix 5: Pre-Test Questionnaire
PRE-TEST QUESTIONNAIRE
1. Do you have any experience in seeking or applying for grants?
IF ANSWER YES:
• How frequently do you seek or apply for grants?
• Do you use any particular websites?
• What aspects of those websites do you particularly like?
IF ANSWER NO:
• In terms of discovering resources, how skilled would you describe yourself?
• What websites or search tools would you use to discover resources?
• What do you consider to be the important aspects of a search engine tool?
2. Have you ever used GrantFinder before? If yes, was your search successful?
Appendix 6: Scenarios
Scenario: Tutorial
Task: You are unsure how to perform a search using GrantFinder. Where would you find more information about how to use this site?
Steps: 1. Start on homepage: http://193.147.26.44/GrantFinder/ 2. Click on Tutorials 3. Scroll down to ‘3. Results Access’
Assumption: The participant will see the word Tutorial in the menu buttons at the top of the page. We assume the participant will then read the tutorial information.
Scenario 1: Search: Mandatory Words
Task: You are interested in conducting research on Georgian Buildings in Dublin. You are particularly concerned about the closing dates and necessary criteria for funding applications in this field. Conduct a search that takes this into account.
Steps: 1. Start on homepage 2. Click on Search 3. Select Architecture from the drop-down Speciality menu. 4. Enter up to 5 mandatory words and their scores, including “application deadline”, “application requirement”, “application guide”, etc.
Assumption: The participant will understand the correlation between the information given to them and the standard words available to them. We assume the participant will choose “application deadline”, “application requirement” and “application guide”.
Scenario 2: Search: Mandatory/Excluded Words
Task: You are a European citizen seeking funding for a research project concerning the general field of health and nutrition. Your main priority is to discover whether or not you are suitable for the relevant grant. You are not concerned with who the investigator might be or whether the funding is a fellowship. Conduct a search taking all of these issues into account.
Steps:
1. Start on homepage 2. Click on Search 3. Select Nutrition from the drop-down Speciality menu. 4. Enter up to 5 mandatory words and their relevant scores, such as ‘non-U.S. citizens’, ‘eligibility’, ‘eligib’. 5. Click the green plus icon for excluded words. 6. Enter up to 3 excluded words, such as ‘principal investigator’, ‘investigator’ and ‘fellowship’.
Assumption: Participants will use mandatory words such as ‘eligible’ and ‘non-U.S. citizens’. However, there are four variations of the word ‘eligible’ and its score, which may confuse some participants. Participants will also exclude words based on the information provided. (An illustrative, structured version of this search input is sketched after Scenario 4.)
Scenario 3: Search: Mandatory Words
Task: You are seeking funding for a new research project in the field of pharmaceutical compounds. In this instance you are interested in accessing funding from international bodies.
Steps: 1. Start on homepage 2. Click on Search 3. Select Drugs from the drop-down Speciality menu. 4. Enter up to 5 mandatory words and their relevant scores, such as ‘new proposal’, ‘any country’ and ‘international funding’.
Assumptions: Users are expected to consider the choice of country and funding bodies.
Scenario 4: Search: Open Speciality
Task: You are interested in conducting research on the treatment of minor injuries. You wish to carry this research out in the country where you are currently residing and would prefer to do so in conjunction with an institute of higher learning.
Steps: 1. Start on homepage 2. Click on Search 3. Select either Medicine or Sports from the drop-down Speciality menu. 4. Exclude standard words related to foreign funding bodies. 5. Use any mandatory words they feel would be relevant.
Assumptions: It is assumed that the participant will have to decide between the specialities ‘Medicine’ and ‘Sports’. It is assumed that the participant will exclude words related to foreign funding.
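For readers unfamiliar with GrantFinder's search model, the sketch below records the Scenario 2 input as structured data: one speciality, up to five mandatory words each carrying a score, and up to three excluded words. This is purely an illustration of the scenario; the field names and the placeholder scores are assumptions and do not represent GrantFinder's actual data format or API.

```python
# Hypothetical, illustrative representation of the Scenario 2 search input.
# Keys, structure and placeholder scores are assumptions; they mirror the
# scenario steps above, not GrantFinder's real submission format.
scenario_2_search = {
    "speciality": "Nutrition",
    "mandatory_words": {          # word -> score chosen by the participant
        "non-U.S. citizens": 1,   # scores shown are placeholders only
        "eligibility": 1,
        "eligib": 1,
    },
    "excluded_words": ["principal investigator", "investigator", "fellowship"],
}
```

Writing the input down in this form makes it easier for the note taker to record exactly which standard words, scores and exclusions a participant chose during each task.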
Scenario 5: Results Access
Task: Earlier today, you conducted a search using GrantFinder. You have just received the following email: “You have received this email because you submitted a job to the GrantFinder server. A summary of the results can be obtained via the Results page on GrantFinder. Results will expire in two weeks. In case of questions please reply directly to this email.” Access your search results using the Experiment ID: 237 and email address: majella.haverty@ucdconnect.ie.
Steps: 1. Start on Homepage 2. Click on Results 3. Enter the experiment ID in the experiment field. 4. Enter the email address in the email field. 5. Click Submit Data.
Assumptions: Users will be able to access results using the relevant email and experiment ID.
Scenario 6: Results Evaluation
Task: Using the search results you have just accessed, choose the highest-scoring result link that contains 9 word clusters. Retrieve this information based on the results section of the tutorial page.
Steps: 1. Click Results. 2. Enter the experiment ID and email address (from the previous task). 3. Click Submit Data. 4. Select the link _______________________.
Assumption: It is assumed that the user will select the link with the highest score and the suggested number of word clusters. Participants may choose to refer back to the tutorial page.
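The Scenario 6 criterion can be stated compactly: among the returned results, keep only those with nine word clusters and then take the one with the highest score. The minimal sketch below expresses this selection using hypothetical result data; the figures are invented for illustration and are not taken from an actual GrantFinder search.

```python
# Minimal sketch of the Scenario 6 selection criterion.
# Each tuple is (grant title, score, number of word clusters); values are hypothetical.
results = [
    ("Grant A", 7.2, 9),
    ("Grant B", 9.1, 6),
    ("Grant C", 8.4, 9),
]

nine_cluster_results = [r for r in results if r[2] == 9]
best = max(nine_cluster_results, key=lambda r: r[1])
print(best)  # ('Grant C', 8.4, 9): highest score among the nine-cluster results
```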
Appendix 7: Post-Test Questionnaire
POST-TEST QUESTIONNAIRE
1. What is your overall opinion of the website?
**If they have already answered it in Q1, don't ask Q2 & Q3.
2. Is there anything you like about the website?
3. Is there anything you dislike about the website?
4. What one major improvement would you make to the website?
5. What did you think of the use of standard words provided by the website?
6. Would you use this website again if you had to search for grants? Why/why not?
7. Do you have any other comments about the website?
Appendix 8: Usability Test Script
Test Script
**Each participant must be read the exact same script in order to create fair testing conditions.
• Introduction
i. First, we would like to thank you for agreeing to take part in our usability test. The research will be conducted by postgraduate students and supervised by Dr Judith Wusteman, all from the School of Information and Communication Studies at UCD.
ii. The purpose of this Capstone project is to evaluate the usability of the GrantFinder tool. GrantFinder is a web-based tool that aims to help researchers find research grant calls.
iii. The aim of this test is to assess how easy the website is to use. The test will begin with a pre-test questionnaire; you will then be given a series of predefined tasks to perform using GrantFinder. You will be asked to comment aloud while completing the tasks. The test will conclude with another brief questionnaire. The whole session will take less than an hour.
• Explain the Process
i. In terms of the test, you will complete 6 predefined tasks and one warm-up task. We ask you to use a think-aloud approach, in which you will be asked to verbalise any opinions or thoughts while you are completing the task at hand.
ii. The thoughts you verbalise simply allow us to know what you are thinking, why you are making a particular decision or how you feel about the website. These thoughts will be the primary source of our data collection and our ability to evaluate the usability of GrantFinder.
iii. Following the pre-test questionnaire, we will ask you to complete a warm-up task, in which I will help and guide you through the process. For the remainder of the test, I may use verbal prompts to elicit more thoughts and responses from you in relation to the website and tasks.
iv. If you have any questions, please do not hesitate to ask.
• Consent Process
i. Your participation is voluntary and you can withdraw from the research at any time. With your permission, your comments and your interaction with the computer during the session will be recorded. The recording will only be viewed by members of the team and their supervisor.
ii. My research partner will also be taking some notes whilst you are completing the tasks. Along with the recordings, the notes won’t be seen by anyone except the people working on the project.
iii. Your identity will not be disclosed to anyone outside the student team and their supervisor. In the project report and any subsequent publications, anonymising codes (e.g. ‘Librarian’, ‘academic’, ‘administrator’, etc.) will be used.
**Ask the participant to read and sign the consent form.
• Reassurance
i. Before we begin, I want to assure you that it is the website we are testing and not you. This test does not reflect on you; there are no right or wrong answers.
ii. We encourage you to be as honest as possible, so please don’t worry about offending anyone when describing any difficulties or issues you face whilst using the website. Our aim is to use the information you provide to give the developer recommendations that can help improve the site.
iii. Again, we are using a think-aloud approach, so the more you can talk about what you are thinking when using the website, the better.
iv. Feel free to ask questions during the task. However, due to the nature of the test, we may not answer them straight away, as we wish to see how you complete the tasks without any aid. Once the test is completed, we will try to answer any questions you have.
• Pre-Test Questionnaire
We will first begin with a few questions:
PRE-TEST QUESTIONNAIRE
1. Do you have any experience in seeking or applying for grants?
IF ANSWER YES:
• How frequently do you seek or apply for grants?
• Do you use any particular websites?
• What aspects of those websites do you particularly like?
IF ANSWER NO:
• In terms of discovering resources, how skilled would you describe yourself?
• What websites or search tools would you use to discover resources?
• What do you consider to be the important aspects of a search engine tool?
2. Have you ever used GrantFinder before? If yes, was your search successful?
• Initial Reaction
**Prior to beginning the tasks, open the GrantFinder homepage and ask the participant the following question:
• What are your first impressions of the website?
• Demonstration Task
We will start with a warm-up task, in which we encourage you to ask any questions about the think-aloud process. We may prompt you and encourage you to verbalise your thoughts during this task.
Scenario: Tutorial
Task: You are unsure how to perform a search using GrantFinder. Where would you find more information about how to use this site?
• Test Tasks
**For this stage, participants should complete tasks without any assistance from either member of the research team.
**The researcher may prompt the participant using non-leading questions such as: What are you thinking? Why did you take that step?
We will now begin the usability test. For this part, we will not be able to help you complete any of the tasks. We may encourage you from time to time to think aloud.
Scenario 1: Search: Mandatory Words
Task: You are interested in conducting research on Georgian Buildings in Dublin. You are particularly concerned about the closing dates and necessary criteria for funding applications in this field. Conduct a search that takes this into account.
Scenario 2: Search: Mandatory/Excluded Words
Task: You are a European citizen seeking funding for a research project concerning the general field of health and nutrition. Your main priority is to discover whether or not you are suitable for the relevant grant. You are not concerned with who the investigator might be or whether the funding is a fellowship. Conduct a search taking all of these issues into account.
Scenario 3: Search: Mandatory Words
Task: You are seeking funding for a new research project in the field of pharmaceutical compounds. In this instance you are interested in accessing funding from international bodies.
Scenario 4: Search: Open Speciality
Task: You are interested in conducting research on the treatment of minor injuries. You wish to carry this research out in the country where you are currently residing and would prefer to do so in conjunction with an institute of higher learning.
Scenario 5: Results Access
Task: Earlier today, you conducted a search using GrantFinder. You have just received the following email: “You have received this email because you submitted a job to the GrantFinder server. A summary of the results can be obtained via the Results page on GrantFinder. Results will expire in two weeks. In case of questions please reply directly to this email.” Access your search results using the Experiment ID: 237 and email address: majella.haverty@ucdconnect.ie.
Scenario 6: Results Evaluation
Task: Using the search results you have just accessed, choose the highest-scoring result link that contains 9 word clusters. Retrieve this information based on the results section of the tutorial page.
• Post-Test Questionnaire
POST-TEST QUESTIONNAIRE
1. What is your overall opinion of the website?
**If they have already answered it in Q1, don't ask Q2 & Q3.
2. Is there anything you like about the website?
3. Is there anything you dislike about the website?
4. What one major improvement would you make to the website?
**If they have already answered, do not ask again.
5. What did you think of the use of standard words provided by the website?
6. Would you use this website again if you had to search for grants? Why/why not?
7. Do you have any other comments about the website?
• Debrief & Thank You
i. We have come to the end of the usability test. We want to thank you for participating in today’s test and giving us your time.
ii. If you have any questions now or later, we would be happy to try and answer them.
iii. Please feel free to contact us at any stage. Thank you again.
Appendix 9: Email to Participants
‘Dear XX,
Many thanks for agreeing to help us evaluate the GrantFinder web tool. Attached is some further information about the testing sessions. Would Thursday, Xth of June at Xam be convenient for you?
The session shouldn't take more than an hour and we will bring a laptop with us, so we just need a quiet space in which to carry out the testing. Would you like to carry out the test in your own office, a quiet space in your own building, or would you like to come to the School of Information & Communication Studies?
Although GrantFinder is publicly available, we would be grateful if you didn't try it out before the test session, as we are interested in exploring participants' first impressions of the site.
Regards,
John Kiely, Majella Haverty, Satish Narodey, Kunal Kalra’
Appendix 10: Information Leaflet
Usability Testing of the GrantFinder tool
Information Leaflet, May 2016
What is GrantFinder? GrantFinder is a web-based tool that aims to help researchers find research grant calls.
Who is conducting the research? The research is conducted by four postgraduate students and supervised by Dr Judith Wusteman from the School of Information and Communication Studies at UCD.
What are the aims of the research? The aim of this Capstone project is to evaluate the usability of the GrantFinder tool.
What will participation in the research involve? Two members of the student team will meet you, either in your office or in the School of Information & Communication Studies. After a brief introduction and a questionnaire, you will be given a series of predefined tasks to perform using GrantFinder. You will be asked to comment aloud while completing the tasks. The test will conclude with another brief questionnaire. The whole session will take less than an hour. With your permission, your comments and your interaction with the computer during the session will be recorded. The recording will only be viewed by members of the team and their supervisor. Your identity will not be disclosed to anyone outside the student team and their supervisor. In the project report and any subsequent publications, anonymising codes (e.g. "librarian", "academic", "administrator", etc.) will be used.
What if I change my mind about participation in this research? Your participation is voluntary and you can withdraw from the research at any time.
What if I have questions? If you have any questions, please email Dr Judith Wusteman at judith.wusteman@ucd.ie.
Appendix 11: Usability Testing Timetable
Appendix 12: Ethics Form
Human Subjects Exemption from Full Ethical Review Form
Including Access to UCD Students & University-Wide Surveys
An Exemption from Full Ethical Review is not an exemption from ethical best practice and all researchers are obliged to ensure that their research is conducted according to HREC Guidelines. Depending on the nature of the study described below, your study may require a preliminary review by the HREC Chairs and may be subject to further clarification. Please do not alter the format of this form and submit it as a Word document.
Section A: General Information
I apply for Exemption from Full Ethical Review of the research protocol summarised below, on the basis that (select Yes or No):
a. All aspects of the protocol have received ethical approval from an approved body (e.g. hospitals, hospices, prisons, health authorities) - No
b. The research protocol meets one or more of the criteria for exemption from review as detailed in Section 3 of Further Exploration of the Process of Seeking Ethical Approval for Research (HREC Doc 7) - Yes
I am also requesting permission to access UCD Students for one of the following (select Yes or No):
a. I am accessing students from one school only and will seek permission from the Head of that school - No
b. I am seeking permission to access UCD Students from more than one school (accessing students in more than one school will require HREC approval) - Yes, but PhD students only (max 10) and I will request permission from the head of school and supervisor in all cases.
c. I am seeking permission to conduct a university-wide survey of UCD students (if the research is a campus-wide student survey and involves students from two or more schools, then permission to schedule the survey will be sought from the University Student Survey Board (USSB) on your behalf after this form has been reviewed by a HREC Chair and/or HREC Committee). - No
I have also read the following Guidelines (select Yes or No):
(i) HREC Guidelines and Policies specifically Relating to Research Involving Human Subjects http://www.ucd.ie/researchethics/policies_guidelines/ - Yes
(ii) The UCD Data Protection Policy http://www.ucd.ie/dataprotection/policy.htm - Yes
(iii) The Data Protection Guidelines on Research in the health sector (if applicable) https://www.dataprotection.ie/documents/guidance/Health_research.pdf - Yes
For all the latest versions of the HREC Policies and Guidelines please see the research ethics website: http://www.ucd.ie/researchethics/policies_guidelines/
1. PROJECT DETAILS
a) Project Title: Evaluating the Usability of the GrantFinder Tool
b) Study Start Date: (dd/mm/yy) 28/01/16; Study Completion Date: (dd/mm/yy) 01/09/16
c) Start Date of Data Collection: (dd/mm/yy) 01/05/16; Completion Date of Data Collection: (dd/mm/yy) 01/09/16
NOTE: In no case will approval be given if recruitment and/or data collection has already begun.
2. APPLICANT DETAILS
a) Name of Applicant (please include title if applicable): Majella Haverty; UCD Student Number (if applicable): 15200660
b) Applicant's position in UCD (please put 'yes' in relevant space): Staff - No; Postgraduate - Yes; Undergraduate - No
c) Academic/Professional Qualifications: Bachelor of Religious Education with English (Hons)
d) Applicant's UCD Contact Details: UCD Telephone (if applicable); UCD Email (bearing reference to applicant's name): majella.haverty@ucdconnect.ie
e) Applicant's UCD Address (UCD school address, NOT home address): School of Information and Communication Studies, Belfield
f) Name of Supervisor (please include title if applicable): Dr. Judith Wusteman
g) Supervisor's UCD Contact Details: UCD Telephone: 7612; UCD Email: judith.wusteman@ucd.ie
h) UCD Investigator(s) and affiliations (name all investigators & co-investigators on project): All Masters students in UCD School of Information and Communication Studies: Majella Haverty, Satish Narodey, Kunal Kalra, John Kiely
i) Funding (if applicable): Source; Amount
j) EXTERNAL APPLICANTS ONLY (if study is not associated with any UCD staff member or school)
External Investigator(s) if applicable: Yes / No; If YES, please provide name(s)
Name of Organization; Relationship with External Organization
Address of Organization
External Investig...
k) INSURANCE
Please note that UCD's existing insurance policy providing cover in relation to research work and placements, being undertaken by UCD staff and students, is currently limited to Public Liability only. Provisions of other types of insurance cover, as listed in the table below, are the sole responsibility of the researcher. Please select Yes or No and provide details, where required. Please do not assume that you do not require insurance.
NOTE: This section is mandatory: your application will not be processed unless this section is completed.
i. Does this study require medical malpractice or clinical indemnity insurance? No. Is relevant insurance cover already in place? (Yes/No); Insurance Holder's Name:
ii. Is this study covered by the Clinical Indemnity Scheme (CIS)? No. Healthcare Provider's Name:
iii. Is there any blood sampling involved in this study? No. Who will be taking samples?; Insurance details:
iv. Are there other medical procedures involved in this study? No. Details of Procedures:
v. Does this study involve travelling outside of Ireland? (If Yes, please name the country/countries where the researcher will travel in the field below.) No. Name country/countries outside of Ireland:
The Office of Research Ethics will liaise with the Insurers and will advise you of any specific requirements, if necessary.
Section B: Research Design & Methodology
1. RESEARCH PROPOSAL
a. Methods of data collection (please select the appropriate box and provide brief details):
i. Standard educational practices - No
ii. Standard educational tests - No
iii. Standard personality tests - No
iv. Standard psychological tests - No
v. Interviews or focus groups - Yes. Usability tests. The duration will not exceed one hour per test. The usability test will comprise tasks that the participants will be asked to complete using the website, plus brief pre- and post-test questionnaires.
vi. Public observations - No
vii. Persons in public office - No