Budget Usability without a Usability Budget
1. Budget Usability without a Usability Budget
Suzanne Chapman
User Experience Department
Julie Piacentine
Reference Department
Image by flickr user alancleaver_2000
3. What is Budget Usability?
aka “discount” or “informal” or “do it yourself”
“The purpose isn’t to prove anything; it’s to get insights that enable you to improve what you’re building”
– Steve Krug
It’s not just about the money… It’s also about the time and effort.
It’s qualitative, informal, and unscientific.
Our definition: anything that you can do with low overhead that involves users interacting with a site.
Image by flickr user sarabc
4. Why Use Budget Methods?
• Quick answers to simple questions
• Faster
• Easier
• Cheaper
• Targeted
• More staff participation
Image by flickr user electrofantastic
5. When to Use Budget Techniques?
When you just want a quick reaction to something.
• Link label
• Placement of something
• Findability of some piece of content
• Attitude towards a design
• Do users get it?
Image by flickr user s2photo
6. How to Use Budget Techniques?
• Early and often
• Alongside usage statistics & user feedback
• In conjunction with (or in preparation for) larger evaluations
• With a grain of salt
Image by flickr user sergesegal
7. Participants
• Anywhere from 6-100+
• Where & how to find participants:
o "in the wild" & on-the-fly!
o links from website
o emails sent to departments via Subject Specialist Librarians
• Incentives: candy, MLibrary gadgets, or a few "blue bucks" each
Image by flickr user faultypixel
8. Lessons Learned & Tips
• Test the test. Time spent piloting the test is time well spent.
• Articulate your expectations but be flexible.
o Just want general feedback? Ask an open question.
o Want to solve a specific problem? Ask a direct question.
• Iterate. Know when to admit that something didn't work well. Refine and repeat.
10. Participatory Design: X/O & Ideal Design
Description:
Actively involve users in the design process.
(inspired by Nancy Foster)
11. Participatory Design X/O
Instructions:
1. Circle the things you find useful
2. Put an X through the things you don't find useful
3. Add a note for anything that's missing
20. Card Sorting
Description:
Ask users to sort a series of cards, each labeled with a piece of content, into groups that make sense to them.
Did a combination of sessions with individual participants and groups.
158 Participants:
• 18 Undergrads & Grads
• 140 Library Staff
Materials Cost: $0 / $125 for online tool
Incentives Cost: $90
Set up time: ~3hrs
Test time: ~2hrs
Analysis: >10hrs
21. Card Sorting - Services/Departments/Libraries
Group paper card sort
26. Guerrilla Testing
Description:
• Print out web page
• Approach someone “in the wild” & ask if they can spare 5 min.
• Ask 1-2 short questions
Quick and short answers to quick and short questions. Five minutes is our goal!
Participants:
• 20 undergrad/grad
Materials Cost: $0
Incentives Cost: $0
Set up time: ~2hrs
Test time: ~2hrs
Analysis: ~4hrs
27. Guerrilla Testing
Contents:
• Removed/added links
Labels:
• “Quick Links” is good
• Some link labels revised
Location:
• Not good! Needs to be more prominent
28. Online Guerrilla Testing
Description:
“Survey” distributed via Subject Specialist Librarians, news items, and directly from the access system interface.
Automated version of the paper guerrilla test to reach a larger audience.
Participants:
• In progress
Materials Cost: $0*
Incentives Cost: $0
Set up time: ~1hr
Test time: 0
Analysis: ~1hr
29. Where would you click to find more information about the 1st item in the list?
Where would you click to go directly to an article?
Traditionally most usability work was done through this committee, but the recently created UX department will now also be focusing on this type of research. Nice to mention that UTF members volunteer and rarely have prior experience, so budget techniques are easy to learn and easy to take back to their departments to use for other purposes!
This is OUR definition and it’s pretty loose.
Ken – maybe you want the slide to just say “Faster” and fill in the bits in parens?
Faster (less time investment for prep)
Easier (less time designing evaluations; cut out time-consuming things like recruiting participants)
Cheaper (don’t need fancy software or facilities)
More targeted (have a question, answer it directly)
More staff (with less expertise) can take it on (and maybe just as reliable results).
Doing a couple of budget tests is better than doing nothing. The ramp-up to doing formal testing can be prohibitive to actually getting it done at all.
[This seems better at the end, but I like the idea of the last slide being the online guerrilla.]
Good example would be to describe our first attempt at guerrilla: we were looking to relabel the link to our various delivery services… we asked 9 people what they’d call it and got 9 different answers. It was still interesting and useful, but we had to redesign and redo the test to get a solid answer.
Suz intro: ask how many people have done “formal” vs. “informal”
Search language
Talk about how we analyzed the data (by user group, separating out areas) to identify trends.
Group Paper Card Sort w. Students: 18 participants (undergrads and grad students, divided into 4 groups) organized 84 cards representing half of this content. Allowed us to see interaction among students, hear thought processes, and better understand confusing labels.
Individual Online Card Sort w. Staff: purchased a license to OptimalSort, allowing us to place the exercise in front of many individuals. 140 staff completed it. Provided more data, but didn't expose the thought process.
Exploring the results can be tricky. The Task Force also came up with "unified" categories, based on the categories the participants created as well as the comments they made during the card sort. Several similarities between categories surfaced across the various participant groups, whether performing a paper sort or using the online tool. Both the similar groupings across participant groups and the Task Force's "unified" categories were suggested as bases for further tests. Implementing changes would be a large-scale undertaking that would add significant complexities for users and staff.
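One common way to surface the "similar groupings" mentioned above is a co-occurrence count: for every pair of cards, tally how often participants placed them in the same group. This is a minimal sketch of that idea; the card labels and example sorts below are made up for illustration, not our actual data.

```python
# Hypothetical card-sort analysis sketch: count how often each pair
# of cards was grouped together across participants. Pairs with high
# counts suggest candidate categories for further testing.
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a list of card labels.
# (Example data only.)
sorts = [
    [["Hours", "Directions"], ["Ask a Librarian", "Chat"]],
    [["Hours", "Ask a Librarian"], ["Directions", "Chat"]],
    [["Hours", "Directions", "Chat"], ["Ask a Librarian"]],
]

pair_counts = Counter()
for sort in sorts:
    for group in sort:
        # Sort labels so each pair has one canonical key.
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Report pairs from most to least frequently co-grouped.
for pair, n in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: {n}/{len(sorts)}")
```

With larger datasets (e.g. an OptimalSort export), the same counts are usually normalized into a similarity matrix and fed to hierarchical clustering, but the raw tallies alone are often enough to spot trends.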
Dark spot: #29 Scholarly Publishing Office + #30 UM Press
Light spot: Asia Library (15) & Area Programs (13): medium
Serials & Microforms Services (41), Shapiro Undergraduate Library (20) & Askwith Media Library (16): medium dark
Two Questions, One Test
Advantage: made good use of participants’ time
Disadvantage: spent more time analyzing results
URL of survey is at http://umichlib.qualtrics.com/SE/?SID=SV_3rZvKvGPvIkS1ms
Still need to set up a TinyURL and add the URL to this slide.
Set up that we want them to do a hands-on exercise.
Handouts.
Instructions:
Ask them to compare with a neighbor.
Ask them to raise hands if they had identical marks as their neighbor.
Now, part 2.
Instructions:
Raise hands if they marked the “Books” link (with screenshot).
Those who didn’t: where did they click?
Say something interesting about the % who got it right.
Broader discussion about applying this method: so, what’s something on your library website that you think users might have a hard time finding?