With all the data that usability testing produces, it can be difficult and time-consuming to uncover the golden nuggets, and unfortunately you don't always have enough time to wait for the formal usability test report to impact the design process. Sound familiar? At Vistaprint, we've been piloting new ways of capturing and transforming test data into actionable insights to speed up the iterative design process. As a result, we're finding that this not only shortens our reaction time but also improves collaboration and engagement across stakeholders.
Who We Are – 2 min (never starts on time)
Identify the Problem – 3 min
Describe the Idea – 7 min
Share Our Results – 13 min
Questions + Discussion – 15 min
Vistaprint N.V. (Nasdaq: VPRT) empowers more than 15 million micro businesses and consumers annually with affordable, professional options to make an impression.
UX CoE – Created in October 2011
7 UX Designers with over 75 years of combined UX experience
2 UX Researchers
We focus on delighting our customers by creating an effortless, enjoyable experience based on user-centered design, customer research, and iterative improvements.
As our design process became more iterative, it meant that we needed customer feedback… FAST!
We could no longer wait for the formal usability report, although it is still useful for documentation, executive buy-in, etc.
We needed design direction immediately to stay on schedule. Yet, each UX designer didn’t have the luxury of watching all 10 usability sessions.
What to do…?
High level, before getting into the details: utilize people and processes to save time in the iterative design process.
People
Low-tech process
Capture data
Sort data
Capture trends across participants
We already have a lab at VP. We already have a lot of interest from stakeholders in observing usability testing. So why not make better use of their time?
Collaboration – Sharing the workload to obtain results
Engagement – Stakeholders are more involved in the usability testing process
Ownership – Everyone owns the findings
Speed! – Faster, better, more comprehensive feedback that can be used in an iterative design process – because we're still a lean UX team!
High level, before getting into the details: utilize people and processes to save time in the iterative design process.
People
Low-tech process
Capture data
Sort data
Capture trends across participants
To be clear, we are not the first to think about this. There is a lot of information out there on data-logging techniques, many leading to digital tools for quickly capturing data and turning it into a usability test report. There are different techniques for logging usability test data in general, often applied by the usability practitioner him/herself. At UPA 2004, Dr. David Dayton presented details of how six different practitioners (including himself) logged usability data. These were the four most popular data-logging practices:
Problem Coding – Record predictable events and sort them on the fly into one or more categories. Analyze the resulting quantitative data with statistical methods and compare to pre-defined benchmarks to assess the usability of the product.
Event Description – Record free-form handwritten notes to capture significant events and/or usability problems. Analyze notes post-test, group and categorize events, and rate their severity.
Event Description with Problem Coding – Code events into certain pre-set categories, and enter descriptive notes for later team review and discussion of the most significant problems.
Event Description with Problem Coding & Video Time Stamps – Capture the "story" of a test session in shorthand notes.
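As a rough illustration of the last practice, here is a minimal Python sketch of logging time-stamped, problem-coded events and grouping them post-test. The problem codes, participant IDs, and notes are hypothetical placeholders, not Dayton's actual scheme.

```python
# A sketch of "event description with problem coding & video time stamps":
# each shorthand note carries a video timestamp and one or more problem
# codes for later grouping. All codes and notes below are made up.
from dataclasses import dataclass

@dataclass
class LoggedEvent:
    video_time: str   # hh:mm:ss offset into the session recording
    participant: str
    codes: list       # pre-set problem categories, e.g. "navigation"
    note: str         # free-form shorthand description

log = [
    LoggedEvent("00:03:12", "P1", ["navigation"], "Hunted for upload link"),
    LoggedEvent("00:07:45", "P1", ["terminology"], "Unsure what 'proof' means"),
    LoggedEvent("00:02:58", "P2", ["navigation"], "Hunted for upload link"),
]

# Post-test analysis: group events by problem code to find recurring issues.
by_code = {}
for event in log:
    for code in event.codes:
        by_code.setdefault(code, []).append(event)

for code, events in by_code.items():
    affected = {e.participant for e in events}
    print(f"{code}: {len(events)} event(s) across {len(affected)} participant(s)")
```

The timestamps let the team jump straight to the relevant moment in the session video when discussing a coded problem.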
Each has varying pros and cons, depending on which approach is taken.
Then there’s the Rainbow Spreadsheet by Tomer Sharon – a high-tech approach that uses a Google Doc for capturing test data. Each test stakeholder has access to it and is responsible for entering observations. With it, you can collaboratively observe UX research sessions with team members (or clients) and conduct research that involves the entire product team, with results that are turned around quickly and that team members will be committed to acting on – all without writing a formal, exhaustive research report that no one wants to read. It does, however, require “homework” for familiarity with and access to Google Docs. Ultimately, we wanted something low-tech with less mental overload for stakeholders.
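To make the rainbow-spreadsheet idea concrete, here is a minimal Python sketch of the underlying grid: observations as rows, participants as columns, a cell marked when that participant exhibited the observation. The observation texts and participant IDs are made-up examples – the real technique lives in a shared spreadsheet, not code.

```python
# Sketch of a rainbow-spreadsheet-style grid. Stakeholders log
# (observation, participant) pairs during sessions; afterwards, rows are
# ranked by how many participants they affected. All data is hypothetical.
from collections import defaultdict

logged = [
    ("Missed the upload button", "P1"),
    ("Missed the upload button", "P2"),
    ("Confused by pricing page", "P1"),
    ("Missed the upload button", "P4"),
]

participants = ["P1", "P2", "P3", "P4"]

# Build the grid: observation -> set of participants who showed it.
grid = defaultdict(set)
for observation, participant in logged:
    grid[observation].add(participant)

# Observations seen by the most participants bubble to the top --
# the rows with the most "colored" cells in the spreadsheet.
for observation, who in sorted(grid.items(), key=lambda kv: -len(kv[1])):
    row = ["x" if p in who else "." for p in participants]
    print(f"{observation:30s} {' '.join(row)}  ({len(who)}/{len(participants)})")
```

Ranking by participant count is what turns raw observations into prioritized findings across sessions.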
But we were hoping for something more visible and interactive – where we could see trends throughout the course of the test sessions and quickly organize the visible data into a set of priorities.
High level, before getting into the details: utilize people and processes to save time in the iterative design process.
People
Low-tech process
Capture data
Sort data
Capture trends across participants
We took an iterative approach to field-testing some ways of capturing data in a low-tech manner.
Let all stakeholders record all observations on yellow sticky notes, identified by participant #
Easy for stakeholders
At end of session, organize stickies by + and – and by site area
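The end-of-session sort can be sketched in a few lines of Python, assuming each sticky is recorded as (participant, site area, +/- sentiment, text). The participants, areas, and notes below are hypothetical placeholders.

```python
# Sketch of sorting the sticky-note wall: tally notes by site area and
# sentiment to see where positives and negatives cluster across
# participants. All notes below are made-up examples.
from collections import Counter

stickies = [
    ("P1", "checkout", "-", "Could not find the coupon field"),
    ("P2", "checkout", "-", "Could not find the coupon field"),
    ("P1", "gallery", "+", "Loved the template previews"),
    ("P3", "checkout", "+", "Payment step felt fast"),
]

# Group notes by (site area, sentiment), mirroring the physical wall.
wall = Counter((area, sentiment) for _, area, sentiment, _ in stickies)

for (area, sentiment), count in sorted(wall.items()):
    print(f"{area:10s} {sentiment}  {count} note(s)")
```

A cluster of minuses under one site area is exactly the kind of trend the physical sort is meant to surface at a glance.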
Inconsistencies – for example, a funny quote was documented by everyone, while something deemed important was documented only by UX. Observers also weren't documenting consistently across participants.
One example where we separated features and had stakeholders document observations and comments by + and -. Another example where we had observers document by step in the flow.
Stakeholders weren’t as engaged over time, due to redundancies.
So now what? How do we learn from our pilot tests to remove the redundancy and improve our time spent with the data yet add value for stakeholders? How do we squeeze more out of them?
There are many different kinds of tests. Here are a couple of examples; we are mainly focused here in the middle. You may need to change your approach based on the kind of test, the research questions, and your place in the process.
Kind of test, research questions, place in process