This document discusses the User Experience Index (UXI) as a way to quantify and compare end-to-end user experiences across complex tasks and systems. It describes using the PURE methodology to rate steps in a user's current experience and ideal target experience to identify friction points. A UXI score is then calculated as a ratio of the target friction to current friction to provide a quantifiable goal and way to track progress. Examples analyze replacing a broken laptop and setting up two-factor authentication. The UXI approach aims to facilitate cross-team cooperation by measuring experiences holistically rather than just individual applications.
1. Assessing friction in complex user journeys:
The User Experience Index (UXI)
UXPA 2022
2. Danny Hager, Ph.D.
Product Strategist & Senior UX Researcher
Office of the CIO, IBM
Jon Temple, Ph.D.
Design Principal & Senior UX Researcher
Office of the CIO, IBM
4. UX at a strategic level
• How good or bad are current end-to-end experiences?
• How much can we impact an experience?
• How do we get disparate teams focused on a common vision?
• How do we quantify and track progress (e.g., OKRs)?
5. Voice of the Employee (VotE) program
IBM | CIO Design

Program goals
• Understand your user tasks and their success rate
• Expose the pain points preventing task completion
• Measure how users feel about your tool (e.g., NPS)
• Automate data collection and presentation
• Communication channel between user and team

Adoption
• Enabled over 450 applications to collect feedback.
• About 50,000 end user responses over the past year.

Project team value
At a glance, each project can view sentiment, trends, goals, and blockers towards goal completion.

Business value
• Support OKRs for individuals, projects and domains
• Ensure a level of quality across all IBM offerings
• Productive environments help retain talent
6. End-to-end experiences versus project focus

Experience from a PROJECT perspective:
Simone needs a new laptop → selects a replacement device → receives reminder emails → schedules a return → returns the old device.

Experience from a USER'S perspective:
Simone needs a new laptop → asks her manager about device refresh rules → reads policy info → selects a replacement device → receives the device → follows the setup process → figures out how to migrate data → learns how to transfer bookmarks, then transfers them → manually migrates data → manually installs missing software → wipes data/passwords and packs up the old device → receives dunning emails → schedules a return.
7. High-level approach we have been exploring

• Identify end-to-end tasks. Many tasks require multiple tools; even when each tool is good, there is often friction when moving from one to the next.
• Estimate friction for tasks. Friction = any element of a workflow that impedes the user's ability to complete their goal (screen design, content, business rules, etc.).
• Compute a "User Experience Index" score for each task. UXI scores indicate how much avoidable friction is in the system: the difference between the current experience and an "ideal" one.
10. Journey maps CAN show "friction" …
…but a stack of journey maps is cumbersome to summarize –
we needed to quantify these journeys.
12. Quantifying friction using the PURE methodology
Pragmatic Usability Rating by Experts
From the Nielsen Norman Group (NN/g): https://www.nngroup.com/articles/pure-method/ (Christian Rohrer, April 2017)
14. A researcher constructs one or more journeys for the task
• Spreadsheet listing each step, with notes about UX issues, existing user data, channels, etc.
• Presentation containing screenshots (useful for the team and for stakeholders)
15. As a group, team members and the researcher rate each step from 1 to 3
Count down 3… 2… 1… and each person types their rating into chat (WebEx, Slack, etc.)
16. As a group, team members and the researcher rate each step from 1 to 3 (in this example, all four raters typed "1")
• The ratings for the step are discussed; any outliers make a case for their rating.
• Any UX issues or ideas are documented.
19. The final ratings for all steps are summed to create the PURE score
[Spreadsheet screenshot: per-step consensus ratings of 1, 2, and 3 for every step of the journey]
PURE Score = 41
You might be wondering – what does a 41 mean? Is that good or bad?
Hold onto that thought!
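The summation step above is simple enough to sketch in a few lines. This is a minimal illustration, not the team's tooling; the step ratings shown are made up for the example.

```python
# Minimal sketch of PURE scoring: an expert panel assigns each step of a
# journey a consensus friction rating from 1 (low) to 3 (high), and the
# PURE score for the task is the sum of those step ratings.

def pure_score(step_ratings):
    """Sum the 1-3 consensus ratings across all steps of a journey."""
    if any(r not in (1, 2, 3) for r in step_ratings):
        raise ValueError("PURE step ratings must be 1, 2, or 3")
    return sum(step_ratings)

# Illustrative consensus ratings for a short seven-step journey.
ratings = [1, 1, 2, 3, 2, 1, 2]
print(pure_score(ratings))  # prints 12
```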
20. Summary of our experience with PURE analysis
• Does not replace user research.
• 3-point scale is good enough.
• Team rating and consensus is critical.
• Becomes faster and more efficient over time.
• PURE score itself is problematic.
22. Felicia Fix-it-herself has a problem with her company-provisioned Mac: several keys have stopped working.
She wants to get it repaired or replaced and would prefer to avoid contacting the help desk.
23. Trigger: Felicia wants to get her Mac repaired or replaced. (She'd like to avoid the help desk.)
Success: Felicia successfully submits a request for a replacement device.
Different teams with different goals, metrics, backlogs, roadmaps and stakeholders, but all part of the task.
40. 13 steps (20 friction points) for the user to find the machine's serial number, discover its warranty status and determine next steps.
This sub-task amounts to 29% of the friction in the "replace a broken Mac" workflow.
[Journey map with per-step ratings for this sub-task: 1, 1, 1, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2 (sum = 20)]
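The sub-task share quoted above is simple arithmetic (20 friction points out of the journey's 68 total):

```python
# Check the quoted share: the serial-number/warranty sub-task carries
# 20 of the 68 friction points in the "replace a broken Mac" journey.
subtask_friction = 20
total_friction = 68
print(f"{subtask_friction / total_friction:.0%}")  # prints 29%
```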
47. For the full journey, from trigger to success:
PURE = 68
How good or bad is a 68 PURE score?
49. Doing your taxes vs. posting a cat photo to social media
PURE scores vary based on task complexity, not just design complexity
Even with the absolute best design, one task will have a much higher PURE score.
50. To interpret a friction score, we need a point of reference.
Our approach is to compare the current friction to a target version of each experience.
51. Creating a User Experience Index to compare experiences
1. Identify task
2. Consume existing UX research
3. Task analysis (identify scenarios)
4. Create artifacts to represent workflows
5. PURE analysis for the current (as-is) flow
6. Define the target (to-be) flow
7. PURE analysis for the target (to-be) flow
8. Calculate User Experience Index scores for scenarios
53. A “target” Journey for how Felicia COULD get a replacement for her broken Mac
1. Felicia visits the intranet home page and searches for “broken Mac keyboard.”
2. At the top of the search results, she sees an “IT Support” section that [shows her registered devices].
3. She clicks on the Mac that is experiencing the problem and selects a hardware problem.
4. She selects “keyboard or pointing device” from the list.
5. The system [indicates that the device is out of warranty] and offers to request a replacement.
6. She agrees to receive a replacement refurbished device that she will select later.
7. The system [generates a ticket number] and [opens Devices@IBM with the initial fields pre-filled].
8. She selects the Mac that the system [indicates is the closest match to her current machine].
9. She enters her shipping address.
10. She confirms the information and submits the request.
55. Just enough detail to estimate target complexity and communicate the concepts.
Friction reduced by ~85%
(from 68 to 10)
56. # | Short Description | Current PURE | Target PURE
1 | Software issue on Mac (Outlook not receiving email) | 12 | 11
2 | Software issue on Mac (Problem installing Sketch) | 32 | 28
3 | Hardware issue on PC (keyboard issues) | 64 | 10
4 | Hardware issue on Mac (keyboard issues) | 68 | 10
5 | Software issue on Mac (Outlook not receiving email) | 39 | 30
6 | Password issue – set up 2FA with authenticator | 34 | 25
7 | Password issue – user wants to change w3ID password | 5 | 5
8 | Issue with HR related tool (Travel@IBM questions) | 60 | 42
9 | Hardware issue on Mac (keyboard issues) | 40 | 8
10 | Hardware issue on PC (keyboard issues) | 41 | 8
11 | Modify an existing 2FA method (change phone number for text) | 46 | 30
12 | Hardware issue – user wants to determine if they can get a new one (refresh) | 9 | 5
13 | Ticketing – user wants to create a “web ticket” for a software issue | 16 | 13
14 | Re-open a closed ticket | 5 | 5
15 | Non-entitled software issue on Mac (issue with Balsamiq) | 20 | 16
Current and Target scores are computed for each journey
Need a simple, consumable score for communication and ranking.
58. Calculating UXI Scores

A ratio of target friction to current friction:

UXI = 100 * (Target Friction / Current Friction)

Produces a score of 100 if the current friction matches the target, and asymptotically approaches 0 as the difference increases.
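The formula is easy to express directly in code; the function name below is mine, not from the deck.

```python
def uxi(target_friction, current_friction):
    """UXI = 100 * (target friction / current friction).

    Returns 100 when the current experience already matches the target,
    and approaches 0 as current friction grows beyond the target.
    """
    return 100 * target_friction / current_friction

# The deck's Mac-replacement example: target PURE 10, current PURE 68.
print(round(uxi(10, 68), 1))  # prints 14.7
```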
59. UXI score for replacing a Mac that has a broken keyboard:

UXI = 100 * (Target Friction / Current Friction) = 100 * (10 / 68) = 100 * 0.147 = 14.7

UXI | Label
90-100 | Excellent (A)
80-89 | Good (B)
70-79 | Fair (C)
60-69 | Poor (D)
Below 60 | Failing (F)
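The grade bands above can be captured in a small helper; the band boundaries follow the slide, but the function itself is my sketch.

```python
def uxi_grade(score):
    """Map a UXI score to the report-card label from the slide."""
    if score >= 90:
        return "Excellent (A)"
    if score >= 80:
        return "Good (B)"
    if score >= 70:
        return "Fair (C)"
    if score >= 60:
        return "Poor (D)"
    return "Failing (F)"

print(uxi_grade(14.7))  # prints Failing (F)
```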
60. # | Short Description | Current PURE | Target PURE | UXI Score | Grade
7 | Password issue – user wants to change w3ID password | 5 | 5 | 100 | A
14 | Re-open a closed ticket | 5 | 5 | 100 | A
1 | Software issue on Mac (Outlook not receiving email) | 12 | 11 | 91.7 | A
2 | Software issue on Mac (Problem installing Sketch) | 32 | 28 | 87.5 | B
13 | Ticketing – user wants to create a “web ticket” for a software issue | 16 | 13 | 81.3 | B
15 | Non-entitled software issue on Mac (issue with Balsamiq) | 20 | 16 | 80.0 | B
5 | Software issue on Mac (Outlook not receiving email) | 39 | 30 | 76.9 | C
6 | Password issue – set up 2FA with authenticator | 34 | 25 | 73.5 | C
8 | Issue with HR related tool (Travel@IBM questions) | 60 | 42 | 70.0 | C
11 | Modify an existing 2FA method (change phone number for text) | 46 | 30 | 65.2 | D
12 | Hardware issue – user wants to determine if they can get a new one (refresh) | 9 | 5 | 55.6 | F
9 | Hardware issue on Mac (keyboard issues) | 40 | 8 | 20.0 | F
10 | Hardware issue on PC (keyboard issues) | 41 | 8 | 19.5 | F
3 | Hardware issue on PC (keyboard issues) | 64 | 10 | 15.6 | F
4 | Hardware issue on Mac (keyboard issues) | 68 | 10 | 14.7 | F
UXI score is one factor in prioritization – frequency, criticality and other factors are also considered.
61. UXI Dashboard
Provides a “report card” for the workplace and includes linkages to the applications that make up each workflow.
62. UXI Dashboard (continued)
Ability to view research artifacts associated with the task, and the current data associated with the applications utilized for the task.
63. Benefits of this approach
• Measure end-to-end experiences, not just applications.
• Compare across experiences to drive prioritization.
• Provide quantitative goals and ROI estimates.
• Facilitate cross-team and cross-organization cooperation.
65. An adaptable approach
More than just expert evaluation: UXI can utilize multiple UX metrics, for example:
• Sentiment ratings
• Goal completion
• Error-free rate
• Time-on-task
If a target for the metric can be defined, the UXI score can be calculated from any combination of metrics via a weighted average.
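One way to sketch that weighted-average combination. The direction of each ratio is my assumption, not spelled out in the deck: for "cost" metrics such as friction or time-on-task, lower is better, so the per-metric score is target/actual; for "benefit" metrics such as goal completion or ease ratings, higher is better, so it is actual/target. Function names, weights, and example values are illustrative.

```python
def metric_uxi(actual, target, lower_is_better):
    """Per-metric UXI: 100 * target/actual for cost metrics (friction,
    time-on-task); 100 * actual/target for benefit metrics (goal
    completion, error-free rate, ease ratings)."""
    return 100 * (target / actual if lower_is_better else actual / target)

def combined_uxi(metrics):
    """Weighted average of per-metric UXI scores.
    metrics: iterable of (actual, target, lower_is_better, weight)."""
    total_weight = sum(w for *_, w in metrics)
    return sum(metric_uxi(a, t, lib) * w
               for a, t, lib, w in metrics) / total_weight

# Illustrative: combine an expert PURE score (current 61 vs target 27)
# with an ease-rating goal (41% of responses meet it vs a 90% goal),
# equally weighted.
print(round(combined_uxi([(61, 27, True, 1), (41, 90, False, 1)]), 1))
```

This sketch lands close to, but not exactly on, the deck's later convergence example, which averages its own pre-rounded per-metric scores.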
66. Ease-of-use ratings can be used with, or instead of, PURE ratings
Difficulty ratings were added for 100+ key workplace tasks.
Goal for ease ratings: 90% of responses will be > 4.
67. UXI Scoring of Tasks: Do our methods converge meaningfully?
Compare the derived UXI scores for “getting common accessories”.
Expert (PURE) ratings of complexity: Actual: 61, Target: 27, UXI = 44.3
[Chart: UXI score plotted on a 0-100 scale]
68. UXI Scoring of Tasks: Do our methods converge meaningfully? (continued)
User ratings of difficulty: Actual: 41%, Target: 90%, UXI = 46.0
Final UXI = 45.2
[Chart: both UXI scores plotted on a 0-100 scale]