There's more to learning evaluation than surveys and smile sheets. In this recent webinar, Andrew Downes laid down practical, straightforward advice on how to take your learning evaluation further and measure whether your learning programs are having the impact they were designed to achieve.
Here's the slides!
3. Key takeaways
• Good learning evaluation is more complex than simple surveys.
• A seven-step model (with worksheets) that you can implement.
• Practical advice and case studies you can follow to get started.
7. Evaluation models
01 Kirkpatrick – Four levels of evaluation
02 Kaufman – Five levels of evaluation
03 Phillips – Return on Investment
04 Anderson – Value of learning
05 Brinkerhoff – Success case method
8. Kirkpatrick
1. Reaction – did the training feel useful?
2. Learning – did they learn what they were supposed to?
3. Behavior – did they change how they worked?
4. Results – did the business metric improve?
9. Kirkpatrick
• Useful, well known starting point
• Higher levels are more important
• Lower levels give faster warning of problems
• All four levels are important
10. Kaufman
• Based on Kirkpatrick
• Considers societal and customer consequences
• Splits Kirkpatrick’s Level 1 into:
  • Input – quality of resources
  • Process – delivery of learning experiences
11. Kaufman
• Useful to evaluate learning resources separately from experiences
• Societal/customer impact is usually either too far removed to evaluate or already included in business metrics
12. Phillips
• Adds a fifth level to Kirkpatrick – Return on Investment (ROI)
ROI = ($ benefit of training − cost of training) / cost of training
Costs include:
• Technology costs
• Learning team time
• Time attending/completing training
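To make the formula concrete, here is a minimal worked sketch in Python using entirely hypothetical figures; the cost categories mirror the list above and are not from the webinar.

```python
# Hypothetical figures only -- not from the webinar.
benefit = 250_000               # estimated $ benefit of the training
costs = {
    "technology": 40_000,       # platform and tooling costs
    "learning_team_time": 60_000,
    "attendance_time": 75_000,  # cost of employee time spent in training
}

total_cost = sum(costs.values())
roi = (benefit - total_cost) / total_cost

print(f"Total cost: ${total_cost:,}")   # Total cost: $175,000
print(f"ROI: {roi:.0%}")                # ROI: 43%
```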
13. Phillips
• ROI should underpin the Level 4 business goal.
• Figure out ROI at the start of the project.
• If the ROI is not going to be acceptable, either the
project budget is too high or your business goal is
wrong.
14. Anderson
3-stage cycle
1. Determine current alignment
2. Assess and evaluate learning’s contribution
3. Establish most relevant approaches
• Is a learning program that meets its goals successful if those goals are bad?
• Goals need to be aligned to the organization’s strategic priorities
15. Anderson’s Value of Learning
What metrics do my stakeholders need? Anderson maps them on a 2×2 grid: emphasis on short-term vs. long-term benefits on one axis, and trust in the value of learning vs. a need for evidence of that value on the other.
• Short-term emphasis, trust in the value of learning → learning function effectiveness measures
• Long-term emphasis, trust in the value of learning → return on expectation
• Short-term emphasis, need for evidence → return on investment
• Long-term emphasis, need for evidence → measures against industry benchmarks
16. Anderson
• It’s very important that Learning’s goals support
organizational priorities
• Use the model alongside other methods that evaluate
specific learning programs and experiences.
17. Brinkerhoff
Any learning program, no matter how good or bad, will have good and bad elements.
1. Identify the very best successes and very worst
failures of the program
2. Conduct interviews and other research around those
successes and failures to learn lessons
3. Promote success stories to market the program
18. Brinkerhoff
• Best to use alongside, not in place of, previous
evaluation models
• Use success case model to dig deeper, learn lessons
and shout about successes
20. Step 1
Align
Identify program goals and evaluate
alignment with strategic priorities.
• Define program goals
• Evaluate how closely goals align with strategic
priorities
• Decide whether or not to proceed as planned
22. Step 2
Define
Identify success metrics most
appropriate to the organization.
• Identify reporting stakeholders and metrics that
will meet their needs.
• Define the set of metrics to monitor and
analyze the program
• Expect these metrics to change and reduce in scope during the Discover and Design steps
23. Business goal
Your learning design is mirrored by your evaluation metrics design:
• 4 – Business goal → Was the goal achieved?
• 3 – What do people need to do to achieve that? → Are people doing what they need to do?
• 2 – What do people need to learn to do that? → Are people learning what they need to learn?
• 1 – What program is needed for them to learn that? → Is the program working?
24. Step 3
Discover
Identify what learning is already
happening that supports the program’s
goals.
• Identify formal and informal learning within
your organization that relates to the program’s
goals.
• Evaluate how effective these learning
experiences are.
• Determine the extent to which the learning is
positive (are people learning the right thing?)
25. Step 4
Design
Design how evaluation metrics will be
captured, aggregated, and displayed.
• Evaluate the feasibility of each evaluation metric and finalize your list.
• Design monitoring dashboards and analysis
reports.
26. Step 5
Monitor
Continually monitor success and
progress towards the program goal
• Identify any problems early on.
• Quickly implement fixes for the problems.
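As a toy illustration of what monitoring for early warning can look like (not from the original deck), the sketch below checks a hypothetical weekly completion rate against an agreed threshold and flags weeks that need investigation.

```python
# Hypothetical weekly completion rates pulled from an LMS or LRS report.
weekly_completion_rates = {
    "2024-W01": 0.82,
    "2024-W02": 0.79,
    "2024-W03": 0.61,   # a dip worth investigating early
    "2024-W04": 0.84,
}
THRESHOLD = 0.70        # agreed with stakeholders during the Design step

for week, rate in weekly_completion_rates.items():
    status = "OK" if rate >= THRESHOLD else "INVESTIGATE"
    print(f"{week}: {rate:.0%}  {status}")
```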
27. Step 6
Analyze
Analyze data in detail at key points in
the program
• Determine whether or not program goals were
achieved
• Evaluate the reasons why
• Collect success stories
• Document lessons learned
28. Step 7
Explore
Research further into particularly
successful and unsuccessful elements
• Create detailed case studies of success and
failure
• Promote the program within your organization
and beyond
30. “You can’t eat an elephant all at once”
Start with one or two ‘easy’ data sources to prove the
concept.
31. Level 1: Learning resources and experience
Data about utilization from the LMS, intranet, etc.
• xAPI integration
• xAPI Apps attendance tool
• CSV import
Quality of resources & experience data from surveys
• SurveyGizmo
• Recommended xAPI authoring tool
• Google Forms with Zapier
• CSV import from another survey tool
32. Level 2: What did they learn?
Data from learning assessments
• Recommended xAPI authoring tool
• Recommended xAPI LMS assessment
• In-house app/platform
• CSV import
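Where assessments report over xAPI, the score typically travels in the statement's result block. Below is a minimal sketch with hypothetical IDs; it would be posted to the LRS exactly like the Level 1 example above.

```python
# Hypothetical assessment-result statement; send it to the LRS the same way
# as the Level 1 example above.
assessment_statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/passed",
        "display": {"en-US": "passed"},
    },
    "object": {"id": "https://example.com/activities/ethics-assessment"},  # placeholder
    "result": {
        "score": {"scaled": 0.85, "raw": 17, "max": 20, "min": 0},
        "success": True,
        "completion": True,
    },
}
```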
33. Level 3: Did they change?
Job performance data from surveys
• SurveyGizmo
• Recommended xAPI authoring tool
• Google Forms with Zapier
• CSV import from another survey tool
But: how accurate is job performance survey data? See https://hbr.org/2015/02/most-hr-data-is-bad-data
Job performance data from observations
• xAPI Apps
Integration with real job tools
• xAPI integration
• Zapier integration
• Stand-alone connector
• CSV import
34. Level 4: Business metrics
Data about the business metric the learning is designed to impact
• xAPI integration
• Zapier integration
• Stand-alone connector
• CSV import
Often the data set is small enough that CSV is the most sensible option.
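Since a CSV import is often the pragmatic choice at this level, here is a minimal sketch that reads a hypothetical business-metrics CSV and summarizes it before import; the file name and column names are assumptions, not part of the original deck.

```python
import csv
from collections import defaultdict

# Hypothetical CSV of monthly business metrics, e.g.:
#   month,region,customer_complaints
#   2024-01,North,42
#   2024-01,South,31
totals = defaultdict(int)
with open("business_metrics.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["month"]] += int(row["customer_complaints"])

# Quick sanity check before importing the file into your reporting tool or LRS.
for month, complaints in sorted(totals.items()):
    print(f"{month}: {complaints} complaints")
```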
41. Seven steps
Read the blog & download the
worksheet
• https://www.watershedlrs.com/blog/watersheds-seven-steps-of-learning-evaluation
• https://www.watershedlrs.com/blog/define-metrics-for-learning-evaluation
• https://www.watershedlrs.com/blog/learning-evaluation-whats-happening
• https://www.watershedlrs.com/blog/design-evaluation-metrics-learning-evaluation
• https://www.watershedlrs.com/blog/monitoring-success-learning-evaluation
• https://www.watershedlrs.com/blog/analyze-outcomes-learning-evaluation
• https://www.watershedlrs.com/blog/the-final-step-of-learning-evaluation-explore-the-outcomes
https://www.watershedlrs.com/blog/
Editor's Notes
Correlation does not imply causation
BUT…using Brinkerhoff, identifying correlations can provide insight into where to focus research and build case studies.
Ensure your goals align with strategic priorities
Involve all appropriate stakeholders
Reassess the program if goals do not align
Your metrics should cover:
Usage and quality of content.
Delivery and discoverability of learning experiences.
Improvements in knowledge and competency.
Improvements in job performance.
Overall achievement of the project goals.
You should:
Ask learners to report their own informal learning experiences.
Look at peer and manager observations.
Explore the reasons for changes in performance metrics or variations in the performance of different groups.
You should:
Design your evaluation alongside the design of the program.
Weigh up the cost/benefit of each metric to be captured. Prune the nice-to-haves from the essentials.
Consider both ongoing monitoring and analysis at specific points in time.
You should:
Be able to spot problems quickly from your dashboard.
Launch the program with a small pilot group.
Make changes to the program in response to data.
Keep stakeholders updated.
You should:
Be able to tell the whole story from your reports.
Celebrate and share evidence of success.
Explore and learn from problems and failures.
Implement Brinkerhoff’s Success Case Method to:
Identify particularly successful and unsuccessful cases.
Interview those involved and document their stories.
Promote and market the successes.
Business Problem
AT&T needed to provide effective and engaging compliance training for 243,000 employees across 3,934 job titles. They wanted to identify which training investment produces the most effective outcomes on retention and behavior because:
Compliance training occupied significant employee time.
Improved training programs were costly, and senior management needed proof that the continued investment significantly impacted retention and performance.
Solution
AT&T engaged in a proof-of-concept to test a new approach to compliance and ethics training. They leveraged Watershed LRS and xAPI to examine which training investment produced effective outcomes on retention and behavior. This involved two levels of situational simulations that were randomly assigned to learners who chose to participate. Watershed aggregated data from the simulation, assessment, and training path systems into the LRS. Interaction-level training data was collected and immediately available through a statement viewer. These statements powered top-level dashboards in Watershed that displayed real-time reports of learner engagement and retention.
Outcome
Time Saved:
The ability to monitor learner interaction through Watershed provided insights for real-time course improvements. By updating the Employee Code Course to support mobile deployment and streamlining the experience, they saved 160,380 employee course hours.
Knowledge Improvement:
High-fidelity content resulted in more frequent correct answers during follow-up surveys. Additionally, high-fidelity content kept learners engaged 25% longer than the previous low-fidelity content.
Behavior Improvement:
AT&T was able to track individual responses to questions and realized that, when users responded incorrectly, they overwhelmingly favored the more conservative response. This indicated that the improved interactive simulation encouraged better employee ethics, not just compliance.
How can you leverage technology to identify what’s already happening in your learning program?
An organization that provides training to Credit Unions (Community Banks in the U.S.) already had training and learning in place, but with no knowledge of utilization or how users were interacting with it. Everything was hosted on a custom-built portal instead of an LMS.
Using xAPI enabled technology, they were able to:
Automatically track learning activities without the need for self-reporting
Allow self-reporting for learning activities outside of the custom learning portal
For the individuals:
Shows your HR department or regulators what you’ve learned
For admins:
Helps you track recertification needs without the need for manual record keeping
Immediately transforms your information into meaningful data, pinpointing what your people are doing and where there are gaps