Evaluation models seem to focus heavily on traditional learning and development, where training is done face to face in a classroom-like setting. But with the explosion of social and mobile learning, learning management systems, and other eLearning methods, where does evaluation fit? What if ALL of your training and development is done solely on a computer or mobile device?
I think the answer will lie in metrics and analytics – and in interpreting and translating that data into a formal structure like Kirkpatrick's.
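To make the idea concrete, here is a minimal sketch of what "translating analytics into a Kirkpatrick structure" could look like in practice. Every metric name, value, and mapping below is hypothetical and invented for illustration; a real LMS would expose its own event names, and deciding which metric belongs to which level is the actual design work.

```python
# Hypothetical mapping of raw LMS/mobile-learning analytics onto
# Kirkpatrick's four levels of evaluation. All metric names are invented.
KIRKPATRICK_LEVELS = {
    1: "Reaction",   # e.g. in-app ratings, satisfaction survey scores
    2: "Learning",   # e.g. quiz scores, pre/post-test deltas
    3: "Behavior",   # e.g. on-the-job task completion tracked digitally
    4: "Results",    # e.g. business KPIs tied to the training goal
}

def bucket_metrics(raw_events):
    """Group raw (metric_name, value) pairs into Kirkpatrick levels.

    Which metric maps to which level is a judgment call; this table
    is one illustrative choice, not a standard.
    """
    metric_to_level = {
        "course_rating": 1,
        "post_test_score": 2,
        "task_completion_rate": 3,
        "sales_uplift_pct": 4,
    }
    buckets = {level: [] for level in KIRKPATRICK_LEVELS}
    for name, value in raw_events:
        level = metric_to_level.get(name)
        if level is not None:
            buckets[level].append((name, value))
    return buckets

events = [("course_rating", 4.2), ("post_test_score", 86), ("sales_uplift_pct", 3.1)]
levels = bucket_metrics(events)
print(levels[1])  # reaction-level data
```

The point is not the code itself but the shape of the exercise: the numbers already exist in the platform; the evaluator's job is deciding which bucket each one informs.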
Here are two solid articles that help place evaluation in this growing world of digital learning.
- Evaluating Mobile Learning – 4 Ways to Measure the Effectiveness of Mobile Learning – a good short read with some tips on how to check learning when the user is mobile.
- Meeting the Challenges in Evaluating Mobile Learning: A 3-level Evaluation Framework – a very thought-provoking paper that originally appeared in the International Journal of Mobile and Blended Learning. It outlines six challenges in evaluating mobile learning, but it does go on to ask, "Has Anyone Learned Anything?"
There seems to be a lot of confusion over which model is best. I would argue it depends on your organization's culture, business model, and strategic planning process… and who controls the purse strings.
ROE (return on expectations), I feel, can be a tough sell to many parts of the business, but from a Learning Experience Design point of view it is more learner-centric, as the focus is on expectations and performance rather than concrete dollars and cents.
This interesting SlideShare presentation illustrates both ROI and ROE through a case study.
There is a lot out there on the four levels of evaluation. I find this infographic from http://elearninginfographics.com/kirkpatricks-levels-evaluation-infographic/ best represents Kirkpatrick's New World Model; it also covers Phillips' Level 5 and offers suggestions on evaluation strategies that could be implemented.
ADDIE is the standard, yes, but I am not a fan of the model. I prefer Hibbits and Travin (a new find), but I designed in SAM/Agile for years – it seems to support the IT world better, and it was the standard project flow for the entire business.
Yes, ADDIE can support digital learning, but I agree with Tony Bates when he says,
“Another criticism is that while the five stages are reasonably well described in most descriptions of the model, it does not provide guidance on how to make decisions within that framework. For instance, it does not provide guidelines or procedures for deciding how to choose between different technologies, or what assessment strategies to use. Instructors have to go beyond the ADDIE framework to make these decisions.”
However, his infographic below is the best I have seen for ADDIE in digital learning.
I was given a copy of this book recently and have devoured it! It walks through the brain science of why we forget – mostly due to bad study habits – and how to recall information: how to Make It Stick. Approachable, relatable examples are used to present hard data; I was drawn in by the real-life examples.
It did change my mind on testing. I was not a fan of testing for testing's sake; however, the book makes some solid arguments that targeted testing has a strong impact on the brain and can aid recall.
A great read!
This is a little gem of a website: https://magic.piktochart.com/embed/1993702-evaluation-models
It has some great little diagrams for Kirkpatrick, Scriven, and Brinkerhoff, but I like the Phillips one best – it helped cement the ROI process in my mind as a path to follow, with specific metrics to gather:
Anderson's model of evaluation is very high-level, or 'big picture', and targets the organization as a whole. This is critical if you want to foster a culture of evaluation in L&D: by targeting the whole organization, you can create a cycle of evaluation.
So how do you measure an entire organization? This chart is an organized method for determining which metrics and measurements are needed to guide decision making on evaluation for an organization. It is again a high-level overview, but I think it could be applied departmentally as well.
As I was looking for some tips specific to workplace learning, I came across this article: http://evaluationfocus.com/formative-summative-assessment-an-explanation/
A tip on how to remember the distinction between the two:
Formative = "formative informs". "The idea is that I am informed about how the trainee is performing at various stages of the training… The aim of formative assessment is to improve the amount of learning that occurs."
Summative = "summative is a summary". "The idea is that it summarises how the trainee has performed at the end of the training… The aim of summative assessment is to prove that learning has occurred."
The addition of ROI at the very top highlights the importance of this model for all areas of the business. Many L&D departments are given shoestring budgets, but a proper evaluation structure that can feed into and help calculate an ROI in dollar terms can help market L&D to a leadership team.
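Once benefits have been monetized, the ROI step itself is simple arithmetic. A minimal sketch using the standard Phillips ROI formula, ROI % = ((benefits − costs) ÷ costs) × 100; the dollar figures below are hypothetical, chosen only to show the calculation:

```python
def phillips_roi(monetized_benefits_usd, program_costs_usd):
    """Phillips ROI as a percentage: ((benefits - costs) / costs) * 100."""
    return (monetized_benefits_usd - program_costs_usd) / program_costs_usd * 100

# Hypothetical example: a program costing $50,000 whose evaluated,
# monetized benefits total $80,000.
roi = phillips_roi(80_000, 50_000)
print(f"{roi:.0f}%")  # 60%
```

The hard part is never this division; it is the evaluation work upstream that turns learning outcomes into a defensible benefits figure, which is exactly what the model is meant to structure.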