Just so you know… I’m thinking of the traditional view of Instructional Evaluation – the 4 Levels – and the other slight variation – the 5 Levels – splitting hairs IMO – but my focus here is on what’s often labeled as Level 3: Transfer.
And I believe that you can adapt this for any Performance Improvement Intervention as well.
It’s the only way to Level 4/5: Results/ROI. And it does depend on Level 2: Mastery. It may or may not have anything to do with Level 1: Reaction.
So – let’s first begin with the end in mind – on our way to a focus on Transfer.
Level 4/5: Results/ROI
In 1979 I was taught that Level 4 – Results – was ROI.
What else could it be – when your Instructional paradigm is focused on Performance – as in Performance Improvement using what some call Human Performance Technology, Human Performance Improvement, or just Performance Improvement?
And then I learned of other Performance Improvement Interventions, such as Strategic Planning, Business Planning, Product Planning, OD, HR, Six Sigma, Lean, and Theory of Constraints – and of the measurement of Results via Financial Metrics and Key Performance Indicators. Any and all of that should lead to ROI downstream, because however you measure results, it should fit into an ROI framework – so that opportunities to invest – or spend – can be compared against some common yardstick. Like Common Core.
Not because it’s perfect. But because it levels the playing field for predicting expenses and returns.
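To make the “common yardstick” idea concrete, here is a minimal sketch of the classic ROI arithmetic – with made-up numbers, not drawn from any real intervention – comparing two hypothetical investment options on the same basis:

```python
def roi_percent(gross_benefit, cost):
    """Classic ROI formula: (benefits minus costs) / costs, as a percent."""
    return (gross_benefit - cost) / cost * 100.0

# Hypothetical figures for two competing interventions (illustration only)
training_program = roi_percent(gross_benefit=250_000, cost=100_000)  # 150.0
process_redesign = roi_percent(gross_benefit=180_000, cost=60_000)   # 200.0

# The "common yardstick" at work: the cheaper intervention wins on ROI here,
# even though its absolute payoff is smaller.
print(f"Training program: {training_program:.0f}% ROI")
print(f"Process redesign: {process_redesign:.0f}% ROI")
```

The point is not the formula itself but the comparability: once everything is expressed as return per dollar spent, very different interventions can be weighed against one another.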
Level 4 results are first owned by the people deciding the targets for investments to be made – based on ROI or pure hunches – and secondly by whoever owns the Response to that targeting. Sometimes that is the Training/ Learning/ Knowledge Management group, and sometimes it is elsewhere.
See my writings on Governance & Advisory Systems – 12 O’Clock – in the book cover below. It’s a free book as a PDF – or you can purchase it as a paperback too.
Level 3: Transfer
Level 3 is Transfer. Did what got taught/learned then make its way back to the Performance Context?
Sadly, most often not. There are many reasons for this IMO and from the Research. But my focus here will be Authenticity.
I’ve seen so many cases – in my 32 years as a Consultant – where the developers or designers had no clue whatsoever about the Learners’ Applications of their Instruction/ Learning Content and Modes. None. Zero.
They just had some nifty objectives… or some outline… or some cool storyboard… but they did not know how to define Performance Competence.
You know… back at the ranch. Applying what you’ve learned, mastered in some Learning thing or string of things.
Level 2: Mastery
If the measures of this are Knowledge-focused versus Performance-focused, well, then, stop and fix that. Unless of course this is one of many Modules in a Modular Curriculum or Development Path and you need to look at a bigger picture than just one piece of the pie.
But sadly, most Learning Content is a set of seemingly random chunks, large or small, with gaps and overlaps and inconsistencies in language and models that confuse the Learners. They are collections of content, not configurations of content. They have not been architected.
They have not been rationalized and prioritized, with deliberate decisions about what will be Formal and what will be Informal – and about how Learning will be reinforced artificially when/because it won’t be reinforced automatically – on the job – quickly enough, before the Forgetting Curve wins out.
Maybe because it just wasn’t that important in the final analysis. To really ensure Mastery and then Transfer.
If it is important – focus on Transfer by focusing on authentic Performance for Mastery.
Level 1: Reaction
Most evaluations at this level are worthless – because they can be misleading.
The Century 21 story from Roger Chevalier is a prime indictment – er, example. The worst-rated Instructor out of 100 in a Level 1 evaluation actually had the best results when you looked at the measures of the students later – back on the job.
And what actually Transferred.
A link to that story is – here.
So if you have to have a Level 1 Evaluation – focus part of it on expectation of actual Transfer to the job – by focusing on expected Performance Competence after Transfer.
The Fulcrum of Transfer
If I learn it – as proven by your end-of-Learning measurements – but it does not transfer, then you need to look at why it did not transfer.
And fix that.
My guess is that your content wasn’t authentic enough and did not teach the Learners something that exactly – or close enough – resembled their Performance Context.
So I would suggest that if you have a Transfer problem – look first at your authenticity, then at the amount of application practice with feedback, and finally at issues within the broad category of which media/mode was used.
But to understand Transfer issues – one needs to look at the authentic Performance Context – and understand how to measure Performance Competence. IMO.
Adapting For Other Interventions
If your Solution-set did NOT fix the problem by eliminating – or minimizing – the Problem’s Root Causes – then you should look first at its appropriateness from a “point of Transfer” viewpoint.
Did the new Software or Hardware or Reward system actually fit smoothly into the Performance Context? Or was the retrofit bashed in with the proverbial hammer?
Did it have any unintended (unanticipated) consequences – when it attempted to Transfer? Was the Transfer attempted but then dropped because it just didn’t work smoothly enough and everyone reverted to Past Practices or something closer to that than the ideal Future State’s Practices that were targeted?
Focus on Transfer by focusing on “did it enable authentic Performance Competence?” – as measured with equal authenticity in both the Mastery measures and then in the Transfer measures.
# # #