Either way – for either the Kirkpatrick Taxonomy or the Phillips Taxonomy – flip it – as they are both backwards. In my view anyway.
There is a need to flip the thinking about and usage of these instructional evaluation taxonomies.
Where they could be used to pull in a diagnostic manner, drilling down with an ever-increasing focus, they are too often pushed in a manner that doesn’t provide a clear focus for exactly what to look at next.
Whereas they could lead to a more logical inspection of better data as one proceeds through the numbered taxonomy, they currently lead to broad guesses about what to look at next when followed in the all-too-prevalent numbered sequence of levels.
And don’t even get me started on applications of these evaluation taxonomies on a course/module by course/module basis. How ridiculous! Well, I guess I just started myself…let me digress…
If I am participating in a highly modularized curriculum/learning path, and I learn “enablers” and get “evaluated incrementally” at Levels 1-4/5 for enabling skills such as Active Listening, Phone Etiquette, Our Customer Satisfaction Goals and Measures, Basic Phone Sales Skills, Intermediate Phone Sales Skills, and the 87 different bundles/configurations of our 27 product and service offerings – then I will have been evaluated to an extreme level of overkill.
We should wait until the learners are through a significant amount of learning before applying expensive Level 1-4/5 evaluations. Which of course is why most Enterprises stop after Level 1 – the cheapest thing to do. But also the most problematic.
Research has found cases where the Instructors with the highest Level 1 scores had the poorest-performing students, and where the Instructors with the worst Level 1 scores had students who outperformed their peers once in the field. Go figure? Or is that somewhat understandable when thought through?
Back to flipping the taxonomies.
The Prevalent Approach
This is the way most folks in the ISD/ID business (aka: Learning) think about and approach Evaluation (that E in ADDIE)…
1- Did the Learners like the learning experience? Also Known as: Level 1.
2- Did they learn what we intended them to learn regarding their authentic performance requirements? Also Known as: Level 2.
3- Did they start using that on the job and then did they continue to use it? Also Known as: Level 3.
4/5- Did it make a difference to one or more important business metrics, and was it ultimately worth the investment of Enterprise time and money? Also Known as: Level 4 and Level 5 (Phillips).
The PACT Processes Preferred Approach
1- Did it make a difference to one or more important business metrics, and was it worth the investment of Enterprise time and money?
If it made enough of a difference on important business metrics, and those who pay the piper – and therefore call the tune about evaluations – say that’s enough, stop here with any post-deployment/access evaluations…but…
If not…why not?…
2- Did they start using that on the job and then did they continue to use it?
If it transferred to the job, was reinforced well enough by the environment and bosses and co-workers, and continued to be used as planned (adopted or adapted), stop here with any post-deployment/access evaluations…but…
If not…why not?…
3- Did they learn what we intended them to learn regarding their authentic performance requirements?
If they mastered well enough in the Learning Event itself – or Events themselves – the specific awareness, knowledge and skills intended by the design and driven by the analysis of on-the-job performance competence requirements, stop here with any post-deployment/access evaluations…but…
If not…why not?…
4/5- Did the Learners like the learning experience?
If the data shows that the Learners liked the Learning Event(s), stop here with any post-deployment/access evaluations…but…
If not…why not?…
Determine the root cause or causes for the Learners’ dislike of the Learning Event(s) – and address as appropriate.
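For those who like to see a process expressed as simple logic, here is a minimal sketch of that flipped drill-down, assuming hypothetical yes/no results at each level. The names (EvalData, flipped_drill_down) and the boolean judgments are illustrative assumptions only – not part of the PACT Processes or any published evaluation instrument.

```python
from dataclasses import dataclass

@dataclass
class EvalData:
    """Hypothetical yes/no results at each level (illustrative only)."""
    business_impact_ok: bool  # Levels 4/5: metrics moved, worth the investment
    transfer_ok: bool         # Level 3: used on the job, reinforced, still in use
    learning_ok: bool         # Level 2: intended mastery achieved
    reaction_ok: bool         # Level 1: Learners liked the Learning Event(s)

def flipped_drill_down(data: EvalData) -> str:
    """Work backwards: stop as soon as a level answers 'yes'; only dig deeper on 'no'."""
    # 1 - Start with Levels 4/5: did it move the metrics and pay for itself?
    if data.business_impact_ok:
        return "Stop at step 1: it made enough of a difference."
    # If not... why not? 2 - Level 3: did it transfer and continue to be used?
    if data.transfer_ok:
        return "Stop at step 2: it transferred and continued to be used as planned."
    # If not... why not? 3 - Level 2: did they master what was intended?
    if data.learning_ok:
        return "Stop at step 3: the intended mastery was achieved."
    # If not... why not? 4/5 - Level 1: did they like the Learning Event(s)?
    if data.reaction_ok:
        return "Stop at step 4/5: the Learners liked the Learning Event(s)."
    return "Determine the root cause(s) of the Learners' dislike and address as appropriate."

# Example: learning and transfer happened, but the business metrics did not move.
print(flipped_drill_down(EvalData(False, True, True, True)))
```

The point of the sketch is simply that each deeper level is only inspected when the level above it came up short – which is the opposite of running every learner through every level of evaluation by default.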
Don’t Over-React to Reaction Data!
However, the fact that I disliked many of the Learning Events from my 16-week+ boot camp experience in the US Navy back in the day (starting one sunny morning on December 27, 1972) wasn’t an appropriate reason for those who ran such institutions to change anything.
You see – they were probably looking at this from that preferred perspective – and stopped themselves short of over-reacting to the likes and dislikes of their learner populations. They had stopped after #1 and #2 – in the preferred taxonomy/approach.
Even I could look back, once on my ship, and see the logic of what was covered that I would not have understood until I was actually on board (too late) – and why – including “stowing your clothes away properly at all times” – you know, so they don’t clog any pumps that may be required to move sea water from inside the ship back outside the ship, to keep the whole thing afloat. And other Navy Reasons.
Good sailing to you in your Instructional Evaluation efforts!
# # #