For me “Instructional” analysis – or “Performance” analysis has always been much more than an analysis of the gaps.
Gaps? “Compared to what?” I would ask.
To me it’s always been about – 1st, understand ideal performance – and not some blue-sky utopian view, but something actually being accomplished somewhere by someone. I frame that in terms of Performance Competence. That’s Mastery Performance. Then I determine whether those approaches can be, and should be, taught to others.
2nd – understand the gaps from that ideal, and understand the “whys” behind those gaps.
3rd – understand the enabling Knowledge/Skills.
And – how to avoid the dreaded analysis paralysis that seems to bog down most analysis efforts. And yeah – how to make the cycle time and touch time, the burden on all involved, and the costs more predictable.
The two key data-sets of PACT (my ISD methodology/process-set) and EPPI (my HPT methodology/process-set) are Performance Models and Enabler Matrices. They respectively capture the model of mastery performance, and the enablers of that mastery.
The Performance Model and the Enabler Matrices are two linked sets of data, produced for the current-state view by current Master Performers, who have proven that high performance levels are attainable. They can also be produced for a future-state view.
These are just 2 of my 4 typical types of analysis – shown here in the context of a Curriculum Architecture Design effort (one of the 3 PACT Processes)…
The Performance Modeling effort documents the performance requirements of the performers within the scope of the intended project and creates Performance Models. The Performance Model is the device used to capture ideal performance requirements; it also documents identified gaps from that ideal performance and their probable causes.
The Performance Model has two components: 1) AoPs (Areas of Performance), which are the “segments” of overall performance; and 2) Performance Model Charts, which capture the data details for each AoP segment.
Performance Models may be developed for an organization, a function, a job, a task, or a process.
The information in a Performance Model includes a segmentation of overall performance into “Areas of Performance/AoP” segments, plus AoP details regarding the expectations for outputs and their measures and standards, the tasks per output, and the roles and responsibilities per task, for all of the performers involved. That ideal, mastery performance is documented on the left side of the Performance Model chart. It is then used to facilitate a structured and systematic gap analysis on the right side.
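The structure described above – AoP segments, each with outputs, their measures and standards, tasks per output, and roles per task, plus the gap analysis alongside the ideal – can be sketched as a simple data model. This is a minimal, hypothetical illustration; the class names, fields, and example entries are my assumptions, not the PACT methodology’s own templates.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    roles: list  # roles and responsibilities per task

@dataclass
class Output:
    name: str
    measures: list   # e.g., timeliness, accuracy
    standards: list  # the standard per measure
    tasks: list = field(default_factory=list)  # tasks per output

@dataclass
class Gap:
    description: str      # identified gap from ideal performance
    probable_cause: str   # the probable "why" behind the gap

@dataclass
class AreaOfPerformance:
    name: str  # an AoP "segment" of overall performance
    outputs: list = field(default_factory=list)  # left side: ideal/mastery performance
    gaps: list = field(default_factory=list)     # right side: gap analysis

# Hypothetical example: one AoP segment for an illustrative support job
aop = AreaOfPerformance(
    name="Handling Customer Inquiries",
    outputs=[Output(
        name="Resolved inquiry record",
        measures=["resolution time", "accuracy"],
        standards=["< 24 hours", "zero rework"],
        tasks=[Task("Log and triage the inquiry", roles=["Agent"])],
    )],
    gaps=[Gap("Resolution time averages 72 hours",
              probable_cause="Missing access to order-history system")],
)

print(aop.name, len(aop.outputs), len(aop.gaps))
```

The point of the sketch is only that the ideal (outputs, tasks, roles) and the gap data travel together in one linked record per AoP segment.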
Note the outputs and measures columns – and ask yourself: where do these measures come from, and what do they focus on? I focus on process and product (task and output) measures – and on the Stakeholders, more than just, but including, the customers. This is where it can get tricky – and it’s important to understand these perspectives nonetheless. What would you teach if the Stakeholder Requirements were in conflict? Hint: Master Performers know which supersede which.
The entire Performance Model data-set, with the ideal and gap analysis data, is then used in the specification and analysis of both the human enablers, and the environmental enablers that are required to enable mastery performance.
The Enabler Matrices document the human and environmental asset enablers required for mastery performance.
Human Asset Requirements Enabler Analysis is where the requirements for the human assets are determined via a systematic review of the documented mastery performance outputs and tasks. Human asset categories are:
- Awareness, Knowledge, Skill
- Physical attributes
- Intellectual attributes
- Psychological attributes
- Personal Values
Environmental Asset Requirements Enabler Analysis is where the requirements for all non-human assets are determined – again, via a systematic review of the documented mastery performance – and captured by environmental asset category.
The systematic review of the Performance Model charts’ data drives the generation of the various enablers, by the categories above and their sub-categories. For example, I use 17 sub-categories for the analysis of the “Awareness, Knowledge, Skill” category.
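The review described above – walking each documented task and recording its enablers under the category structure – can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the task data, and the example enabler entries are hypothetical, and the 17 “Awareness, Knowledge, Skill” sub-categories are not modeled.

```python
# Human asset categories as listed in the text
HUMAN_ASSET_CATEGORIES = [
    "Awareness, Knowledge, Skill",
    "Physical attributes",
    "Intellectual attributes",
    "Psychological attributes",
    "Personal Values",
]

def build_enabler_matrix(tasks):
    """Systematically review each task and collect its enablers by category.

    `tasks` maps a task description to {category: [enabler, ...]}.
    Each matrix row links an enabler back to the task it supports.
    """
    matrix = {category: [] for category in HUMAN_ASSET_CATEGORIES}
    for task, enablers in tasks.items():
        for category, items in enablers.items():
            for item in items:
                matrix[category].append((task, item))
    return matrix

# Hypothetical review data from a Performance Model chart
tasks = {
    "Log and triage the inquiry": {
        "Awareness, Knowledge, Skill": ["Order-management system navigation"],
        "Psychological attributes": ["Tolerance for frequent interruption"],
    },
}

matrix = build_enabler_matrix(tasks)
print(len(matrix["Awareness, Knowledge, Skill"]))
```

The key property the sketch preserves is traceability: every enabler in the matrix points back to the mastery-performance task that requires it.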
This captured data later facilitates additional analyses, such as validating complex interpersonal behaviors, root-cause problem solving, or assessing whether the enterprise entities in the value chains are adequate to ensure that the right human and environmental asset systems are in place, at the right point in the process, at the right times, to achieve and sustain peak performance.
Other Potential Analyses Required
While it is my claim that the two sets of data in the Performance Models and Enabler Matrices are at the heart of analysis for any improvement effort, there are other analyses, such as financial, competitive, marketplace, Strengths-Weaknesses-Opportunities-Threats, legal, ethical, benchmarking, process mapping, activity based costing, and so on that may also be necessary at times. At other times, they may not be necessary at all.
For the first published view of this methodology, see the November 1984 article from the NSPI (now ISPI) PI Journal.
# # #