I believe that Instructional Analysis can and should occur in every phase in the development of performance-based Instruction for High Stakes performance.
Long Story Long
In early 1981 at the firm where I did Instructional Systems Design (ISD) work, someone suggested that we create Design Documents so detailed and tight that a Developer could generate all the Instruction without any additional input.
The goal, it appeared, was that the Developer would not need to interact with anyone at all to complete their assignment.
“As if they were in a closet,” I thought to myself at the time, resisting the notion.
At that point, I had been in the ISD field for two years and I had been involved in project planning, analysis, design, development, and implementation efforts for several dozen ISD efforts. To hand a document to a developer who didn’t get to conduct any additional data gathering just didn’t sit right with me.
This approach, of completing all of one phase before moving to the next, is often called a Waterfall model.
I felt strongly that there was no way during the analysis phase that I could have captured everything needed for a comprehensive final product and have reflected that in the design.
I even said so in response to those suggesting we use this Waterfall approach. I went as far as to cheekily say that we should just create the content and skip the design effort if it had to be that complete before the hand-off!
I saw it like this: in every phase of the process, I learned additional information that added to the overall result. Data gathering, essentially, was more analysis. It started in the project planning efforts with what some might have called analysis, though most people didn’t do Project Planning at all, and did a poor, partial job of Analysis, IMO/IMX.
And in my design efforts, I gathered more new information that was the result of analysis. At this point, the new information was targeted on specific aspects of the job tasks, rather than every task and output. Some tasks fell by the wayside, left to Informal Learning and no longer needing to be addressed. Next, during development efforts, we obtained additional details and nuances from our sources. In fact, it seemed that the most details came during development efforts. And when we did the initial deployments in our Pilot Testing efforts? Once again, we learned additional facts that were added in during the updates.
Analysis, then, seen as a single-shot effort was a silly notion. “One and then you are done” was more than just a bit misleading. My experience was that analysis happened throughout the entire undertaking. The same might be said of design efforts, but that’s perhaps another book for another time.
In 1982 I joined a small consulting firm and became an ISD consultant. I began to structure and formalize my ISD methods to develop Project Plans for clients using fixed fee or time and expense pricing as well as to build the consulting staff that would plan and conduct those projects. The last thing I needed was my team working as an artist colony, with everyone doing their own thing, their own way. I wanted more of an engineering approach.
So, as a new ISD Consultant in 1982, I started framing my ISD methods (I have 3) to include both Project Planning & Kick-Off as well as Analysis.
My 3 Levels of ISD…
Here is my ADDIE-like approach: MCD. Note: IAD is similarly structured.
By 1983, I was being asked to help develop ISD staff for several of our clients. They had seen our approach in action on their projects and they wanted to go fast and produce performance-based Instruction. They saw my Analysis Approach as the key. It grounded our efforts immediately in a performance orientation. And some observed that we kept adding to that performance orientation throughout the effort.
Note – I must again acknowledge that the Performance Analysis approach that I first learned in 1979 was at that time a derivative of a derivative of a Geary A. Rummler approach (or so I was told by people who had worked with his brother) to Instructional Analysis.
My intent with my recent (November 2020) book, is to show ISDers and their management how Analysis can (and should) be a part of every step or phase in a systematic approach to Instructional Development.
This book at Amazon: https://amzn.to/36swZ8f
It’s possible to read that book and think it reflects an approach to ISD that is simply overkill-in-the-extreme.
I get it.
However, I believe that too many in the ISD field embrace a philosophy that it is appropriate to meet every uncovered need with Learning Content, regardless of its Value compared to its Costs.
In turn, I would proffer that when we too willingly convert cash into content that might create awareness or knowledge but doesn’t go the last mile to authentic application, it is most often a wasted investment, depending on the prior knowledge of the learners/Performers. We then squander shareholder equity, act as negligent stewards of that equity, and generate nil or negative returns on its investment.
My ISD philosophy in an Enterprise Learning Context is performance centric. If we do not address and improve the Performance Competence of the Target Audience, then why bother? When I created the graphic below, I decided to borrow Tom Gilbert’s word “competence” from Human Competence, a book I was given on Day 1 in my Training Developer job back in 1979.
I could have used other alternatives, such as Capability. Or Capacity. But I wished to link to Gilbert’s thoughts about worthy performance and worthy outputs. So, I borrowed many of his concepts and that word, Competence.
In addition to the people already mentioned, many others were my initial influences, shaping the foundation of both my philosophies and practices in ISD, including the late Geary A. Rummler, Joe H. Harless, and Robert F. Mager. May they all rest in peace. I was privileged to know and learn from them both directly and indirectly as I started up the learning curve in the ISD world.
I worked with Geary Rummler when I was at Motorola (he was my consultant on my projects – meaning I carried his pencils) as well as on initiatives with him through the International Society for Performance Improvement (ISPI). It was through ISPI (then NSPI) that I met Geary, Joe Harless, and Bob Mager in April 1980.
And there have been many others whose ideas I have borrowed and used—adopted or adapted—from the worlds of ISD and Total Quality Management (TQM).
For example, I have borrowed many ideas regarding data relationships (parent-child/originals-derivatives) that come from Material Requirements Planning (MRP), Manufacturing Resource Planning (MRP II), and Enterprise Resource Planning (ERP).
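As a hypothetical sketch of that MRP-style parent-child idea applied to Instructional content (the class and function names here are my own illustration, not part of MRP or of any system I describe), each derivative module can carry a traceable link back to its original:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentModule:
    """A modular chunk of Instructional content (e.g., an Event, Lesson, or Activity)."""
    name: str
    parent: Optional["ContentModule"] = None   # the original this module was derived from
    modified: bool = False                     # reused As Is (False) or After Modification (True)
    children: List["ContentModule"] = field(default_factory=list)

def derive(original: ContentModule, name: str, modified: bool) -> ContentModule:
    """Create a derivative module, preserving the parent-child link for traceability."""
    child = ContentModule(name=name, parent=original, modified=modified)
    original.children.append(child)
    return child

# One master lesson reused in two curricula: once As Is, once After Modification
master = ContentModule("Lockout/Tagout Lesson - Master")
as_is = derive(master, "Lockout/Tagout - Plant A Curriculum", modified=False)
adapted = derive(master, "Lockout/Tagout - Plant B Curriculum", modified=True)
```

The point of the parent-child links is the same as in an MRP bill of materials: when an original changes, you can find every derivative that may need updating.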
Many of my ideas regarding modular content and reuse of existing content (As Is, or After Modification) come from the Product Management, Engineering, and Manufacturing worlds I was exposed to early in my career.
They sure didn’t come from my studies at Kansas for my Radio/TV/Film degree. My detailed-planning bias did come from there, however, as I hated when fellow students wasted my free time because they couldn’t plan ANYTHING!
But I digress.
Let’s explore ISD and the Waterfall concept a bit.
The Waterfall approach to development is often framed and represented as a set of linear, sequential phases. Each phase begins with what the last phase dropped on it, or tossed over some imaginary wall, to be caught and then run with. Think of a waterfall as it deposits water into a pool at its base. In this process, the output from each prior phase falls over the edge to form the beginning of the next phase. This repeats as the water continues down the mountain to some endpoint, usually an ocean.
A Waterfall effort, in an ISD analogy, would be to create the design, then hand it off to a Developer who would seek out a closet, with no access to anyone, in which to develop the Instruction, and then hand that off to the next phase for delivery/access.
In my view, true Waterfall approaches hand outputs to downstream phases, or stages, of the overall process, and have a total disconnect from the prior efforts. Nothing new is ever added to the previous outputs as they simply inform as is, and then restrict as they guide all downstream efforts.
While ISD or other development efforts can be run like that, most are not, even if the model looks as if it does. The notion that ISD projects follow a Waterfall process is often used as a proverbial strawman in attacks on ISD.
In my 41 years as an ISDer, I’ve seen the Waterfall complaint used to attack ISD efforts as if that were the reason for ISD’s woes. In my view, it’s the lack of a performance orientation that is the cause of most of ISD’s problems. And that lack of a performance orientation is due to poor practices in Analysis.
Analysis Paralysis is a term that came to be used in ISD (I believe) because our analysis efforts took too dang long, resulted in very little value-add, and often produced long lists of tasks (without outputs) that may have had face validity but resulted in little impact back-on-the-job. So, why bother?
I conduct Analysis in each and every phase of my ISD methods for performance-based Instructional Content development—which I have branded and written extensively about as Modular Curriculum Development/Acquisition (MCD).
MCD is an ADDIE-like methodology-set. It produces Instruction, including Job Aids and Training.
As a preface to reviewing them, while these approaches have never reflected a Waterfall approach, I appreciate that the graphics I use undoubtedly could be read as such. Graphical representations of process flows are limited in how they can illustrate iterative data additions without becoming quite messy. I won’t attempt to show that messiness with a graphic.
Not that I don’t “layer” new content – or update it. I do. This next graphic shows both in Development – when my first draft (Alpha version) is reviewed and updated to a second draft (Beta version) and reviewed and updated to a third draft (the Pilot Test version). Your language might vary. Or – your Practices. These are mine – used since 1982.
The next graphic depicts the six phases of MCD, which facilitates project planning and management efforts. It certainly isn’t a “design model.” The upside-down traffic lights, also known as Stop Lights, represent Project Steering Team (PST) Gate Review Meetings (GRM), where the effort and data generated to that point are reviewed and then approved, modified, or rejected.
To me, they are Go Lights. That’s why they are upside-down. These provide a way to ensure the outcome is about worthy performance that matters to the customer and stakeholders.
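For readers who think in code, the three possible GRM outcomes described above can be sketched as a tiny decision routine. This is a minimal illustration only; the class and function names are my own, not part of MCD:

```python
from enum import Enum

class GateDecision(Enum):
    """Possible outcomes of a Project Steering Team (PST) Gate Review Meeting (GRM)."""
    APPROVE = "approve"    # the Go Light: proceed with the data as presented
    MODIFY = "modify"      # proceed, but only after the PST's changes are applied
    REJECT = "reject"      # rework the phase outputs before continuing

def next_step(decision: GateDecision, phase: str) -> str:
    """Route a project based on the PST's gate decision (illustrative only)."""
    if decision is GateDecision.APPROVE:
        return f"Proceed past {phase} with outputs as presented"
    if decision is GateDecision.MODIFY:
        return f"Apply PST modifications, then proceed past {phase}"
    return f"Rework {phase} outputs and reconvene the PST"

print(next_step(GateDecision.MODIFY, "Analysis"))
```

The key design point is that only the PST, not the ISD team, holds the authority encoded in that routine.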
The GRMs give my client and their clients the Command & Control mechanism they want—and the Empowerment mechanism I both want and need. They also allow me to pose Business Decisions I come across to an authority group at critical points in the process, so stakeholders can make the Business Decisions that are inherent in almost every ISD effort of consequence. Those often involve tricky trade-offs, business trade-offs.
I need the client and stakeholders to own the effort. After all, it is for them, and they live with the consequences of the success or failure of the initiative. They need to resource the project with the right people, to review and understand the deliverables (outputs) produced along the way BEFORE I use them in the next Phases. Or, to understand and modify them, if needed.
Or, to reject them, because I and their handpicked Master Performers and Other Subject Matter Experts somehow got it wrong. And it needs to be gotten right before continuing.
I also need to be empowered to do their bidding. And for me to operate going forward in alignment with the PST’s business decisions that are always inherent in an ISD effort addressing High Stakes Performance.
For example, I need their sanctioning, or redirection when I declaratively state to them in the GRM in the Analysis Phase that, “I am going to take everything that is typically put into a Group-Paced approach, and make it Self-Paced so that your management and their learner/Performers can get it when they need it, and not have to wait until the schedule and seat availability offers it.”
I may get questioned about that statement. It’s happened a lot over the past 40 years, as it is a declaration that I have been making in almost every project I’ve done since 1982. The PST either buys it, nodding their heads up and down, or they challenge me as to why. And then they nod their heads up and down after I respond to their questions as to “why.”
This is important as I’ve always been opposed to ISDers making Business Decisions in any ISD effort. I am also opposed to ISD folks targeting their own efforts, and deciding what is important and what is not. The establishment of and work for a PST resolves both issues.
The decisions to create Instruction—standalone Job Aids for guidance in the workflow, and Training for memorization and to hone critical skills through practice with feedback—are (or should be) business decisions themselves. They should be produced only when warranted by the potential impact on key business metrics, or in support of Critical Business Issues (CBIs) as deemed appropriate by Enterprise Leadership. In over forty years in the ISD field, I’ve never seen anything good come from an ISD staff running amok and targeting their own efforts when they are disconnected from their enterprise leadership.
I have completed other books, articles, presentations, and several dozen Blog Posts that address the formal alignment needed between ISD functions and Enterprise Leadership, via a Governance & Advisory System. That is beyond the scope of this Blog Post. It is, however, an important key to success, in my opinion.
A PST, as I present it here, is at the third level in the Governance & Advisory System I use. The first two levels—Advisory Councils and a Governance Board—are permanent; the PST is a temporary entity that disbands once the project ends.
I have seen too many Instructional efforts that were focused on Topics rather than Tasks & Outputs; Topics with Face Validity—but not with Performance Impact Validity.
I can well imagine clients who, when asked to sign off on these Topic-Centric efforts, concluded that the audience certainly needed to learn the listed topics. What they didn’t know to look for and demand was how each topic would inform critical tasks. Nor did they know where and how the content would address the application of those tasks to produce outputs that met the stakeholders’ requirements.
I learned from many, including Thomas F. Gilbert’s 1978 book Human Competence, to avoid the “great cult of behavior” and instead to focus on Worthy Outputs. And I’ve watched too many of my colleagues, the vast majority it seems at times, focus on Topics rather than Tasks & Outputs.
And, so, across the discipline of ISD, Topic-Centric Content with little-to-no potential impact is often developed and deployed, which results in nil or negative Return on Investment (ROI). Unless, of course, the Target Audience had enough prior knowledge from their education and experience that that Topic was all they needed to now perform adequately.
It sometimes seems that bridging from Topics to Tasks & Outputs is a bridge-too-far, for far-too-many ISDers. This book and several of my prior books provide an approach to adopt or adapt as needed to remedy that lack of a Performance Orientation. And of course, there are many other books that also help ISDers take a Performance View.
MCD – 6 Phases & 25 Sub-Phases
MCD’s six phases are further segmented into the following sub-phases, which allows for better project planning and management efforts.
Note: I’ve written two books previously on MCD: Lean-ISD in 1999 and Conducting Performance-based Modular Curriculum Development in 2011. They fill in the gaps for the tasks in the sub-phases in the graphic above. This book touches on many aspects of MCD but is primarily focused on Analysis in and beyond each of those six phases.
Analysis efforts occur in each phase of an MCD project. So – NOT a WATERFALL approach.
Fer sure, dude.
The Analysis effort begins during the Intake Process. That’s when we start to focus on terminal performance competence, even if that requires attempting to shift a request for Instruction with a topic focus to one with a task focus. This involves interviews with the requestor and enough of the key stakeholders to ensure that the goal and constraints are determined and then reflected in the Project Plan.
Analysis continues in the Project Planning Effort, as the PACT Project Planner/Manager determines what to adopt and what to adapt from the standard Project Plan templates to draft a Project Plan. Next, there is more Analysis in the PST Kick-Off & GRM. The PACT Project Planner/Manager, and perhaps the PACT Analyst, runs the gauntlet, so to speak, presenting the draft Project Plan and answering any questions and challenges from the PST.
The Analysis data anchors the eventual Instruction back to the authentic performance requirements from back-on-the-job. Its source, then, is both critical and political. Because of this, I always ask the PST to handpick the Master Performers and Other Subject Matter Experts for the Analysis Team and the Design Team.
I also prefer to use a Facilitated Group Process (FGP) to conduct the Analysis and Design efforts. The other option is the more traditional approach, with interviews of individuals and small groups, observations, and document reviews.
Formal Analysis begins in the Analysis Team Meeting. This generates the Target Audience data, the Performance and Gap data, the Enabling K/S data, and data regarding the Assessment of Existing Training content for its reuse potential. Analysis continues in the PST GRM when all the Analysis data is reviewed before use in the next phase, Design, and is possibly modified by the PST.
Analysis occurs during the Design Team Meeting. Here, the Analysis data is processed, sorted, and sequenced into the three levels of design templates for modular T&D Events, and the modular T&D Lessons, and finally, the modular Instructional Activities. The Analysis efforts continue in the PST GRM if and when that team approves or modifies the Design data.
Analysis also occurs during the Development Team efforts, or when a single Developer starts the development of the Instruction, as outlined in the Event, Lesson, and Activity designs. Additional Analysis occurs during the Alpha Test, when others complete a quick review of Version 1—the Alpha Version—to catch any obvious errors before the review of Version 2, the Beta Version.
The Pilot Test Version, Version 3, is created unless the Lesson has been previously designated during the Design phase as a “Lesson From Hades.”
Lessons From Hades?
If so designated, those Lessons might be planned and managed to go through four or five versions and updates to prepare for the Pilot Test. Lessons From Hades are like that.
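As a hypothetical sketch of that versioning plan (the function name and version labels are my own framing, assuming the five-version figure for a Lesson From Hades), the planned draft sequence might be expressed as:

```python
def planned_versions(is_lesson_from_hades: bool, hades_versions: int = 5) -> list:
    """Return the planned draft sequence for a Lesson before its Pilot Test.

    Standard Lessons go Alpha (v1) -> Beta (v2) -> Pilot Test version (v3).
    A "Lesson From Hades" is planned for extra review/update cycles.
    """
    standard = ["Alpha (v1)", "Beta (v2)"]
    if is_lesson_from_hades:
        # Insert extra drafts between the Beta and the Pilot Test version
        extra = [f"Extra draft (v{n})" for n in range(3, hades_versions)]
        return standard + extra + [f"Pilot Test version (v{hades_versions})"]
    return standard + ["Pilot Test version (v3)"]

print(planned_versions(False))  # three planned versions
print(planned_versions(True))   # five planned versions for a Lesson From Hades
```

Flagging such Lessons during Design, rather than discovering them during Development, is what lets the extra cycles be planned and budgeted rather than absorbed as schedule slip.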
Analysis continues during formal Pilot Test efforts—which I believe should always be a FULL DESTRUCTIVE TEST. I’m yelling about that because that is a critical effort in my experience to ensure both transfer back-to-the-job and a Performance Impact.
Analysis occurs in the PST GRM of the Pilot Test results and the review of Revision Recommendations. That’s where the client and key stakeholders review and approve/modify the post-Pilot Revision Recommendations and create Revision Specifications. These guide the efforts in the next, final phase of an MCD effort, unless additional Pilot Testing is deemed necessary before release of the Instructional Content to the Instructional Access/Deployment systems and processes.
Analysis, yet again, occurs during the Revision efforts before Release to the Deployment or Access (Push or Pull) systems and processes, as any issues regarding content accuracy, completeness, and appropriateness, discovered earlier, are addressed. Analysis continues in the ISD Project Team Lessons Learned/Postmortems if that is part of the ISD systems and processes.
Future maintenance efforts require insights into what previous decisions were made and why as guidance for any future updates, as needed, as signaled by the Enterprise systems in place to do so.
Note: Appendix C includes all MCD effort tasks.
Hand-offs occur between phases, as well as within phases. Carefully planned hand-offs are both effective and efficient; they should be conducted both at the end of a phase and for the beginning of the next phase.
The “throw the data over the wall” and then “let the next group run with it” kind of hand-off seems a pure Waterfall approach, in my opinion. I prefer to use the following two types of hand-off mechanisms:
- Kick-Off Meetings—with everyone
- Briefings—for individuals and small sub-teams
I use these when I have employed a divide-and-conquer strategy for the various roles across an ISD effort so that the development effort may be accomplished by multiple people, working in parallel, and not in series. Either one ISDer does it all, or the ISD development roles and responsibilities are divided between two or more people.
Sometimes, downstream players overlap with earlier phases and tasks; this is particularly useful when the performance is Complex and High Stakes. For example, the designated Designer may attend the Analysis Team Meeting and the PST GRM where the Analysis data is approved.
When the Instructional content is tricky and High Stakes, and there is a rush to get the Instruction ready for learner/Performer Access and Deployment, the Lead Developer may participate in the Design Phase and possibly also in the earlier Analysis Phase. Of course, the Lead Developer could have performed in the Designer and the Analyst roles as well.
The person with the assigned project role (but not necessarily the job title) of Project Planner/Manager has the responsibility to facilitate the hand-offs. They are the continuity—the glue—from one phase to the next in the six-phase MCD framework.
The other roles—or hats as I sometimes refer to them—include those in the next graphic:
One person can wear more than one hat or even wear all hats.
As noted, Briefings and Kick-Off Meetings are key to facilitate hand-offs.
Generally, in the way I’ve laid out the tasks within each phase and sub-phase of an MCD effort, the preparation for the hand-off Briefing is done at the end of one phase. The Briefing itself, if needed, is conducted at the beginning of the next phase. Which, of course, might seem entirely arbitrary.
The reason is so the Briefing materials are prepared as soon as possible after the PST GRM. This helps ensure any nuances are captured and reflected in the Briefing content before they slide away and down the Forgetting Curve.
And then the Briefing happens, sooner or later, per the planned schedule.
The PST GRMs are where critical business decisions are made. They are also where modifications are made to whatever data and plans are presented, including the:
- Project Plans and schedules.
- Analysis data.
- Design data.
- Pilot Test Revision Recommendations data.
And so on.
Again, it’s important to capture and transmit those plans and data, and the nuances surrounding those plans and data, using some reliable mechanism before it all slides down the very steep Forgetting Curve. Prepare the needed Briefing as soon as possible.
Assign the Analysts the task of taking copious notes during the Analysis Phase GRM as the Project Planner/Manager facilitates the meeting. The Project Planner/Manager, then, takes copious notes if and when the Analyst facilitates a portion of the GRM. A sort of Tag Team, as it were. These two collectively have the insights as to what needs to be Briefed to the PST and to any new players coming on to the team effort.
Use Kick-Off Meetings whenever a new group comes into a project, as well as at the beginning of every key effort.
Employ a front-end Kick-Off element to larger meetings, such as the Analysis Team or Development Team Meeting, to calibrate everyone to the overall game plan, the outputs, tasks, and schedule, and to communicate the prior decisions of the PST from upstream in the effort.
The prize is Performance Competence for the learner/Performers, their management, leadership, and the shareholders.
This includes sustained performance as people, who are the performers, come and go, and as new-to-the-job folks who need performance-based Instruction for OnBoarding or OnGoing development take their place.
Performance-based ISD is all about ensuring a focus on the performance that is essential to the Enterprise.
Performance-based ISD is all about enabling Performance Competence Back-on-the-Job.
That’s the prize at stake.
If you’re not going for that, WHY BOTHER?