One of my pet peeves – for decades now – is the strawman argument that some people create when bashing an Instructional Systems Design approach to Instructional Development – an ADDIE-like approach – by calling it Linear, or a Waterfall Approach.
Some then defend an ADDIE-like approach by calling it Iterative versus Linear.
I would call it Additive versus Iterative.
Let’s explore ISD/ADDIE and the Waterfall concept a bit.
The Waterfall approach to development is often framed and represented as a set of linear, sequential phases. Each phase begins with what the last phase dropped on it, or tossed over some imaginary wall, to be caught and then run with. Think of a waterfall as it deposits water into a pool at its base. In this process, the output from each prior phase falls over the edge to form the beginning of the next phase. This repeats as the water continues down the mountain to some endpoint, usually an ocean.
The ISD analogy for a Waterfall effort would be to create the design, then hand it off to a developer who would seek out a closet with no access to anyone in which to develop the Instruction, and then hand that to the next phase for delivery/access.
In my view, true Waterfall approaches hand outputs to downstream phases, or stages, of the overall process, and have a total disconnect from the prior efforts. Nothing new is ever added to the previous outputs; they simply inform as-is, and then restrict, as they guide all downstream efforts.
While ISD or other development efforts can be run like that, most are not, even if the model looks as if it does. Again, the notion that ISD projects follow a Waterfall process is often used as a proverbial strawman in attacks on ISD.
In my 43 years as an ISDer, I’ve seen the Waterfall complaint used to attack ISD efforts as if that were the reason for ISD’s woes. In my view, it’s the lack of a performance orientation that is the cause of most of ISD’s problems. And that lack of a performance orientation is due to poor practices in Analysis.
Analysis Paralysis is a term that came about because analysis efforts took too long, delivered very little value, and often produced long lists of tasks (without outputs) that may have had face validity but resulted in little impact back-on-the-job.
My goal is to convey in this post how I conduct Analysis in each and every phase of my ISD methods for performance-based Instructional Content development—which I have branded and written extensively about as Modular Curriculum Development/Acquisition (MCD) – and am now referring to as Instructional Development.
Note 1: my overall set of ISD methods is branded as PACT. In 2002 I brought that brand to EPPIC when I went solo. It had been my brand since the late 1980s at Svenson & Wallace, Inc. (SWI), and at CADDI Inc. before EPPIC Inc.
MCD is Becoming ID…
MCD is an ADDIE-like methodology set. It produces Instruction, including (take your pick): Job Aids & Training – or Performance Support & Learning Experiences – or Resources & Courses.
Note 2: Again, I am beginning to shift my language/labels a bit and in more recent books am referring to the MCD of old as ID – Instructional Development.
Same diff, as we used to say back in the day.
As a preface to previewing these methods: while they have never reflected a Waterfall approach, I appreciate that the graphics I use could undoubtedly be read as such.
Graphical representations of process flows are limited in how they can illustrate iterative data additions without becoming quite messy. I won’t attempt to show that messiness with a graphic.
The Analysis effort begins during the Intake Process. That’s when we start to focus on terminal performance competence, even if that requires shifting a request for Instruction with a topic focus to one with a task focus.
This involves interviews with the requestor and enough of the key stakeholders to ensure that the goal and constraints are determined and then reflected in the Project Plan.
Analysis continues in the Project Planning Effort, as the PACT Project Planner/Manager determines what to adopt and what to adapt from the standard Project Plan templates and drafts a Project Plan.
Next, there is more Analysis in the Project Steering Team Kick-Off & Gate Review Meeting. The PACT Project Planner/Manager, and perhaps the PACT Analyst, runs the gauntlet, so to speak, presenting the scope of the effort and a draft Project Plan, and answering any and all questions and challenges from the Project Steering Team.
The Analysis data anchors the eventual Instruction back to the authentic performance requirements from back-on-the-job. Its source, then, is both critical and political. Because of this, I always ask the Project Steering Team to handpick the Master Performers and Other Subject Matter Experts for the Analysis Team and the Design Team.
I also prefer to use a Facilitated Group Process to conduct the Analysis and Design efforts. The other option is the more traditional approach, with interviews of individuals and small groups, performance observations, and document reviews.
Formal Analysis begins in the Analysis Team Meeting.
This generates the Target Audience data, the Performance and Gap data, the Enabling K/S data, and data regarding the Assessment of Existing Training content for its reuse potential.
Analysis continues in the Analysis Team Gate Review Meeting, when all the Analysis data is reviewed before use in the next phase, Design, and is possibly modified by the Project Steering Team.
Analysis occurs during the Design Team Meeting. Here, the Analysis data is processed, sorted, and sequenced into the three levels of design templates for modular T&D Events, modular T&D Lessons, and finally, modular Instructional Activities. The Analysis efforts can continue in the Project Steering Team Gate Review Meeting where that team approves or modifies the Design data.
Analysis also occurs during the Development Team efforts, or when a single Developer starts the development of the Instruction, as outlined in the Event, Lesson, and Activity designs. Additional Analysis occurs during the Alpha Test when others complete a quick review of Version 1—the Alpha Version—to catch any obvious errors before the next test of Version 2 during the Beta Version.
The Pilot Test Version, Version 3, is created unless the Lesson has been previously designated during the Design phase as a “Lesson from Hades.” If so, it might be planned and managed to go through four or five versions and updates to prepare for the Pilot Test.
Analysis continues during formal Pilot Test efforts—which I believe should always be a FULL DESTRUCTIVE TEST. I’m yelling about that because, in my experience, it is a critical effort to ensure both transfer back-to-the-job and a Performance Impact.
Analysis occurs in the Project Steering Team Gate Review Meeting of the Pilot Test results and the review of Revision Recommendations. That’s where the client and key stakeholders review and approve/modify the post-Pilot Revision Recommendations and create Revision Specifications. These guide the efforts in the next, final phase of an MCD effort, unless additional Pilot Testing is deemed necessary before release of the Instructional Content to the Instructional Access/Deployment systems and processes.
Analysis, yet again, occurs during the Revision efforts before Release to the Deployment or Access (Push or Pull) systems and processes, as any issues regarding content accuracy, completeness, and appropriateness, discovered earlier, are addressed. Analysis continues in the ISD Project Team Lessons Learned/Postmortems if that is part of the ISD systems and processes.
Future maintenance efforts require insight into what decisions were made previously, and why, as guidance for any future updates signaled by the Enterprise systems in place to do so.
Hand-offs occur between phases, as well as within phases. Carefully planned hand-offs are both effective and efficient; they should be conducted both at the end of a phase and at the beginning of the next phase.
The “throw the data over the wall” and then “let the next group run with it” kind of hand-off seems a pure Waterfall approach, in my opinion. I prefer to use the following two types of hand-off mechanisms:
- Kick-Off Meetings—with everyone
- Briefings—for individuals and small sub-teams
I use these when I have employed a divide-and-conquer strategy for the various roles across an ISD effort so that the development effort may be accomplished by multiple people, working in parallel, and not in series. Either one ISDer does it all, or the ISD development roles and responsibilities are divided between two or more people.
Sometimes, downstream players overlap with earlier phases and tasks; this is particularly useful when the performance is Complex and High Stakes. For example, the designated Designer may attend the Analysis Team Meeting and the Project Steering Team Gate Review Meeting at which the Analysis data is approved.
When the Instructional content is tricky and High Stakes, and there is a rush to get the Instruction ready for learner/Performer Access and Deployment, the Lead Developer may participate in the Design Phase and possibly also in the earlier Analysis Phase. Of course, the Lead Developer could have performed in the Designer and the Analyst roles as well.
Note 3: The content above was culled and adapted from an earlier version of my 2020 book…
See all 27 of my books on my Amazon Authors Page: https://www.amazon.com/-/e/B08JQC4C4V