Note: not every chunk of content needs to be pilot tested.
As always – it depends.
It Depends On…
The Rewards at Risk … or … the Risks at Risk.
And – as one chunk of instruction may be part of a greater whole set of instruction, ranging the spectrum from most formal to most informal – the Pilot-Test data need to be examined within that larger context, using data that represent it. The ultimate Performance Competence goal may be larger than this one chunk, or string of chunks, of Instruction/Learning.
Anyway … My general ADDIE-like model:
Your model may differ.
Note my Phase 5 – above – and the key outputs.
One phase of my approach to ISD is the Pilot Test Phase.
Instructors, facilitators, and administrators prepare the materials – and themselves – to conduct the pilot test. My version of ISD, via an ADDIE-like process, is shown in the diagram above.
Pilot-Testing is conducted in Phase 5 in both Modular Curriculum Development and Instructional Activity Development, levels 2 and 3 in my 3-level approach to ISD.
As part of the Pilot-Testing preparation, the learning experience is described to managers of pilot participants—especially if they have a role to play before, during, and/or afterwards.
The managers are provided information and/or instruction for any activities that must be completed to ensure successful transfer of knowledge and skills to the actual job, thus ensuring PERFORMANCE COMPETENCE.
Before the Pilot-Test Phase – Back in Development/Acquisition
Alpha and Beta Testing – Most products, training or not, are tested during their development.
The first round of formal testing is generally called alpha testing. The second round of organized testing, on the more finished product, is called beta testing. Beta testing is what ISD professionals sometimes call Pilot Testing. But – as there is little consistency in how ISD is conducted in actual practice across enterprises – check local practice, whether formal or informal, before your adoption/adaptation.
I feel that training developers should also consider whether or not to perform some Alpha Testing during this phase – internal and informal developmental tests, or more formal alpha tests – as they see fit. See Risk & Reward above.
For example, it’s usually worthwhile to try out exercises to ensure that instructions are complete, that learners have enough information to answer questions, that exercises are not too difficult or not too simple, and so forth.
However, some of the time the structure of the content—and the way it’s expressed—is rather arbitrary; one approach will work just as well as another.
Be aware that if you ask for opinions on content and expression during a developmental test, you will surely get those opinions, along with the consequent rework (and potential schedule slippage).
Unless you feel there are substantive issues on which you would like interim feedback, it may be better to let the pilot test in Phase 5 give you the feedback you want and need.
I suggest that, back in Phase 4, you subscribe to the realistic notion that during the Pilot-Test you will deploy imperfection and then continuously improve – rather than deferring deployment for perfection.
That continuous improvement is what Phase 5 is all about.
But still, don’t use Pilot-Testing to deploy junk (a technical term).
I always expected my developers to produce Pilot-Test ready instruction that should require less than a 10% fix post-Pilot-Test in my Phase 6.
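That "less than a 10% fix" expectation can be made concrete. Here is a minimal sketch, assuming rework is tracked per module and that the threshold is applied to the module count – both are my illustrative assumptions, not a formal part of any ISD model:

```python
# Hypothetical sketch: checking a pilot-tested course against a
# "less than 10% fix" readiness expectation. The per-module tracking
# and the 10% default are illustrative assumptions only.

def fix_percentage(total_modules: int, modules_needing_rework: int) -> float:
    """Return the share (in percent) of modules flagged for post-pilot rework."""
    if total_modules <= 0:
        raise ValueError("total_modules must be positive")
    return modules_needing_rework / total_modules * 100


def met_readiness_target(total_modules: int, modules_needing_rework: int,
                         threshold: float = 10.0) -> bool:
    """True if the pilot-ready content beat the rework threshold."""
    return fix_percentage(total_modules, modules_needing_rework) < threshold
```

For example, 1 of 20 modules flagged for rework is a 5% fix – within the target; 2 of 20 is exactly 10% – and misses it.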
Overview of Pilot Testing
Description – In Phase 5 of Modular Curriculum Development, the Learning/Training content is delivered during a pilot test, and extensive evaluations are conducted – as warranted.
Key Activities/Tasks – Project activities in this phase include preparing for the pilot delivery (conducting train-the-trainer sessions, as appropriate); conducting the pilot test; evaluating the results of the pilot test; documenting the evaluations; and developing revision recommendations for the Project Steering Team.
The Project Steering Team turns the recommendations into revision specifications.
Key Outputs – The outputs of this phase include
► The Pilot-Test Report
► A Project Steering Team Gate Review Meeting presentation
Tasks for MCD Phase 5 – Pilot Test
The tasks of Phase 5 for Modular Curriculum Development are organized into five subphases.
MCD Subphase 5.1 – Prepilot
In this subphase, the project team prepares for the pilot test. Preparations include coordinating logistics, producing materials, coordinating the personnel required for the pilot, setting up the pilot-test location, and doing final readiness checks.
MCD Subphase 5.2 – Pilot Deployment
During Subphase 5.2, pilot testing is conducted in circumstances that replicate how the T&D will be deployed once it’s ready for general release to the marketplace. Basically, the project team conducts the pilot test, coordinates the resolution of issues that arise, and conducts evaluations on the T&D being pilot-tested. Tasks in this subphase will vary depending on the chosen deployment platform and media and need to be adjusted accordingly by the project manager.
MCD Subphase 5.3 – Postpilot Revision Recommendations
From this subphase comes a draft of the revision recommendations of the project’s ISD professionals, based on a thorough review of the data collected during the pilot test. These revision recommendations are later reviewed and processed by the Project Steering Team.
MCD Subphase 5.4 – Pilot Phase Gate Review
In this subphase, the last formal meeting is held with the Project Steering Team for the Modular Curriculum Development effort. The Project Steering Team receives an overview of the phase along with the ISD Team’s revision recommendations.
The Project Steering Team’s decisions and reactions to the recommendations evolve into a set of revision specifications for use in MCD’s sixth phase, Revision & Release.
MCD Subphase 5.5 – Post-Gate Review
In this subphase, the project manager distributes the revision specifications and obtains sign-offs on the progress completed during the phase.
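The five subphases above run in a fixed order. As a sketch only – the data structure and helper are my own illustration, not part of MCD itself – a project manager might track them like this:

```python
# Illustrative sketch: the five MCD Phase 5 subphases from the text,
# modeled as an ordered checklist. Not part of any formal MCD toolset.

MCD_PHASE_5_SUBPHASES = [
    ("5.1", "Prepilot",
     "Logistics, materials, personnel, site setup, readiness checks"),
    ("5.2", "Pilot Deployment",
     "Conduct the pilot test and its evaluations"),
    ("5.3", "Postpilot Revision Recommendations",
     "Draft recommendations from the pilot-test data"),
    ("5.4", "Pilot Phase Gate Review",
     "Present recommendations to the Project Steering Team"),
    ("5.5", "Post-Gate Review",
     "Distribute revision specifications; obtain sign-offs"),
]


def next_subphase(completed: set) -> str:
    """Return the id of the first subphase not yet completed, else None."""
    for sub_id, _name, _description in MCD_PHASE_5_SUBPHASES:
        if sub_id not in completed:
            return sub_id
    return None
```

The point of the ordering: Pilot Deployment cannot start before Prepilot is done, and the Gate Review depends on the drafted recommendations.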
The Pilot-Test Deployment Team
The purpose of the Pilot-Test Deployment Team is to conduct a pilot test following the initial development of the T&D. The Pilot-Test Deployment Team includes instructors, facilitators, and/or administrators who conduct the pilot session. The types of roles depend on the type of deployment and media being used.
The Pilot-Test Deployment Team is used in both levels 2 and 3 of my 3-level approach to ISD: level 2 MCD—Modular Curriculum Development/Acquisition, and level 3 IAD—Instructional Activity Development/Acquisition.
ISD Team members coordinate all logistics for facilities, equipment, media, food and beverages, invitations, and confirmations for the attendees. They also deliver the instruction or oversee the instructional delivery/participation for the purposes of pilot testing. Finally, they conduct written and verbal evaluations and debriefings to gather feedback for revision purposes.
Along with the roles of facilitators and instructors, another role is crucial for the conduct of a pilot test: the role of the pilot-test participants. Participants attend and evaluate the initial delivery of the T&D for the purpose of generating evaluations and revision recommendations.
The Project Steering Team considers these evaluations and recommendations. They own all final decisions.
The Pilot-Test Team
Pilot-test participants are ideally handpicked by the Project Steering Team to create a balance between
► Target audience representatives
► Management representatives
Target audience representatives are from the pool of eventual learners who will participate in the T&D after the pilot. They are used to measure the amount of learning that occurs.
Management representatives (a.k.a. management spies) are handpicked by the Project Steering Team to participate in the test. They are used to determine whether the right “learnings” are being taught – and learned to an acceptable level.
Right as in appropriate and authentic – not “mickey mouse,” “close but no cigar,” or “sounds like ….”
In combination, the two perspectives give the ISD Team the right data to determine what happened well and what did not. And what to improve.
And whether those improvements indict some earlier phase in the process.
Lessons Learned and then doing something about them.
The mark of a mature L&D outfit IMO.
Conducting the Pilot-Test
During the pilot-test session, written and verbal evaluations are collected and debriefings are conducted, hopefully with minimal impact to the “flow” of the instruction.
After the Pilot-Test session or sessions, the evaluation feedback is assessed by the ISDers to create “revision recommendations” for consideration by the Project Steering Team.
The Project Steering Team in their final “Gate Review Meeting” may accept, modify, or reject the revision recommendations. See the graphic below.
The final results constitute the “revision specifications” used in the final phase of an ADDIE-level ISD project:
Revision & Release.
After Phase 5 the PST is out of the project and the Training/Learning professionals finish up the project … unless members of the PST are involved in the Release (Roll Out) activities. Sometimes the Release is a big deal – other times not.
The types of evaluations I use in Pilot-Testing come from this family of evaluations:
► End-of-Lesson Written Evaluations and Performance Tests – used after every one, two, or three lessons
► End-of-Day Debriefings – done at the end of each day or Pilot-Test period
► End-of-Event Written Evaluations and Performance Tests – used after every T&D Event
► End-of-Event Debriefings – done after every T&D Event
Note that if there are Coaches or Facilitators/Instructors involved – they have their own versions of the above.
Of course how you implement the above evaluations will depend on the delivery platforms you must use in your Pilot-Test.
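Whatever the platform, the feedback from those evaluation types eventually has to be rolled up for the revision recommendations. A minimal sketch – the type labels mirror the list above, but the field name and tallying approach are my illustrative assumptions:

```python
# A minimal sketch of tallying pilot-test evaluation feedback by type.
# The four type labels mirror the evaluation family above; the "type"
# field and the use of Counter are illustrative assumptions only.
from collections import Counter

EVAL_TYPES = (
    "end-of-lesson",
    "end-of-day-debrief",
    "end-of-event",
    "end-of-event-debrief",
)


def tally_feedback(items: list) -> Counter:
    """Count feedback items per evaluation type, ignoring unknown types."""
    return Counter(item["type"] for item in items if item["type"] in EVAL_TYPES)
```

A tally like this tells the ISDers where the feedback is concentrated before they draft revision recommendations for the Project Steering Team.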
The best evaluations are centered on specific performance competence—the ability to perform specific tasks to produce specific outputs to specific stakeholder requirements. That may require more than self-assessment. Or not (for now).
They – the tests of Performance Competence – should have high fidelity and reliability.
In other words – get real.
Welding isn’t best tested via a paper and pencil test or a multiple choice test on a tablet.
Authenticity is key.
Of the Tasks, the Outputs and the Requirements.
What else could be important?
Typically – that’s not simple for high Risk and high Reward areas of Performance Competence.
Summary & Close
The purpose of the Pilot-Test is to conduct a “full destructive test” of the Learning Content – following the initial development of that T&D content … when that makes sense from a Risk and Reward perspective.
The full destructive test occurs before the Learning Experience/Content is finally updated and then released – “pushed to” and/or made available to be “pulled by” the various target audiences … Push and Pull.
Your models and language may vary.
Of course, ongoing evaluations may occur after every ongoing delivery, or much less often – immediately or over time – again depending on the Risks and Rewards associated with the Performance Competence being addressed by the Instruction and Information content.
Risks and Rewards should almost always be considered in most decision making in L&D – IMO. Even after your version of ADDIE, SAT, or SAM, MCD, etc.
Knowing how to do that – and how to communicate it, both the status and what to do about it, in local Risk & Reward terms – is a personal best practice, IMO.
# # #