Is training the solution?
When it comes to designing e-learning courses, the very first question to ask is whether training is actually the right solution. Identifying the gap between desired performance and actual performance is critical at this stage.
Work with the client to set realistic, clear, and measurable goals. Then conduct a job/task analysis to gain a clear understanding of the components of competent performance. Ask exactly what learners are expected to do to deliver the expected performance, and why they are not doing it. Is the poor performance caused by motivational or environmental factors, or by a lack of knowledge and skills? Cathy Moore’s flowchart is a great resource for identifying the best solution to a performance problem.
Needless to say, if poor performance is caused by lack of motivation or issues with the environment, designing training will not usually produce value. In such cases, besides addressing the root causes, providing job aids can often improve performance without expending effort and resources on training that is not needed.
When unsatisfactory performance is caused by a lack of knowledge, ask whether learners need to hold the information in memory. If they do, training is likely needed; if they can look it up at the moment of need, a simple job aid may solve the problem.
When training is determined to be the right solution, the next step is to choose an appropriate design model. The design process followed here is based on the Dick and Carey systematic design model, as shown below.
First, set the instructional goal and conduct an instructional analysis. A clear goal statement should answer three main questions:
- Who is the audience?
- What will the learners be able to do in the performance context?
- What tools and resources will the learners have at their disposal in the performance context?
Instructional/Task Analysis
If you have already conducted an instructional/task analysis to identify what learners are expected to do to deliver ideal performance, you can move on to learner and context analysis. If not, first conduct the instructional/task analysis, including goal analysis and subordinate skills analysis. You may come up with a table like this one:
Learner and Context Analysis
To conduct learner analysis, collect two categories of information about the learners: general characteristics and characteristics directly related to the instructional goal(s). To conduct context analysis, collect information about the performance setting (the real-world setting in which the learner will perform the tasks being taught) and the learning setting (the setting in which the learner acquires the skills). To facilitate efficient transfer of learning, it is very important that the performance setting and the learning setting be as similar as possible.
You can collect information about learners and context through interviews and surveys, or by observing learners in the actual performance context.
Information Categories for Learner Analysis

General characteristics:
- Education and ability level

Characteristics directly related to the instructional goal(s):
- Attitude toward content
- Attitude toward potential delivery system
- Motivation for instruction (ARCS)
- General learning preferences
- Attitude toward training organization
Information Categories for Context Analysis

Performance setting:
- Physical aspects of site
- Social aspects of site
- Relevance of skills to workplace

Learning setting:
- Number/nature of sites
- Site compatibility with instructional needs
- Site compatibility with learner needs
- Feasibility for simulating workplace
Performance objectives serve two purposes: they guide the writing of assessment items, and they communicate what will be learned from the unit of instruction. A clear performance objective has three components: Behaviour, Condition, and Criteria.
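The three components above can be made concrete with a small sketch. The objective text, class name, and example wording below are all hypothetical illustrations, not taken from any particular course.

```python
from dataclasses import dataclass


@dataclass
class PerformanceObjective:
    """One performance objective, split into its three components."""
    condition: str  # the circumstances and tools given to the learner
    behaviour: str  # the observable action the learner will perform
    criteria: str   # how acceptable performance is judged

    def render(self) -> str:
        """Join the components into a single objective statement."""
        return f"{self.condition}, {self.behaviour}, {self.criteria}."


# Hypothetical example objective:
obj = PerformanceObjective(
    condition="Given a customer complaint email and the refund policy document",
    behaviour="the learner will draft a reply that applies the policy",
    criteria="covering all applicable policy conditions with no factual errors",
)
print(obj.render())
```

Splitting the statement into fields like this makes it easy to spot an objective that is missing a condition or a criterion before it is used to write assessment items.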
The purpose of developing criterion-referenced assessment items is to determine how thoroughly a student has mastered a specific skill or objective. It is therefore critical that the assessment items be aligned with the objectives. There are three types of assessment items:
- Pretest items to determine if learners already possess some of the skills that are to be taught
- Posttest items to determine if learners have achieved the objectives
- Practice items to determine if students are acquiring the skills and knowledge being taught
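The alignment requirement above can be expressed as a simple coverage check: every objective should have at least one item of each type. This is a minimal sketch; the item data, IDs, and helper name are hypothetical.

```python
from collections import defaultdict

ITEM_TYPES = {"pretest", "posttest", "practice"}


def find_coverage_gaps(items):
    """items: list of (objective_id, item_type) pairs.

    Returns a dict mapping each objective that is missing item types
    to the sorted list of types it still needs.
    """
    covered = defaultdict(set)
    for objective_id, item_type in items:
        covered[objective_id].add(item_type)
    return {obj: sorted(ITEM_TYPES - types)
            for obj, types in covered.items()
            if ITEM_TYPES - types}


items = [
    ("obj-1", "pretest"), ("obj-1", "posttest"), ("obj-1", "practice"),
    ("obj-2", "posttest"),  # obj-2 lacks pretest and practice items
]
print(find_coverage_gaps(items))  # {'obj-2': ['practice', 'pretest']}
```

Running a check like this over an item bank flags objectives that would otherwise go untested or unpracticed.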
Next, develop an instructional strategy: decide what information, examples, practice, and feedback to use for each performance objective. Gagne’s Nine Events of Instruction are a useful framework for developing an effective instructional strategy.
Gagne’s Nine Events of Instruction, The Conditions of Learning (1965)
- Gain attention
- Inform learners of objectives
- Stimulate recall of prior learning
- Present the content
- Provide “learning guidance”
- Elicit performance (practice)
- Provide feedback
- Assess performance
- Enhance retention and transfer to the job
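The nine events above can double as a storyboarding checklist. The sketch below (with hypothetical activity descriptions) flags which events a draft lesson does not yet address.

```python
# The event names mirror Gagne's Nine Events of Instruction listed above.
GAGNE_EVENTS = [
    "Gain attention",
    "Inform learners of objectives",
    "Stimulate recall of prior learning",
    "Present the content",
    "Provide learning guidance",
    "Elicit performance (practice)",
    "Provide feedback",
    "Assess performance",
    "Enhance retention and transfer to the job",
]


def missing_events(storyboard):
    """Return the events a lesson storyboard does not yet address."""
    return [event for event in GAGNE_EVENTS if event not in storyboard]


# Hypothetical draft storyboard, mapping events to planned activities:
storyboard = {
    "Gain attention": "Open with a short scenario video",
    "Present the content": "Walk through the refund policy rules",
    "Elicit performance (practice)": "Branching scenario exercise",
}
print(missing_events(storyboard))
```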
Before developing instructional materials, consider whether existing materials could be adapted to meet the defined needs; if not, you may need to develop original materials from scratch. Chances are the content you need is already out there, and you can save a lot of time and budget if you know where to look (see: 6 types of learning content). The choice of design features, including visual features and writing style, depends heavily on the outcome of the learner analysis conducted earlier in the design cycle.
It is critical to make sure that everything supports the ultimate goal set at the beginning of the design process.
Conducting formative evaluation throughout the process helps identify inconsistencies and other problems with the course before it is made available to learners. This is the best time to make revisions, before expending resources on materials that would later need modification. The learner performance and attitude data collected during formative evaluation can be used to identify areas that need improvement. Formative evaluation has two key phases:
- One-to-one evaluation: The main purpose of this type of evaluation is to identify problems such as poor wording, ineffective or misleading visuals, poor examples, etc.
- Small group evaluation: The main purpose of this type of evaluation is to identify the effectiveness of the revisions made and the effectiveness of the instruction in the absence of the designer or facilitator.
Summative evaluation provides valuable information about the results in the performance context and makes it clear whether the transfer of knowledge was successful and the goals were achieved. Summative evaluation provides the following information:
- Transfer of learning and the contributing factors: Are the learners applying what they have learned in the real-world setting and if not, why?
- Results: What changes can be attributed to learning? Did the instruction result in reduced costs, lower employee turnover, increased sales, etc.?
- ROI (Return on Investment): What monetary value have these results contributed to the organisation?
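The ROI question above is usually answered with the standard formula ROI (%) = (net programme benefits ÷ programme costs) × 100. A minimal sketch, with made-up figures for illustration:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return ROI as a percentage.

    benefits: total measured monetary benefit of the programme
    costs: total cost of designing and delivering the programme
    (both in the same currency)
    """
    if costs <= 0:
        raise ValueError("costs must be positive")
    return (benefits - costs) / costs * 100


# e.g. £120,000 of measured benefit against £40,000 of programme costs:
print(f"{training_roi(120_000, 40_000):.0f}%")  # 200%
```

Note that the hard part in practice is not the arithmetic but isolating which benefits can genuinely be attributed to the training rather than to other factors.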