by Ethan Edwards, chief instructional strategist
Earlier this week I conducted a webinar on the topic “The 5 Most Important Analysis Questions You’ll Ever Ask.” Analysis is a critical activity in creating any instruction, and it is particularly vital to creating engaging interactivity in e-learning projects. We covered a lot in the session, but I was unable to address all of the questions asked during the hour. As I was writing up responses to send to the participants, it occurred to me that they might also be of interest to others who didn’t attend, so what follows are some brief answers to questions posted during the webinar.
First, a brief recap of the content:
Analysis activities in any e-learning development process uncover all the details and project constraints that will fuel the design. It’s particularly important to find all the information that will contribute to success early in the process. e-Learning projects are often at a disadvantage in organizations where insufficient time is allotted for analysis, where SMEs use the opportunity to overwhelm the designer with too much content, or where it is assumed that analysis is unnecessary.
While getting access to content and understanding the constraints are of great importance, I’ve found that these 5 analysis questions are critical in giving designers the information necessary to create e-learning that makes a difference.
Questions:
- What do you expect learners to be able to DO after completing the course that they can’t do now?
- What are the consequences TO THE LEARNER if the learner fails to master the intended outcomes?
- Can you show me an active demonstration or a detailed simulation, or provide an opportunity to directly observe the desired performance?
- What specific performance mistakes do new learners regularly make?
- What tools, resources, job aids, or help do successful performers (or even experts) use to do these tasks?
Here are the questions from webinar attendees and my responses:
Analysis is not part of the Allen Interactions process, so aren't you contributing to analysis going away?
Analysis is DEFINITELY part of the Allen Interactions Savvy Process. We contend that the most appropriate process for creating e-learning is an iterative one that carries out the necessary activities in smaller chunks: designing ideas, testing them with rapid prototypes, and cycling back to conduct more analysis and design as the project requires. It is unproductive to demand that all analysis occur at one time, before you are fully engaged in the project; sometimes you are not even aware of the questions that need to be asked until you begin design. If anything, the Allen Interactions Savvy Process increases the importance of analysis and design rather than eliminating it. I do think many rapid e-learning efforts tend to ignore analysis activities.
For Q1, what if you don't get a performance problem? What if you are developing a basic chemistry class for entry-level power plant workers or factory workers?
and
What would you do if the client cannot articulate or agree to performance outcomes?
It is truly rare that an organization will invest in training on a topic that has no performance component whatsoever. If that seems to be the case, first push back with the SMEs. SMEs often spend their lives becoming content experts and simply lose track of how the information is actually used. It’s sometimes hard work, but usually there is some performance issue of importance. (Sometimes you need to talk to someone else…the project owner or managers, etc., who actually “own” the problem.) But if you are still stuck without performance outcomes, then you will have to design some performance associated with the content. e-Learning only works if learners DO something meaningful, memorable, and observable (so feedback can be delivered).
Risk/consequences to learner -- having experienced some of Allen Interactions’ e-Learning, I'd say you use the learner's risk/consequences to create the consequences & really the drama in the scenarios -- true?
I would agree that risk/consequences are a really powerful tool in creating a sense of challenge and excitement in an e-learning piece. There are other ways as well: building suspense through narrative, delaying judgment, and developing compelling intrinsic feedback. Risk is especially good at creating a sense of urgency (instead of complacency) in the mind of the learner.
Can you provide that 10% - 20% - 70% that you just gave about competency?
This question refers to the idea that in many corporate environments there is a general pattern: about 10% of what a successful performer does is learned in formal training, 20% through mentoring and networking with co-workers, and 70% on the job. You can find this idea referenced and supported in a large number of human performance contexts, but I believe it originally came from research conducted by the Center for Creative Leadership. Their website (and a Google search) reveals many references to it. I think it isn’t so much a research finding as a guiding principle that makes a lot of sense.
How can similar results be achieved with lower end production values?
The key is to recognize that the instructional value lies not in the production values but in the interactions that engage the learner’s attention. Production values have some impact, of course, but they are by no means the primary reason for success. Replace an animation sequence with a slide show of still images. Dispense with audio if necessary. There are a number of ways to achieve the same intent. But while you can achieve some results with lower-end production values, I think it is nearly impossible to do with ZERO production values. In most cases and for most audiences, visual imagery of some sort is absolutely essential. And that imagery almost always has to be customized to make it specific to your content. That can be done at reasonable cost, but it can’t be done without some investment in media talent.
How do you come up with the performance outcomes? Is it you or the company?
The performance outcomes have to come from the stakeholders within the company (either willingly or with prodding!).
How do you convince a client expecting only explication of inert knowledge (company history and organization structure) to accept training that highlights actionable behaviors?
It’s a hard task, especially given how difficult it is to get people to let go of preconceptions. We find the most powerful method is simply helping decision makers experience the difference between inert, content-bound e-learning and engaging, compelling learner-centered training. Often it isn’t that decision makers want to make poor choices, but rather that they have simply never encountered alternatives. Of course, it is difficult to change perceptions all at once, and legacy decisions can limit what you can do on any given project, but try to make small steps, documenting successes as you can.
Please speak more on how to counter the "productivity" arguments of e-learning software providers who claim/advocate that SMEs develop their own training (bypassing instructional designers in the process)?
This is really a philosophical issue. If e-learning is viewed as a publishing project, there’s really no way to counter this argument: the view is that the demonstrable existence of e-learning programs equals success, so measures of efficiency are simply a measure of how quickly PowerPoint decks can be converted to online delivery (for example). If, on the other hand, e-learning is measured by outcomes, then even a superficial investigation into training reveals that content access is not sufficient to accomplish change in performance. In general, SMEs don’t possess the skills in instruction and learning needed to create those learning experiences.
I usually ask them to identify the performance outcomes first, then the content associated with it – you are suggesting the opposite, is that right? Identify content first, then outcomes, then map the content to the outcomes?
Well, yes, that’s what I generally find to work better, but the order doesn’t really matter. The reason I tend to get the content out first is that SMEs are bursting at the seams to tell you this stuff. If you try to force the discussion exclusively to performance objectives, the content is going to be constantly inserted into the discussion. It seems better to get it documented and out of the way. But truly, this can vary quite a bit based on the environment, so my advice would be to stay flexible. The main goal is to get enough information to be able to create the grid.
I work for a hospital and cannot use audio. I have found this problematic because it requires additional slides and I feel call-outs are distracting. Any suggestions?
Honestly, audio is not always desirable. To the extent you are delivering content, it is most useful when under user control. So instead of focusing on how to force the content on the learner, create a challenge that requires the information to solve. Then provide access to the content you would normally be trying to parse into small call-outs. When the learner is reading for a purpose, you can be more general in how you provide access to the learning content.
How and where do you communicate the consequences to the learner? In Objectives, as WIIFM?
Use whatever means seems most effective. While screens that simply list objectives are rarely effective in communicating much of anything (students don’t read them), it is important to let learners know “the rules” so they know what to expect. Stating consequences in the introduction and as you set up a task is usually good practice. Just note that you don’t have to explain EVERYTHING. Learners will experience the consequences quickly enough and learn more effectively that way. Tell just enough to get the learner engaged and working; let the rest flow through the interactions.
So for content-driven compliance training, are scenarios the best bet?
It’s difficult to make a blanket statement about any content. Scenarios tend to be valuable in creating a context that makes sense of information that might otherwise seem meaningless in the abstract. Sometimes, though, I have seen scenarios made so elaborate that the actual content becomes obscured. So always keep the end goal in mind and don’t let the design get out of control.
The online and e-learning workshops I manage are not developed within an enterprise that's motivated to train its employees. Our professional membership is self-directed learners, and the #2 consequence is that they may not be effective at their jobs (either as a consultant or within an organization). Can't think of any other motivation…please?
Individuals still ought to be motivated by the same issues, if not more so, since they are more directly responsible for their own success. These questions will still be very relevant in creating e-learning programs that motivate and have value. The biggest hurdle may be that your central organization does not actually know about these motivations, so part of an effective analysis might be to go to the end users of the training (your membership) rather than relying only on the administrative knowledge held within your organization.