by Ethan Edwards, chief instructional strategist
One of the frustrations in designing instruction for e-learning delivery is that the interactions that almost all authoring systems let you build easily are not the interactions you need to build in order to teach. The traditional standardized question formats that are so prevalent (like the ubiquitous context-neutral four-option multiple choice or true/false) come from the world of evaluation and testing, not from the world of good teaching practice. Those formats are useful not so much because they are particularly effective for teaching, but because they are uniform and easy to score.
But it's really important to understand what you are trying to accomplish in your interactions before you can design your instruction. There are, indeed, situations where the purpose of a test or quiz is simply to assess knowledge. In these cases, especially when the test results need to be reported into a pre-existing scoring system, or when specific comparisons and rankings between students need to be made without bias or unfairness, issues of uniformity of format, unambiguous scoring, prevention of cheating, and so on are important. That's what most interaction templates provide, and, unfortunately, they have given the impression that that's all e-learning interactions are about.
But a far more important role of interaction in e-learning is to encourage active thinking through meaningful challenges. This is, in most cases, what designers are trying to achieve by inserting interactions into the middle of a lesson rather than into pre- or post-tests, but they are using limited, context-independent interaction formats that are ill-suited for the purpose. The focus of these interactions should be less on determining whether the learner gets the right answer and more on what opportunities for meaningful processing and critical thinking are being created. Some of the best learning outcomes occur specifically when the learner makes mistakes: trying alternatives, testing hypotheses, comparing the consequences of parallel decisions. In these cases, the strengths of the standard interaction styles for assessment and scoring can become impediments to good teaching.
The best learning interactions often have characteristics that might actually make them bad test questions. They often delay judgment, allowing opportunities for learners to engage in extended self-analysis of their work. They may even ask learners to judge their own correctness. Question phrasing and feedback are less concerned with avoiding "giving away" the answer and more focused on how to make the challenge and its consequences most impactful.
Exploring lots of different options can be really useful, so learners often choose to repeat interactions multiple times. Feedback is provided intrinsically in the context, which may require more effort from the learner to interpret but carries far greater significance than simplistic judgment phrases. It can even be a benefit to retain a modicum of obscurity or mystery about exactly how to accomplish the intended task, since this makes it less likely that learner actions are thoughtless stimulus/response patterns rather than thoughtfully constructed attempts to solve a meaningful problem. (A phrase from Robert Frost's poem "Choose Something Like a Star" always seems appropriate to reinforce this idea: "Some mystery becomes the proud." Google it; it's a great poem!) It even helps to require learner effort in learning interactions; this, again, ensures that learners must work with intention to produce appropriate responses, and it also significantly increases the meaningfulness and memorability of the learning experience.
There's a phrase used often in training circles: "Telling ain't teaching." As a companion to that mantra for designers of e-learning, especially given the way authoring systems seem to recommend and facilitate thoughtless testing questions as good e-learning design, I think we should add "Testing ain't teaching" as a reminder to ourselves when designing learning interactions.