Any of you who’ve been in one of my classes have probably seen this photo of my Mom in our field of irises. I spent part of this weekend replanting a portion of this iris garden, accomplishing a project I’ve been planning for years. The American Iris Society has given an award, the Dykes Medal, nearly every year since 1928 for the best new iris cultivar. I’m a freak about collecting sets of things, and so naturally I’ve made it a goal to get all the Dykes Medal winners; now that my collection is nearly complete, I’m putting in a garden exclusively of winners, planted in chronological order. As I was arranging the plants, I was thinking of the varieties and marveling at how consistently they represent an almost unbroken progression of improvement in clarity of color, substance, size, bud count, and flower form. With only a few exceptions, there’s a very clear tradition of consistent improvement over time. You could pretty much use the Dykes Medal winners as a ruler to place almost any other iris into the decade it was introduced, simply because progress in iris hybridization has happened with such regularity.
Naturally, it made me think about the progression of improvement in e-learning over time. I’ve been involved in designing and building e-learning for nearly 30 years, having started out in graduate school at the PLATO Laboratory at the University of Illinois at Urbana-Champaign. In many ways, some of the lessons created on that somewhat primitive platform 30-40 years ago still rank among the very best e-learning I’ve ever encountered. Of course there have been gems of training created consistently through the years since then on a wide range of platforms, but a significant amount of nearly worthless e-learning continues to be churned out at the same pace. I would bet that random samples of e-learning lessons created in 1990, 2000, and 2010 would represent nearly the same range of good and bad learning experiences. The tools and media might give some indication of when a piece was built, but I don’t think there would be an identifiable trend toward consistently better instruction.
How can this be? Shouldn’t the bulk of e-learning be demonstrably better after 30 years? It’s perplexing. I don’t think I’ve ever met anyone who has defended the typical page-turner-content-dump-with-inserted-trivia-questions style of e-learning as good training, yet at the same time, the tools people regularly choose to use to create e-learning are primarily geared toward making it very convenient (even irresistible) to create exactly this kind of training. In fact, it seems like the one universal feature of authoring systems is an automatic PowerPoint Import option.
What’s lacking, I think, is a shared agreement across the industry (among designers, tool companies, IT departments, and management) that learning and performance change are ultimately the measures of e-learning success. For too many organizations, simple production or output of lessons seems to be the goal. I understand the desire for a rapid and cheap e-learning process, but there is little value in churning out lessons quickly if they don’t facilitate learning. Of course there will be students who will learn under the worst of circumstances, and there will also be students who will fail to learn even under the best of circumstances. This variability is what makes educational assessment so challenging and makes it difficult to pinpoint precise weaknesses. Unfortunately, it’s pretty clear we’re not going to make progress until it’s as easy to create interactivity as it is to churn out thoughtless content-centered approaches. The paradox is that I firmly believe it is the instructional design that creates value in e-learning, but as long as the development tools restrict design ideas so severely, e-learning quality indirectly becomes a question of getting the right tools into the hands of designers.