
Microlearning: Overcoming 4 Assumptions


By Ellen Burns-Johnson, Instructional Designer / @EllenBJohnson

If we’re not focused on the right problems, we can’t be sure that microlearning is the right solution.

The training industry’s interest in microlearning shows no signs of slowing. We’ve seen an enormous response to the ebook and blog posts we’ve shared on the topic, and an increasing number of clients are asking us to help them create microlearning solutions.

I’m not surprised. Microlearning corresponds with cultural shifts in how we engage with media. There’s an increased desire for immediacy in the way we use technology, especially mobile. In a 2015 study by Google, 90% of respondents reported using their mobile devices to make incremental progress towards longer-term goals (PDF, p. 12). Many instructional designers see these behaviors as opportunities to boost learning and skill development.

But so-called micro-moments aren’t always the best place for learning. Shorter isn’t always better. It depends on what you want to accomplish.

We probably all know people who read article after article on workplace productivity but never seem to get any more done with their day. Shorter articles will not help these folks. Rather, they need help putting their learning into practice. So, when learners say they want microlearning, we can’t just assume it’s the right solution. We need to dig deeper and figure out what’s truly preventing them from achieving the performance they want.


4 common assumptions about microlearning:

Aside from direct requests from learners, I find there are typically four assumptions that pique organizations’ interest in microlearning:

1. Learning content needs to be shorter

2. Training should have higher completion rates

3. The organization needs more learner data

4. Training takes too long and is too expensive to produce

In the rest of this post, we’ll examine four analysis questions you can use to dig past the surface of these assumptions and figure out what your learners truly need to sustain improved performance.

Assumption #1: Learning content needs to be shorter.

I think when organizational leaders ask for microlearning, they’re really expressing a desire to get learners to a certain level of competency faster. Making training content shorter might serve that purpose, true. But making training better is a route to consider, too.

It seems that many microlearning initiatives begin because someone with pull in the organization simply decides that courses should be shorter. Maybe learners themselves are the source of this impetus: you might be getting feedback that courses are too long and would be easier to work through in 5-10 minute chunks.

Be cautious about taking these directives at face value. When learners ask for shorter courses, they might be communicating that they don’t find the existing material useful. They might not feel like the courses are helping them do their jobs better. If that’s the case, then taking an existing training strategy and re-organizing it into smaller sections will not solve the problem.

Instead, ask: How will shorter content help learners perform their jobs better?

Find out whether learners find existing courses valuable. Hopefully, you have some data that point to your courses’ impact on performance. If not, interview learners (anonymously if needed) and find out what they really think. Before you convert a significant amount of content to a microlearning format, you might run a pilot test to determine whether the switch actually results in better performance outcomes.

Assumption #2: Courses should have higher completion rates.

There are a variety of reasons why learners don’t complete courses. Here are a few:

  • Learners don’t believe the course is useful.
  • The course actually isn’t useful.
  • They know they won’t have enough time to reflect upon or practice what they’ve learned.
  • The course is too long to fit into the learners’ workday.
  • They aren’t in an environment that’s conducive to traditional training methods – they’re on the road, away from a computer, in a busy classroom, etc.
  • Learners will want to skip training if expectations around completing the training are unrealistic. My husband worked an hourly retail position right out of college. He was required to take training modules on new products, but also required to be out on the sales floor all the time. He wasn’t allowed to log overtime in order to take the training, but he was reprimanded if he took time off the sales floor or failed to complete the modules. Nothing kills motivation like unfair expectations and mixed messages!

Microlearning can only help with some of these issues. Training – including microlearning – should help solve a specific performance problem. To choose the right solution, we need to start by studying the constraints and challenges that prevent people from achieving the desired performance.

Instead, ask: What constraints are learners under?

“Completion rates are low” isn’t a problem that can be directly addressed by design. We need to get more specific, determine why completion rates are low, and focus on that – or better yet, on the constraints and challenges that prevent people from achieving the desired performance. Then we can go through the process of determining whether microlearning is a good solution.

For help on identifying whether training is the right solution, I recommend Ethan Edwards’ 5 Critical Design Activities whitepaper, Ann Iverson’s blog post on unpacking learning objectives, and Cathy Moore’s action mapping approach.

Assumption #3: More learner data is better.

Completion rates can dominate an organization’s learning strategy because those are the easiest data to get from learning management systems, but that’s starting to change. We have more options and access to more data, but that also means we should ask more nuanced questions in order to cut through the stream of information and attain meaningful insights.

Theoretically, you could track all kinds of data. You could find out where learners typically are when they access the microlearning events, or what times of day are peak hours for learning. You could track what percentage of learners use certain features in the learning application, like glossaries, notes, or calendar reminders.

However, just because we can track this data doesn’t mean we should. Data take time and effort to analyze, and if that analysis doesn’t lead to ideas about how to improve the training or boost job performance, then what’s the point?

Instead, ask: What data are we tracking, and why?

Imagine you have a mobile learning application that includes a growing number of microlearning lessons. You might track which lessons are most often re-visited by learners, and whether there’s a correlation between the act of repeating those lessons and higher results on KPIs. This information could guide your decisions about which lessons you might want to emphasize in internal marketing materials.

Now, let’s imagine that your microlearning app also includes a glossary, and that the glossary database requires a good deal of effort to maintain. It might make sense to track how often learners use the glossary so you can decide whether it’s providing enough value to warrant its maintenance.

These are just a couple examples of how data can guide a program’s approach to microlearning.
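
As a rough sketch of what that first kind of check might look like, here is a minimal, hypothetical example in Python, assuming you can export each learner’s lesson-revisit count and a relevant KPI score from your platform (the column names and numbers below are invented for illustration):

    # Hypothetical export: one row per learner, with the number of times they
    # re-visited microlearning lessons and a KPI relevant to their role.
    import pandas as pd

    data = pd.DataFrame({
        "learner_id": [1, 2, 3, 4, 5],
        "lesson_revisits": [0, 2, 5, 1, 4],   # times lessons were re-opened
        "kpi_score": [71, 78, 90, 74, 85],    # e.g., a quarterly performance metric
    })

    # Pearson correlation between revisiting lessons and the KPI
    correlation = data["lesson_revisits"].corr(data["kpi_score"])
    print(f"Correlation between lesson revisits and KPI: {correlation:.2f}")

A correlation by itself won’t prove that repeating the lessons caused the better numbers, but it can point you toward the lessons that deserve a closer look – which is exactly the kind of question the data should be serving.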

Assumption #4: We need to make training faster and cheaper.

Another purported benefit of microlearning is that it is lower-cost than traditional e-learning courses. Maybe it is, but as with any learning solution, we have to weigh cost against effectiveness.

Some microlearning takes the form of videos or web pages. These are often passive experiences in which the learner watches a video, listens to audio, or reads content. This type of microlearning is easy to make, but unless someone has a job in which they read a web page and take a quiz, it doesn’t really match up with what we expect that person to do in the real world.

Like more traditional e-learning courses, performance-based microlearning represents a larger up-front investment than template-driven, “click and twitch” designs. However, since performance-based approaches approximate real actions taken on the job in order to increase learning transfer, they might be less expensive in the long run.

Ineffective training eats up the cost of development and learners’ time away from their work. Microlearning might seem to take less time for learners to complete, but if it’s ineffective, it doesn’t really save the organization money or time. Five minutes per lesson adds up across an organization of 200 or 2,000 people (at 2,000 learners, a single five-minute lesson consumes roughly 167 hours of paid time). If learners’ efforts in training don’t result in improved performance, then that time, effort, and money are wasted.

Instead, ask: What constraints am I under?

We all experience pressure to do more with less, but faster and cheaper training isn’t always best for learners or for the organization. Sometimes it’s worth pushing back and asking for more resources. Sometimes it’s not.

If microlearning appeals to us as instructional designers, we need to be honest with ourselves about why. Only then can we figure out what’s truly the best approach.

Instructional design still applies

I’m sure some of you were thinking, “These are typical instructional design questions!” as you read this post. You’re right! They are! Just because microlearning is short doesn’t mean we shouldn’t apply the same analytical rigor to its design as we would for longer courses. With microlearning, our strategy shouldn’t simply be “make it shorter.” Our strategy should always be to create experiences that help people learn to do the right thing at the right time.


About Author

Ellen Burns-Johnson

Ellen Burns-Johnson has over a decade of experience in the education and training industries. She has crafted the instructional strategy and design for dozens of major initiatives across diverse topics, from classroom safety to IT sales. She emphasizes collaboration and playfulness in her approach to creating learning experiences, and her work has earned multiple industry awards for interactivity and game-based design. Ellen is also a Certified Scrum Master® and strives to bring the principles of Agile to life in the L&D field. Whether a client is a Fortune 100 company or a local nonprofit, she believes that the best learning experiences are created through processes built on transparency between sponsors and developers, empirical processes, and respect for learners. Outside of her LXD work, Ellen plays video games (and sometimes makes them) and runs around the Twin Cities with her two mischievous dogs (ask for pictures).

