Few things are trendier right now than artificial intelligence. AI shows up in marketing, transportation, logistics, performance management, and pretty much everywhere else.
So it’s no surprise that the learning and development industry is talking about it too.
But how might AI impact L&D? Will it help employees learn more effectively? (The answer isn’t always clear; keep reading to find out why.)
AI will definitely have an effect on corporate learning. But what form that effect takes remains to be seen.
Let’s take a look at how AI could change learning and development, as well as a few things to remain cautious about.
(Author’s note: artificial intelligence is a technical field. For the sake of brevity, I’ll skip over technical aspects. I’ll link to resources if you want to dig deeper. Also, “artificial intelligence” and “machine learning” are technically different concepts. But I’ll be using them interchangeably here.)
Why We’re Excited About AI in Learning & Development
Encapsulating the potential of AI in learning and development isn’t easy. It’s a complicated solution in a complicated space.
The best way to sum it up is this:
AI lets a system update itself to better serve and assess learners.
As we’ll see, it still requires human input. But in day-to-day operations, these systems can update the presentation of content (and in some cases, the content itself) to better serve learners.
That results in a lot of time saved for your instructors and course designers.
Let’s take a look at some of the exciting things that people are working on in AI-powered learning today:
Much of AI’s potential in learning and development falls under the label “adaptive learning.”
This application uses machine learning (alongside some established educational practices) to tailor a learner’s journey through learning material. Here’s a great visual from McGraw Hill’s discussion of adaptive learning:
By analyzing the learner’s strengths and weaknesses, adaptive learning algorithms display the most-needed content and skip the rest.
It can also change the order in which that content is presented. (Notice in the image above that some of the learning tracks are presented in two different blocks.)
As Zach Posner points out in the article linked above, this has benefits for students, instructors, and course designers.
Students get the content they need when they need it. Instructors get detailed analytics on course progression and what’s working. Designers see which of their content is having the greatest effect.
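To make the selection-and-ordering idea concrete, here’s a minimal sketch of how a system might skip mastered skills and queue the weakest ones first. This is purely illustrative, not any vendor’s actual algorithm; the skill names, lesson catalog, and mastery threshold are all hypothetical.

```python
MASTERY_THRESHOLD = 0.8  # treat a skill as learned above this score (hypothetical cutoff)

# Hypothetical catalog mapping each skill to its learning content
lessons = {
    "fractions": ["intro-video", "practice-set"],
    "decimals": ["reading", "quiz"],
}

def next_lessons(mastery: dict) -> list:
    """Skip mastered skills; queue content for the weakest skills first."""
    needed = [s for s, score in mastery.items() if score < MASTERY_THRESHOLD]
    needed.sort(key=lambda s: mastery[s])  # weakest skill first
    plan = []
    for skill in needed:
        plan.extend(lessons.get(skill, []))
    return plan

print(next_lessons({"fractions": 0.9, "decimals": 0.4}))
# ['reading', 'quiz'] -- fractions is skipped as already mastered
```

Real adaptive systems use far richer models of learner knowledge, but the core move is the same: assess, then reorder and prune the content accordingly.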
This all sounds great, but does adaptive learning software really work? As more research comes out, the answer appears to be yes. At least sometimes.
While we haven’t seen much research on this approach to corporate L&D, initial results are promising.
The field hasn’t pinned down best practices or the most effective algorithms yet. That will likely come with time. But at least for now, adaptive software appears to be effective in boosting learning.
It’s good practice to include text, visual, auditory, interactive, and other content types in your training. AI can take that idea to the next level.
Adaptive learning isn’t just about which content to deliver to students. It’s also about how to deliver it.
Machine learning can help training programs understand which types of content each student responds to. If a student learns best from video content, they’ll see more videos. If they respond better to text, they’ll see more articles. If they absorb information from audio files, they’ll get more of those.
Because this works with students’ individual needs, training becomes even more customized. And that results in better outcomes.
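One simple way a system could learn format preferences is to track each learner’s quiz results by content format and favor whichever format produces the best average. The sketch below assumes made-up format names and scoring; it is not a real adaptive-learning API.

```python
from collections import defaultdict

class FormatPreferences:
    """Track one learner's quiz results per content format (illustrative)."""

    def __init__(self):
        # running [total score, attempts] for each format
        self.stats = defaultdict(lambda: [0.0, 0])

    def record(self, fmt: str, quiz_score: float):
        total, n = self.stats[fmt]
        self.stats[fmt] = [total + quiz_score, n + 1]

    def best_format(self) -> str:
        """Format with the highest average post-content quiz score."""
        return max(self.stats, key=lambda f: self.stats[f][0] / self.stats[f][1])

prefs = FormatPreferences()
prefs.record("video", 0.9)
prefs.record("text", 0.6)
prefs.record("video", 0.8)
print(prefs.best_format())  # video
```

A production system would hedge against small sample sizes and keep serving some variety, but the underlying signal (outcomes per format) is the same.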
Of course, it also means that course designers will need to create a lot of content. This approach works best when there are multiple options for each learning target. And that takes a lot of work.
But adaptive learning can help there, too. By giving feedback to designers about which pieces of content are working best, those designers start to create better content. And, ideally, create it faster.
(For a detailed discussion of adaptive learning, content delivery, and learning style, see Milosevic, Brkovic, and Bjekic, 2006.)
Assessment is at the core of adaptive learning. Without effective assessment, the system only has contextual signals to determine what’s working. It needs to be able to see which types of content, delivery, and progression are getting the best results.
Machine learning can also improve those assessments. There are two main ways in which this happens.
First is adaptive assessment. This is primarily used to determine learner knowledge before training is completed. It establishes a baseline for the adaptive learning, providing a better starting point for the learner.
(More details on adaptive assessment are available in Kingsbury, Freeman, and Nesterak, 2014.)
Each question presented to the student is influenced by the previous question. When a learner answers a question correctly, the next question will be slightly harder. When they answer one incorrectly, it will be easier.
This helps establish accurate baseline information faster than a traditional assessment. And in corporate learning, where time comes at a premium, it can cut down on post-training assessment time.
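The up-after-correct, down-after-incorrect mechanic can be sketched in a few lines. Real adaptive tests typically use item response theory to pick items; this toy version only illustrates the stepping behavior, with hypothetical difficulty levels.

```python
def next_difficulty(current: int, answered_correctly: bool,
                    min_level: int = 1, max_level: int = 10) -> int:
    """Step difficulty up after a correct answer, down after a miss."""
    if answered_correctly:
        return min(current + 1, max_level)
    return max(current - 1, min_level)

# Example: start mid-range and walk through four answers
level = 5
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # 7
```

Because each answer narrows in on the learner’s level, fewer questions are needed to find an accurate baseline than with a fixed question set.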
We recently added assessment capabilities to the Continu learning management system! Every Continu customer now has access to assessments. Click here to find out more about how it helps you improve your corporate learning.
Second is diagnostic classification modeling (DCM). I’ll skip the specifics here and give you Lou Pugliese’s description of DCM:
“Diagnostic classification models determine learner mastery or nonmastery of a set of attributes or skills. These assessment models are used to diagnose cognition, particular skill competency, or subcompetency of a defined outcome. Diagnostic models are particularly important in adaptive systems because they are used to align teaching, learning, and assessment—and to give timely diagnostic feedback by knowing students' weaknesses and strengths to guide teaching and learning in the adaptive processes.”
In short, they get at underlying knowledge. That helps instructors determine whether learners have truly reached competence.
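A heavily simplified sketch in the spirit of DCM: map each question to the skills it exercises (often called a Q-matrix), then classify each skill as mastered or not from the answers. Real diagnostic classification models are statistical; the threshold rule and question-to-skill mapping here are only illustrative.

```python
# Hypothetical Q-matrix: which skills each question exercises
q_matrix = {
    "q1": ["fractions"],
    "q2": ["fractions", "decimals"],
    "q3": ["decimals"],
}

def classify(answers: dict, threshold: float = 0.7) -> dict:
    """Return mastered (True) / nonmastered (False) per skill."""
    hits, totals = {}, {}
    for q, correct in answers.items():
        for skill in q_matrix[q]:
            totals[skill] = totals.get(skill, 0) + 1
            hits[skill] = hits.get(skill, 0) + (1 if correct else 0)
    return {s: hits[s] / totals[s] >= threshold for s in totals}

print(classify({"q1": True, "q2": True, "q3": False}))
# {'fractions': True, 'decimals': False}
```

The payoff is the per-skill breakdown: a single overall test score can hide exactly the weakness this kind of model surfaces.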
There are, of course, many other possibilities for effective assessment with machine learning. But even a glance at adaptive assessment and DCMs makes it clear that the results of implementing this approach could be significant.
With DCMs, instructors have a much better idea of how well their students learned the material—not just how well they took the test. Combined with adaptive assessment, that allows for higher-quality instructional design.