New Series: AI in Learning and Development

by Megan Bowe (Partner, MakingBetter)

It has been exciting to hear the conversations about Artificial Intelligence (AI) in learning and development grow. Corporate learning groups are ready to create ecosystems where adaptive technology meets people’s needs, analytics predict where action can be taken to better support a person, and where personalization will be most effective.

I admittedly get a little nervous when I hear talk about adaptive learning, because it's often more about adaptive technology than about learner-centered personalization. We need to discuss this in terms of technology adapting to people, rather than people receiving adaptivity. That might sound like a nitpicky distinction.

I’ve worked with huge adaptive implementations where the instructors and learners felt they were adapting to the technology much more than the technology was adapting to them. This is a giant red flag. People should be at the center of the system design rather than endpoints. Their activities inform the technology of what they do or do not know. The recommendations they receive are based on both what they know and what they want to know. If they are only receiving outputs from the technology, nothing is being personalized and the technology is not adapting.

As you know, learning is a very soft subject, which makes it hard to build algorithms around. There’s no hard-and-fast way to tell whether a person knows something.

xkcd algorithm comic

In order to build an algorithm, you need to be able to explicitly tell the machine that when x happens, it means y. A technology that is supposed to adapt to what a person knows and wants to know has to do a bit of digging to make a good recommendation. Let’s imagine the technology is helping you learn to drive in the US, and you’re asked this question:

“When are you allowed to turn right at a red light?”

By answering the question correctly, you show the technology that you have some understanding of traffic light operations. The machine can’t assume you know all of the operations. It can’t even assume you know all the ways in which you wouldn’t be allowed to turn right on red. It needs to ask similar but related questions to see how much you do know (do you know what to do if it’s a protected turn lane, etc.). Then it can offer recommendations that build on that level of knowledge. When a new person starts using a technology that adapts to them, the algorithms inside need a base level of information in order to be effective. If you have played the game 20 Questions, it’s a lot like that. There’s no particular way this data must be gathered, but in order for the machine to learn, it needs data.
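To make the 20-Questions idea concrete, here’s a minimal sketch of one way such a system could work. Everything here is illustrative, not any particular product: the system keeps a belief per topic about whether the person knows it, nudges that belief after each answer, and digs next into the topic it is least certain about.

```python
# Illustrative sketch only: track a belief per topic
# (0.0 = definitely doesn't know, 1.0 = definitely knows),
# starting at 0.5 (no information yet). Topic names are hypothetical.
knowledge = {
    "right on red basics": 0.5,
    "right on red exceptions": 0.5,
    "protected turn lanes": 0.5,
    "flashing signals": 0.5,
}

def record_answer(topic, correct, weight=0.3):
    """Nudge the belief for a topic toward 1.0 or 0.0 after an answer."""
    target = 1.0 if correct else 0.0
    knowledge[topic] += weight * (target - knowledge[topic])

def next_question():
    """Dig into the topic we know least about (belief closest to 0.5)."""
    return min(knowledge, key=lambda t: abs(knowledge[t] - 0.5))

# The learner answers the red-light question correctly...
record_answer("right on red basics", correct=True)
# ...so the machine asks about a related topic it is still unsure about.
print(next_question())  # one of the topics still at 0.5
```

Real systems use far richer learner models than a single number per topic, but the loop is the same: update what you believe about the person, then pick the question that teaches the machine the most.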

There are a number of factors beyond just feeding the machine. It needs to know which topics are related and build on one another. There needs to be enough content that a person isn’t seeing repeated information (if I didn’t get it the first time I saw a particular video, why would watching it again help?). It also needs to know what a person’s goal is: what do they need to know? A lot of this depends on many, many factors. I’ll get into the relevant details in the context of different implementations.

xkcd machine learning comic

Let’s just say… it’s not you, it’s me. I need to lay out all the pieces so that we avoid subjecting anyone else to awkward, impersonal experiences with adaptive technology. But. One article is not enough. So this will be a series going down this list of giant rabbit holes:

What do they do? What forms of AI in learning are talked about now? There are several ways AI can play a role in learning applications. In this upcoming post, I’ll review several models:

  • Classroom Supplement – An instructor uses this to support the curriculum being delivered, such as a personalized study guide that helps students get to the same level, or an intelligent tutor.
  • Self Study – A person chooses to learn about a topic and they are delivered personalized recommendations which guide them most efficiently to their goal.  
  • Informal Learning – People participating in a social network (focused on knowledge sharing) receive recommendations about which people they should connect with, based on who has the most expertise needed.  
  • Content Remixes – A person (usually an instructor) loads their syllabus or competency map along with all of the available content. The tool analyzes the content to find patterns and other information. This supports their content management process and makes the content more accessible for personalization.
  • Analytics – Basic and predictive analytics are often coupled with different types of personalization. The AI looks at what data it’s receiving and what it’s been trained to assume from the learning activity. Then, the AI projects forward to make a prediction about what is likely to happen if the same patterns in the data continue.
  • Lifelong Learning – A person collects data about all of the things they have learned over time. As they approach a new endeavor, they can share this data with an adaptive technology so the learning experience is personalized to start where they left off in other places; ideally, letting them skip the stuff they know and only work on new things.  

Just to put this out there: modules in learning management systems are not adaptive, no matter what spin is put on them. Creating modules with multiple learning paths does not personalize information for the user; the technology is not adapting to anything. This is a decision tree that can’t change anything to make the experience better for a person, because it’s not learning anything about the person.

What should you expect an adaptive technology to deliver? Another post will describe forms of personalization.

  • Instructional personalization – This is often a self-study environment where a person is receiving instructional content to help them learn.
  • Assessment personalization – This is where many adaptive implementations focus. Getting people through tests faster, with more depth in the results, is the goal. Since the questions are either right or wrong, it’s a much more manageable model to build algorithms around. You can find these in MOOCs, test prep (SAT, etc), among other implementations.
  • Full personalization – This is a combination of instruction and assessment where the technology is delivering recommendations for both. As the technology determines a person is proficient in an area, it can move them to another area; instructing and assessing until they are ready to move on.
  • Performance support – This is not necessarily a learning system, but a tool which knows about the person and the tasks they are intended to do in order to offer timely, relevant information. Like Clippy, but not dumb (Clippy pushed info without knowing a thing about the user).
  • Likeness – This is the most common form of personalization we see. It usually takes the form of: people like you (who want to know x, like y, do z) benefit from this person or thing (that knows about x, is similar to y, also does z).
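One common way to implement that "likeness" form is to represent each person as a vector of interests or activities and recommend the most similar other person. This is a hedged sketch of that idea with made-up people and interests, using cosine similarity; real systems would build these vectors from actual activity data.

```python
import math

# Hypothetical "people like you" data: each person is a vector of interests.
people = {
    "you":   {"python": 1.0, "statistics": 1.0, "design": 0.0},
    "alice": {"python": 1.0, "statistics": 0.5, "design": 0.0},
    "bob":   {"python": 0.0, "statistics": 0.0, "design": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse interest vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_like(name):
    """Recommend the other person with the most similar interests."""
    others = [p for p in people if p != name]
    return max(others, key=lambda p: cosine(people[name], people[p]))

print(most_like("you"))  # alice shares the most interests
```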

How do you set up an adaptive technology to make sure it’s calculating what’s most important? This post will discuss the different ways you need to think about your content, learners, and system design.

  • Drivers – Which parts of the design are driven by humans (learners, instructors, others) or by machines?  
  • Scope and depth – What range of topics should be recommended? Should the learner know the topic inside and out or just enough to move to the next topic?
  • Outcome intended – What is the goal for the learner? What is the thing they need to learn?
  • Flexibility – What parts of the design must be delivered in a specific order or way? Which parts can change and be delivered in different orders?

What do you need to think about when you’re preparing to go down this path? Here I’ll describe the many considerations you will need to make in planning and execution.

  • Content strategy – What do you do with your existing content and how should you create new content?
  • Competency and skill taxonomy – What defines good, bad or mediocre performance? How are skills and competencies aligned to roles?
  • Data strategy – Where should data be captured and how should it be formatted? What is most important to your organization and where do you start?
  • Build vs Buy – There are many tools on the market, but do they do the thing you need them to do? When is it better to run it in house?

As you can see, there are many different ways to set up AI in learning and several things to consider in each setup. I’ll expand on the sections above in a series of posts over the next few weeks. If you would like to follow along, sign up for email updates here.

Finally, I know how much we love to add letters to the word learning. I brainstormed a few to get you started 😉 Tell me yours!

  1. aiLearning
  2. leArnIng
  3. ai.learning
  4. aLearning

Megan Bowe (Partner, MakingBetter)

Data is in Megan’s blood. (And on her skin, in the form of a Fibonacci spiral tattoo.) Perhaps this is why she is such an effective data charmer—connecting learning, work and algorithms with data. In addition to creating badass feedback loops, she creates systems with predictive analytics, personalization and adaptive technology at their core to help people grow.

Megan has been at the epicenter of the xAPI community since the beginning—helping launch xAPI at Rustici Software—and growing and adapting to this ever-changing market. She has worked as a product manager for and with learning tech companies, major publishers and large organizations, leaving a trail of products and standards evangelists in her wake. Megan also conducts research on data interoperability with the Data Interoperability Standards Consortium and studied Information Design and Technology at SUNY Institute of Technology as a graduate student. She edited the book “Investigating Performance: Design and Outcomes with xAPI” and speaks and writes on xAPI and AI in Learning and Development.
