I spend a great deal of time talking with instructional designers and learning & development leaders about xAPI, and I get a lot of questions. Most are new to the concept, and they’re taking in a giant fire hose of information all at once. It’s hard for these professionals to know where to start, particularly because adopting xAPI successfully requires a mindset shift.
There are two questions I frequently get when I introduce someone to xAPI.
- “What can I do with xAPI that I couldn’t do with SCORM?”
- “If I can track everything, I could get an enormous amount of data. How do I choose what data to track?”
To get started, those of us in the eLearning space are so used to working with what’s in the SCORM “box” that it’s sometimes hard to think about what else we might track with xAPI. The SCORM “box” contains the same small, neat set of somewhat useful data, tracked for every piece of eLearning in the LMS: Status, Score, Time, a Bookmark and some relevant dates. We all know that SCORM tracks a good deal more than this, but from the instructional designer’s perspective and from the learner’s perspective, these data are the most visible. This set is also what we can count on our rapid authoring tools to support and what we can count on our LMSs to report consistently.
In a SCORM world, we can run the same report on the elearning we deliver for sales training as we do for compliance, and safety, and soft skills, and whatever course we might be offering in the LMS. We can do this because the same data is being tracked via SCORM for each one of those elearning courses, regardless of how different those curricula may be. When “course completion” is the goal, this makes perfect sense. When “learning” is the goal, this starts to fall down a bit. That’s where xAPI comes in.
This is why the first question I get when introducing xAPI is typically something like:
“What can I do with xAPI that I couldn’t do with SCORM?”
Admittedly, two years ago, my answer to this question was, “NOW YOU CAN TRACK EVERYTHING!!!! ISN’T IT AMAZING!?!?!”
I quickly realized that this isn’t a very helpful answer.
A better answer can be found in the popular 70/20/10 model, which suggests that 70% of our knowledge comes from experience, 20% comes from interactions with others such as peers, mentors and managers, and a paltry 10% comes from formal learning. The percentages aren’t precise and probably vary widely depending on the situation, but the model is a good starting point. The common conclusion after looking at 70/20/10 goes something like this: “Billions of dollars are spent on training each year on what amounts to only 10% of what people know and do on the job, and we can’t even measure what happens in the other 90%! What’s wrong with this industry?!?!!?”
This sorry state is partly a result of the tools we have at our disposal. In a SCORM-based world, we have a uniform way to measure a small amount of data (the five pieces of SCORM data above) about some of what we offer (the eLearning) as formal education, which represents only 10% of what people are said to need on the job. There just aren’t commonly used tools to measure the rest at depth or scale.
A much better answer to this question, then, of what can be done with xAPI that couldn’t be done with SCORM is that, in an xAPI world, now we can track the other 90%, too. With xAPI we have the means to track far more about the classroom experience (part of the 10%) than we ever have before, and we can bring in data from the performance support tools (in the 70%), too. Social learning and mentoring (both in the 20%), and actual job performance and learners’ reflections on their experience (in the 70%) can all be recorded, stored and measured with xAPI and a learning record store (LRS).
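To make the “recorded and stored” part concrete, here’s a minimal sketch in Python of sending a statement to an LRS over HTTP. The endpoint URL and credentials are illustrative, not from any real LRS; the `X-Experience-API-Version` header and JSON body are what the xAPI specification requires for the statements resource.

```python
import base64
import json
import urllib.request

# Hypothetical LRS endpoint and credentials -- illustrative only.
LRS_URL = "https://lrs.example.com/xapi/statements"
USERNAME, PASSWORD = "api-key", "api-secret"

# A small statement describing a mentoring conversation (the 20%).
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "objectType": "Agent"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/activities/mentoring-session",
               "objectType": "Activity"},
}

# Statements are POSTed as JSON. Most LRSs use HTTP Basic auth, and the
# spec requires the X-Experience-API-Version header on every request.
auth = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    LRS_URL,
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": "Basic " + auth,
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it to a live LRS,
# which responds with the stored statement's ID.
```

The point isn’t the few lines of plumbing; it’s that any system that can make an HTTP request can report learning activity to the same store.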
This is the argument for xAPI that tends to grab attention. Once we’ve gotten this far in the conversation, the second question is usually:
“If I can track everything, I could get an enormous amount of data. How do I choose what data to track?”
While the 70/20/10 model opens up the landscape to a whole set of possibilities, it doesn’t provide much help in answering the question of what to measure and track.
We can start answering this question by looking at why we measure learning in the first place. There are two big reasons:
- We measure so we can detect an increase in performance (or not).
- We measure so we can improve the learning experience (in order to increase performance).
When we publish an eLearning course to an xAPI-conformant output, or record the path through a PDF document or a video with xAPI, or export data from the online performance support tool in an xAPI format, we will generate activity statements that describe the learning experience in more depth than most instructional designers are used to. When we are tracking a social discussion, a mentoring relationship or the trial-and-error permitted in a rich simulation, we are tracking a different kind of learning experience at a depth and scale like never before. In all of these cases, we are measuring the learning experience itself, at a more granular level than ever before. However, we are not yet measuring the resulting performance in the real world.
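For readers who haven’t seen one, an xAPI activity statement is a small JSON document with an “actor verb object” shape. Here’s a sketch in Python of a statement recording that someone watched a video; the learner, activity ID and video title are made up for illustration, while the verb URI comes from the ADL verb vocabulary.

```python
import json

# A minimal xAPI activity statement: who (actor) did what (verb)
# to what (object), with an optional result.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",  # illustrative learner
        "name": "Example Learner",
        "objectType": "Agent",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/activities/forklift-safety-video",
        "definition": {"name": {"en-US": "Forklift Safety Video"}},
        "objectType": "Activity",
    },
    "result": {
        "completion": True,
        "duration": "PT4M30S",  # ISO 8601 duration: 4 minutes 30 seconds
    },
}

print(json.dumps(statement, indent=2))
```

Because every statement, whether it describes a quiz answer, a forum post or a simulation attempt, follows this same shape, they can all sit side by side in the LRS and be queried together.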
Ultimately the goal of most corporate learning is to improve on-the-job performance. This has been the often-elusive goal of the learning and development industry for decades, at least in any sort of repeatable, sharable, scalable — and measurable — way.
We do have models in place already that can help.
Cathy Moore’s Action Mapping approach is an excellent place to start. (Of course, Cathy provides a far more in-depth look at this on her site, so what follows is a quick overview for the purposes of this article.)
With Action Mapping, you start in the center with a business goal (not “we need training!” but a real goal that organizational leaders care about), and work your way outward in concentric circles. The circle closest to the business goal contains the observable behaviors (actions) that successful performers take as they work to achieve the business goal. (Note that this ring does not include what people know, understand or have a profound and abiding awareness of. It’s all about action.) Only after you’ve defined the goal and what attaining that goal actually looks like do you then start designing practice activities for a learning environment. These practice activities mimic the actual job performance as closely as possible.
By now every subject matter expert in the room is cringing because we haven’t asked them for their binders, books, presentations, notes and deep knowledge of the subject at hand. That’s because we are only going to provide the knowledge that is required to do the activities. As a result, you’re not stuffing heads with knowledge that isn’t necessary for the behaviors at hand. (And, for goodness sake, if the desired behavior can be achieved without training, don’t do it!)
The model looks like this:
Let’s get back to our question: what do we track now that we have xAPI? The Action Mapping model provides a few answers and helps us talk with developers about what we want to measure. First, we can track much more about the learning experience than we ever could before, so we cover the outer two layers of the model in much more volume and detail.
We’re now able to measure what goes on at the job behaviors level, as well, by connecting work systems and data and exporting or converting data to xAPI. Sales activity data from the CRM, service calls and repair data, customer data from the call center, and more can be exported in xAPI format and laid alongside learning data, connecting learning data and activity data on the same plane.
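As a sketch of what “exporting CRM data in xAPI format” might look like, here’s a small Python mapping function. The CRM record’s field names, the verb URI and the extension key are all hypothetical; the point is that a work-system event maps cleanly onto the same actor/verb/object shape as the learning data.

```python
# Hypothetical CRM export record; the field names are illustrative.
crm_record = {
    "rep_email": "rep@example.com",
    "activity": "closed_deal",
    "opportunity_id": "OPP-1042",
    "amount": 25000,
    "closed_at": "2016-03-14T16:20:00Z",
}

def crm_to_xapi(record):
    """Map a CRM activity record onto the xAPI actor/verb/object shape."""
    return {
        "actor": {"mbox": "mailto:" + record["rep_email"],
                  "objectType": "Agent"},
        "verb": {
            # Illustrative verb URI; a real project would agree on a
            # shared vocabulary so learning and CRM data line up.
            "id": "https://example.com/verbs/closed",
            "display": {"en-US": "closed"},
        },
        "object": {
            "id": "https://example.com/crm/opportunities/"
                  + record["opportunity_id"],
            "objectType": "Activity",
        },
        "result": {"extensions": {
            # Extensions carry domain data the core spec doesn't model.
            "https://example.com/extensions/deal-amount": record["amount"],
        }},
        "timestamp": record["closed_at"],
    }

stmt = crm_to_xapi(crm_record)
```

Once sales activity arrives in this form, “completed the negotiation course” and “closed a deal” are two rows in the same store, which is exactly what lets us lay learning data and performance data on the same plane.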
This model continues to hold true as we bring in results data from business systems to measure our goal attainment: revenues, accident data, customer conversions, compliance violations. This is where we are able to prove (or disprove) value to the organization in terms of our learning efforts.
A few other models offer similar structure around the question of what to measure, now that we can.
If your organization uses Kirkpatrick’s Levels of Evaluation, you’ll find that xAPI offers a consistent data format that can be used to measure, store and evaluate learning that was previously not possible using only SCORM. Taking a few liberties here by laying the Action Mapping approach linearly alongside a version of the Kirkpatrick levels, we can get some insight into the kinds of things we can now be measuring and evaluating with xAPI.
Conrad Gottfredson and Bob Mosher’s Five Moments of Learning Need (and my own Nine Moments) help bridge the gap between formal instruction and on-the-job application in what they call “the Moment of Apply.” In a SCORM world, though, there is a huge gap in our ability to measure what’s going on in that moment when an individual needs information to perform a task, one that xAPI can fill. Here’s what that looks like:
The bottom line
In a post-SCORM world, the formal learning we design and deliver will provide structure and context. Learners will engage with peers to confirm, troubleshoot, and share new ideas. And they’ll risk, try, fail, succeed and reflect on their experience on the job. All of this will be recorded and stored with xAPI. This data will be used to evaluate the learners, the learning experience, and, if we’re successful, demonstrate an increase in performance.
In a future article, we’ll address the next two questions as we come to grips with what xAPI looks like in the real world:
- “Can I do this with my current courseware developer tools?”
- “Can I do this with my current LMS?”
MEGAN TORRANCE (President, TorranceLearning)
Megan Torrance is the Chief Energy Officer of TorranceLearning, an elearning design and development firm outside of Ann Arbor, MI. She has spent over two decades knee-deep in projects involving change management, instructional design, consulting and systems deployment. Megan thrives on design excellence and elegant project management. And coffee. She and the TorranceLearning team have developed the LLAMA project management approach, blending Agile with excellent instructional design techniques. TorranceLearning projects have won IELA and BrandonHall awards, and the 2014 xAPI Hyperdrive contest at DevLearn.