I came into instructional design almost 15 years ago, first doing freelance work for dominKnow, then joining the company full-time. Back then we were creating both custom eLearning for clients and more generalized off-the-shelf courses, using our own content tool.
It was one of the earliest Learning Content Management Systems (LCMS), and we introduced it to the market in 2002. We’ve used it for our own content development work ever since. Being constant users of our own technology has been one of our most important drivers for features and improvements. With the addition of responsive authoring, we’ve taken the tool through seven major versions since 2002.
The core idea of an LCMS is content re-use. For a long time the Wikipedia article for Learning Object (LO) offered three or even four definitions, ranging from any asset, like an image, that can be reused, up to the idea of an LO as a “package” that includes learning content, practice exercises and an assessment for a specific learning objective or topic. The concept our tool has always supported is that an LO is almost a mini-course on a specific topic.
Data is going to be pretty important in any re-use model. When we first began working with LMSs, the AICC data model enabled a learner to launch a course in an LMS, which would basically pull the course in directly from our LCMS to run it for the learner.
That approach enabled us to capture course activity data at a very granular level, which was brilliant for an Instructional Designer. We could pull up reports down to the choice level on test questions, for example, to identify response patterns that could help us improve the course overall. If 75% of people were not just getting a question wrong but actually selecting one particular distractor, that would be pretty good diagnostic information for an ID to use. It opened a line of questions IDs could test: was the question badly written, or was the content itself not clear enough?
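A rough sketch of that kind of distractor analysis (the data layout, function name, and the 75% threshold here are illustrative, not drawn from any specific AICC or LMS report format):

```python
from collections import Counter

def flag_suspect_distractors(responses, threshold=0.75):
    """Given all learner responses to one question, return any wrong
    choice selected by at least `threshold` of respondents.

    responses: list of (chosen_option, is_correct) tuples.
    """
    total = len(responses)
    wrong = Counter(choice for choice, correct in responses if not correct)
    return [
        (choice, count / total)
        for choice, count in wrong.items()
        if count / total >= threshold
    ]

# Illustrative data: 8 of 10 learners picked distractor "B".
responses = [("B", False)] * 8 + [("A", True)] * 2
print(flag_suspect_distractors(responses))  # [('B', 0.8)]
```

A flagged distractor like this doesn’t tell you *why* learners converge on it, but it tells you exactly where to start asking the question-versus-content questions described above.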
And Along Came SCORM
Within a couple of years, though, we moved to the SCORM model, following the overall industry trend. And in that approach, SCORM packages were published out of our LCMS and uploaded to the LMS servers. That shift from content being launched directly from our LCMS to the publishing of content out of our system meant we no longer had direct access to the content at the time of use. With SCORM, we couldn’t track the same instructional design data.
The promise of SCORM 2004 was that an LMS could draw a greater level of data from a SCORM package, so it could then generate these types of reports.
In reality, that’s been a mixed bag overall, for a number of reasons.
First, not all LMSs have added this level of reporting. Even today, we hear from clients that they’re still publishing SCORM Version 1.2 packages for their LMS. The explanation is usually along the lines of “That’s what our LMS handles best.” So in some cases the SCORM 2004 data isn’t even available or being captured.
Second, among the LMSs that did implement that level of reporting, it is usually only for test questions created within their own assessment tools.
Lastly, some larger organizations have a separation of content authoring and LMS access, which means developers working on a course don’t have the ability to access reports and data directly even if the LMS can provide it. If it isn’t easy to access, the effect is that it’s really just less available.
So there are some real gaps in how possible this type of reporting really is.
xAPI and A Broader Vision of Content Re-use
In response to the constraints we faced with SCORM, we began looking into an Activity Stream approach as a way of supporting other tracking models when the work on xAPI started. Once xAPI existed and started to see adoption, it just made sense to align with it.
The timing was perfect for us as well, as we had begun evolving our idea of re-use to go beyond being limited to just courses in an LMS. Our model of a re-usable Learning Object as content package originally focused on making it easy to share these small content packages across different versions of a course, with the assumption that these would always be published to an LMS for delivery. In the last five years there’s been such an explosion of mobile devices along with a growing recognition that organizational learning can’t just focus on formal learning.
A content re-use model can easily be extended outside of being “just for” courses for formal learning to also include job aids or performance support for informal learning, or even print- or document-based content for either formal learning or informal learning needs.
For example, you may have a course that covers 20 topics, and say a half dozen of those very specifically cover tasks the learner needs to carry out in their role. You publish out the LMS package of 20 topics, then create a new package of just the six tasks and publish that out to a web app. The learner takes the formal course in the LMS to learn the new tasks, plus has easy access to the more-focused content package when they start to actually apply that new knowledge on the job, whether that’s two days or two months later. And they can hit that informal learning job aid from their computer or from a mobile device, whichever is easier or closer to hand in their time of need.
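The key to this model is that the two packages share the same source objects rather than holding copies. A minimal sketch of that idea (the topic IDs and structure are hypothetical, not how any particular LCMS stores content):

```python
# Hypothetical sketch: learning objects shared by reference, so an edit
# to a topic flows into every package that includes it.
topics = {f"topic-{n:02d}": {"id": f"topic-{n:02d}", "title": f"Topic {n}"}
          for n in range(1, 21)}

# The full formal course package references all 20 topics.
lms_course = [topics[f"topic-{n:02d}"] for n in range(1, 21)]

# The job-aid web app package references just six task-focused topics.
task_ids = ["topic-03", "topic-05", "topic-08",
            "topic-11", "topic-14", "topic-17"]
job_aid = [topics[t] for t in task_ids]

# Updating the single source topic updates it everywhere it is re-used.
topics["topic-03"]["title"] = "Topic 3 (revised)"
assert job_aid[0]["title"] == "Topic 3 (revised)"
assert lms_course[2]["title"] == "Topic 3 (revised)"
```

That single-source behavior is what keeps the LMS course and the job aid from drifting apart as the content gets maintained over time.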
That broader re-use is enabled by xAPI, giving us a data model to be able to track the use of that learning content so we can see not just that specific learners have used it, but, again, to identify patterns of use to help improve the content – and do this no matter whether a particular piece of content is being used for formal or informal learning, or even both.
Back to the Future
This re-use model means we can know that the same piece of content can be used in many contexts, and xAPI allows us to include that context as part of what is being tracked. Because even though it’s the same content, it’s pretty likely that learners will use or consume that content differently in a formal course than as part of a web-app job aid.
For example, a media file like a video on a page has the same identification for us no matter where it’s being used – we know it’s the same video, say, covering the steps to carry out a task. In a formal course in the LMS, maybe they watch the whole thing because the task is new to them. But in a job aid as part of a web app, maybe they only watch until they solve the step they need. xAPI adds the critical layer of context to the tracking and reporting we can do, so we can identify these patterns as well. If there’s a pattern that shows that users of the video in the job aid context typically only watch the first two minutes, well that could indicate that what they really need to solve is whatever’s covered in the video at or up to that mark. So maybe the process or task itself could be improved, to resolve the issue altogether.
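As a rough illustration of how that context travels with the tracking data, here is what an xAPI statement for that partial job-aid viewing might look like. The actor, activity IRIs, and grouping structure are made-up placeholders, not dominKnow’s actual statements; the `played` verb comes from the xAPI video profile.

```python
import json

# Sketch of an xAPI statement: same video activity ID in every context,
# with the job-aid context carried in contextActivities. All IRIs below
# are hypothetical placeholders.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {
        "id": "https://w3id.org/xapi/video/verbs/played",
        "display": {"en-US": "played"},
    },
    "object": {
        "id": "https://example.com/content/task-video-123",
        "definition": {"name": {"en-US": "Carrying Out the Task"}},
    },
    "result": {
        # The learner stopped roughly two minutes in (ISO 8601 duration).
        "duration": "PT2M5S"
    },
    "context": {
        "contextActivities": {
            # Same video, but launched from the job-aid web app rather
            # than the formal LMS course.
            "grouping": [{"id": "https://example.com/webapp/job-aid"}]
        }
    },
}

print(json.dumps(statement, indent=2))
```

Because the object ID stays constant while the context changes, a report can group all `played` statements for that one video and then split them by context to surface exactly the two-minute drop-off pattern described above.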
So xAPI bridges a lot of different things for us, providing a model for improving reporting on formal learning from the LMS through to tracking how content created in our platform is used when deployed as informal learning to help employees carry out the tasks that are really what their job is all about.
And what’s really cool, from my perspective, is that xAPI is helping us both return to an older value we once offered and branch off into new value we can use to help organizations create better learning opportunities throughout their learning ecosystem.
CHRIS VAN WINGERDEN (Vice President Learning Solutions, dominKnow Learning Systems)
Chris Van Wingerden is a life-long learning geek who has had careers in bookselling, journalism and, for the past 15 years, in the eLearning/mLearning/all kinds of learning world. Chris has degrees in Adult Education and English literature, and is currently a candidate for the Institute for Performance and Learning’s (formerly CSTD) CTDP designation. Chris is Vice President Learning Solutions at dominKnow Learning Systems.