There’s something compelling about business intelligence, and analytics in general, as a way to pinpoint anomalies in an organization. Problems, inefficiencies, and deviations from expectations require a person to dig deep into many processes, roles, and activities to uncover, without getting distracted or chasing the data down a single rabbit hole. In any organization, clues, opinions, and politics influence first the collection, and then the interpretation, of data. Skills Intelligence is a framework for exploring data about learning and growth.
There are a lot of counts in business intelligence.
- How many people in a role.
- How many hours they worked.
- How many lines of code they wrote.
- How many bugs were closed.
- How many story points in a sprint.
- How many results were hit in the OKRs.
These counts do sometimes point to performance, but they’re blunt. They explain what happened, not why or how. Take lines of code: some people read more code as more work done, while fewer lines of code accomplishing the same thing is arguably better code. The same goes for hours worked or bugs closed: if the hours weren’t well spent, we shouldn’t assign meaning to them.
Whatever the interpretation of data like this, the training department receives instructions for blanket training approaches. Teach them how to be more productive. Teach them how to file better bugs. Teach them how to better estimate their work. Learning & Development (L&D) groups don’t know whether these are the real problems, or whether more teaching on these topics will help. In this model we don’t have enough detail or granularity to make the case for personalized skill development, or for building teams around complementary skill sets, but it’s time to change that. L&D has worked in a reactive model. We can shift this by bringing a new set of data to the business table, a set of data L&D is the right team to deliver.
A skills-based approach
The core questions here are: What skills are required to do the work at hand? Are those skill sets available and focused on the task? If we shift the conversation to be about workforce effectiveness,
- We need to know what’s being built.
- We need to know what roles we have on staff.
- We need to know which skills those roles comprise, and how those skills roll up into competencies for the organization.
- We need to know what a job task looks like when it’s done well.
- We need to know what a job task looks like when it could improve.
- We need to focus on skills we can act to improve, and avoid getting lost in workplace dysfunction. Time spent in meetings is not productive, but avoiding meetings breaks down communication. We can teach communication; we can’t teach the organization not to micromanage.
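The role, competency, and skill relationships above can be sketched as a simple data model. This is a minimal illustration under my own assumptions; the class and skill names are invented, not a standard schema:

```python
from dataclasses import dataclass

# A minimal sketch of the role -> competency -> skill rollup described
# above. All names here are illustrative assumptions.

@dataclass(frozen=True)
class Skill:
    name: str

@dataclass
class Competency:
    name: str
    skills: frozenset  # a collection of skills with meaning to the organization

@dataclass
class Role:
    title: str
    competencies: list

    def all_skills(self) -> set:
        # A role's skill set is the union of its competencies' skills;
        # many skills are shared across competencies.
        return set().union(*(c.skills for c in self.competencies))

sql = Skill("writing SQL")
api = Skill("designing API endpoints")
comm = Skill("communicating status clearly")

data_dev = Competency("database development", frozenset({sql, api}))
teamwork = Competency("collaboration", frozenset({comm, api}))

backend = Role("backend developer", [data_dev, teamwork])
# Four skill memberships, but only three distinct skills: `api` is shared.
print(sorted(s.name for s in backend.all_skills()))
```

Note how the shared skill appears in two competencies but counts once for the role; this is the fine grain that competencies alone can’t express.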
If we take a scenario and use a skills intelligence approach, we can tell the business a lot about what the counts they’re reading mean. We could show stakeholders that the people with the strongest database and API development skills are spending 60% of their week in meetings. We could also see when they’re assigned to interface development, where their skills are weaker. We might find the product manager consistently writing stories too big for the developers’ sprints, so meetings are required to break those stories down. All of these people have different skill gaps and would appreciate targeted training to help them grow as unique individuals within the organization. Offering people specific training on the specific areas where they could grow is what a great learning department does. This is our opportunity.
Skills Intelligence allows the business to weigh growth time against business priorities. We have to assume people want to grow; they don’t just want to keep writing code for interfaces because they’re good at it. People like challenges. Working on challenges should be encouraged, and the business should recognize such activities as skill building, but when something needs to be done fast, it should be quick to find the right person for the job.
In the scenario where skills data was not present, the business might have issued mandates to the learning department. Teach them how to be more productive. Teach them how to file better bugs. Teach them how to better estimate their work. Well, considering the information gathered from skills intelligence above, following these mandates won’t change much of anything. Rather than letting this scene play out over and over again, let’s look at how we enable the new L&D. Let’s figure out how to escape from the business mandating formal learning and, instead, take charge of helping people learn and develop in their jobs.
It’s not a question of what they will do; that’s a matter of management and personality. We’re starting to ask: what could they do? And how can we help?
How do we measure skills?
Skills, the abilities to do things well, are elusive. With good data we can find meaningful indicators of success, struggle, and failure. Skills aren’t the same thing as competencies: a competency is a collection of skills that has meaning to the organization, tied to a particular role and context. Many skills are shared across competencies. The point is, competencies are too coarse-grained.
The thing is, learning technologists haven’t had an efficient and effective way to measure skills before. Please don’t tell me your LMS does this. It doesn’t. Answering a test question tells us very little about a skill. Reading content and answering an assessment question creates a piece of data I would call assumptive: an assumption that the person may have grown in that skill area through the interaction. Unless the skill being measured is literally answering multiple-choice questions or using a video player, the question answered or the video played proves very little.
So, instead, let’s talk about assertive data. This data comes from an activity where a skill is exercised. For a utility worker this could be gathering a reading from a meter box. For a developer, it could be writing a piece of code with a passing unit test. For a bee, it could be gathering the right amount of the right kind of pollen.
I swear I’m not tracking bees (but we could).
All assumptive and assertive data is highly contextual. To gather and properly identify activities we need both learning designers and people who know the roles very well. Obviously, jobs where people do unique tasks every day are not well suited to analytics and data collection, though in a well-orchestrated environment, the ability to fish for information is itself a core skill and could be a key performance indicator (KPI).
This diagram outlines the flow from assumptive and assertive data as building blocks for skills, skills to competencies, and competencies to roles.
This is not easy to prescribe. Assumptive data is fairly easy to identify. The training department has been paid to create learning environments and content for years. So, we might ask, “What in that environment and content is useful to track?” Well… you might not like this answer, but the only data points that are useful to track confirm that learners were there. They saw it, they completed it, they passed it, whatever. This isn’t to say that content isn’t important; it’s that the didactic, SCORM-based courses delivered in your LMS have never recorded much that helps evaluate skills, because that’s not what SCORM was designed to do, and so your LMS doesn’t do it either.
Having said that, let’s move on to a more informative question: what skills are important? For the role? For their goals? To a competency? This is where it gets interesting. Interviews are incredibly helpful here. You can ask what people do and what’s expected of them. You can identify suspected key performance indicators. Those indicators may not be key now, but later they may prove more indicative than expected.
You’ll start to find patterns. Days full of changing activities which are regarded as important. Skills underlying those activities which make a person capable of accomplishing them to varying degrees of success. These are the activities which should be tracked and the skills for which content should be designed.
This is the beginning of a feedback loop, where a person interacts with content and an assumptive statement is made. For example, Megan watched a video about casting onto a knitting needle and answered a question about casting on correctly. The assumption is that Megan increased her skill at casting on by watching the video and demonstrated some knowledge by answering the question. That’s two assumptive statements. These need to be followed by an assertive statement about Megan’s ability to actually cast a stitch. It could be made by a very smart needle, but more simply it could come from an observer recording the success of Megan’s attempt to cast on.
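Megan’s loop can be sketched as three records. The field names, verbs, and record shape here are my own illustrative assumptions, not xAPI or any other standard:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record shape for the Megan example; field names and
# verbs are assumptions, not a standard vocabulary.
@dataclass
class Statement:
    actor: str
    verb: str
    activity: str
    skill: str
    kind: str                       # "assumptive" or "assertive"
    success: Optional[bool] = None  # only meaningful for assertive records

statements = [
    # Two assumptive statements: content consumed, question answered.
    Statement("Megan", "watched", "casting-on video", "casting on", "assumptive"),
    Statement("Megan", "answered", "casting-on question", "casting on", "assumptive", success=True),
    # One assertive statement: the skill actually exercised and observed.
    Statement("Megan", "attempted", "casting a stitch", "casting on", "assertive", success=False),
]

assumptive = [s for s in statements if s.kind == "assumptive"]
assertive = [s for s in statements if s.kind == "assertive"]
print(len(assumptive), "assumptive,", len(assertive), "assertive")
```

The separation matters: the assumptive records say Megan was exposed to the skill, while only the assertive record says anything about whether she can do it.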
Here’s how this loop would be documented in the Skills Intelligence Map.
In reality, this statement would record that Megan failed miserably at casting on and it would be reported by a cackling 9-year-old.
When this loop is repeated many times about many people, we can start to build analytics about the content’s effectiveness (or the needle’s ineffectiveness).
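One simple form that analysis could take is comparing assertive success rates between people who completed the content and people who didn’t. The data below is entirely invented for illustration:

```python
from collections import defaultdict

# Invented sample data: (person, completed_content, assertive_success).
# Real analytics would draw on many recorded loops like Megan's.
observations = [
    ("Megan", True, False),
    ("Ada", True, True),
    ("Grace", True, True),
    ("Linus", False, False),
    ("Edsger", False, True),
    ("Barbara", False, False),
]

# completed_content -> [successes, attempts]
totals = defaultdict(lambda: [0, 0])
for _, completed, success in observations:
    totals[completed][1] += 1
    if success:
        totals[completed][0] += 1

for completed in (True, False):
    wins, tries = totals[completed]
    label = "completed content" if completed else "skipped content"
    print(f"{label}: {wins}/{tries} assertive successes")
```

If the success rate for people who completed the content isn’t meaningfully higher, the assumptive statements were just assumptions, and the content needs rework.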
Skills Intelligence is the process of evaluating skills as parts of competencies and roles through tracking activities which are designed to increase a skill and tracking opportunities to exercise that skill. This loop of assumption and assertion allows for analytics to be developed about learning design, skill growth, and ability to perform a job role. When combined with Business Intelligence data, it creates a rich environment through which we can explore ways to affect business growth and formulate initiatives to achieve business goals.