The Generation Challenge: Articulating the Problem Definition

By Ben Erlandson (CTO, McKinsey Social Initiative)

This article extends a conversation held at xAPI Camp Amazon in Seattle in July 2015, where we discussed the role xAPI could play in the scaling of the Generation employment program.  I’ll start by quickly describing McKinsey Social Initiative and the Generation program, followed by a brief exploration of the Generation experience and its associated skills mapping.

As we work towards achieving scale, I consider this issue from the perspective of learning, assessment, and technological platforms, starting with a context for learning systems design.  I then explore assessment and xAPI as a mechanism for showing the efficacy of Generation (especially what and how our students are learning!), and consider the idea that one mobile app could support the entire Generation experience, as a way to pragmatize our problem definition for scaling up to (and beyond) one million users.

McKinsey Social Initiative and Generation

McKinsey Social Initiative is a non-profit that brings together expert problem solvers to develop and implement innovative approaches to complex social challenges.  The first program of McKinsey Social Initiative, called “Generation,” is focused on ending youth unemployment worldwide.  Specifically, Generation aims to close the skills gap for young people through a program that places disconnected young adults in jobs and gives them the skills and support they need to achieve lifelong personal and professional success, fundamentally changing their life trajectories.

There are six components to the Generation approach:

  • Student recruitment based on several intrinsic factors, level of demonstrated effort (during the application process), and professional standards identified by employers in our chosen sectors
  • Direct employer engagement in the form of conceptual buy-in that leads to guaranteed interviews for program graduates, as well as a role as subject-matter experts in the formation of Generation curricula
  • 8-12 weeks of “boot camp” style training in technical skills, behavioral skills, mindsets, and employment essentials
  • Individual mentoring for each student that continues as they graduate and launch a new career, as well as social support services during training
  • A peer work-life community that follows the student after graduation into the workplace, both online and offline
  • A demonstrable return on investment for both employers and students

Generation is currently in cities across five countries: India, Kenya, Mexico, Spain, and the United States.  Our intention is to continue expanding our locations in each country, and to increase the number of class cohorts per year in each location, helping us to achieve our goal of one million Generation graduates by 2020.

Generation is starting by training young adults for professions in four focused sectors: healthcare, retail/hospitality, technology, and trade skills.  Examples of existing programs include a certified nursing assistant program (healthcare), a cashiering program (retail), and a digital marketing program (technology).  Examples of potential trade skill professions include electrical, plumbing, and welding.

Across sectors, countries, programs, and cohorts, the desired end result is a proven Generation methodology that can be shared with others to expand the impact to millions more youth around the world.  Our lofty challenge: this methodology should work anywhere in the world.

Phases of the Generation Experience

From the student perspective, there are three primary phases of Generation: intake, training, and the job/career track that happens after graduation.  There is also, technically speaking, a precursor phase of awareness that involves substantial marketing and mobilization efforts to reach our targeted audiences in each of our countries.

[Figure: The phases of the Generation experience, with key learning and assessment processes mapped across them]

Intake is, essentially, a 2-4 week, multi-stage funneling process: a student applies to Generation, goes through several stages of screening tests and interviews, and reaches a preliminary acceptance status in which he or she must complete several types of “pre-work” that indicate readiness to participate in a training cohort.

The training typically takes 8-12 weeks depending on sector, profession, and geography.  The program is “boot camp” style, typically running all day Monday-Friday.

Upon graduation, Generation alumni interview with our employer partners, and almost all of them are placed in jobs.  At this point, our intention is to engage directly with and support our alumni throughout the next fifteen years of their careers, checking in at intervals of decreasing frequency over time and gathering important data to measure the impact of the program on their long-term well-being.

When we plan for learning and assessment across these three phases of Generation, we can distinguish among evaluation, assessment, measurement, learning, and instruction.  We can then think about when each of these processes is happening, or needs to happen, as a student moves through the phases.  The diagram above gives a broad-stroke estimate of when these processes should be happening.  As we consider how they should manifest (what should be happening), we must consider the modalities available, as well as the relationships between students and other key stakeholders: teachers, mentors, employers, and Generation program administrators.

Skills Mapping

We must also consider the skills we are teaching through the program, and how these relate to learning and assessment.  The three skill sets (technical, behavioral, and mindset) for each profession, and especially the ways in which they overlap, are part of what differentiates Generation from other employment programs.

Technical skills are specific to a given profession (can you do this job?), behavioral skills are generalizable across most professions in a sector, or perhaps across sectors (are you generally employable?), and mindset skills are generalizable across all sectors and professions (are you career-ready and future-focused?).  In fact, the same four mindsets appear in every Generation program.

This table shows a sampling of the three skills maps from one of our programs that is currently under development.

Technical Skills                     Behavioral Skills    Mindset Skills
Problem Solving                      Attitude             Personal Responsibility
Workplace Applications               Communication        Future Orientation
Product Knowledge                    Engagement           Growth
Standard Protocols and Procedures    Time Management      Persistence
Physical Stamina

As noted above, part of what makes Generation unique is the combination of, and the overlap across, these maps.  When thinking about this in terms of learning and assessment (especially evidence-based performance assessment over time), how does a student’s demonstration of improvement in one or more technical or behavioral skills also demonstrate improvement in mindset skills?  To frame this question in more specific terms: what can we infer about a student’s future orientation or personal responsibility from evidence we gather about his or her performance(s) demonstrating the ability to use workplace applications to solve real-world problems (specific to the job at hand) while engaging and communicating with superiors or clients in a professional manner?

In addition to the necessity of assessment to support learning and instruction, identifying these types of complex evidence is one great way to show the efficacy of Generation, as a form of return on investment for both learners and employers.  So, how do we position ourselves to scale the demonstration of efficacy?  The answer: evidence-driven assessment baked into a solid learning system using xAPI.
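
To make “baked in” a bit more concrete before going further, here is a minimal sketch of recording one piece of performance evidence, assuming a Python client and a hypothetical LRS endpoint and credentials.  The statement structure, the /statements resource, and the version header come from the xAPI specification; every identifier below is invented for illustration.

    # A minimal sketch: recording one piece of performance evidence in an LRS.
    # The endpoint, credentials, and identifiers are hypothetical.
    import requests

    LRS_ENDPOINT = "https://lrs.example.org/xapi"  # hypothetical LRS
    AUTH = ("username", "password")                # basic auth credentials for the LRS

    statement = {
        "actor": {"mbox": "mailto:student@example.org", "name": "Sample Student"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "https://generation.example.org/activities/cashiering-sim-task-1",
            "definition": {"name": {"en-US": "Cashiering simulation, task 1"}},
        },
    }

    response = requests.post(
        LRS_ENDPOINT + "/statements",
        json=statement,
        auth=AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    print(response.status_code, response.json())  # the LRS returns the stored statement id(s)

A successful POST returns the id(s) the LRS assigned to the stored statement(s); everything downstream (feedback, scoring, analytics) reads from this evidence stream.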

Showing Efficacy: Four Processes, Four Spaces, and xAPI

On my personal website, I’ve explained the basic groundwork for learning systems design as four pillars: people, information, technology, and space.  Considering the Generation experience as a learning system where people use various technologies to access and produce information in a variety of spaces, we can then conceptualize how to use evidence-centered assessment design (ECD) and the four-process architecture for assessment delivery (4PA) in coordination with the four spaces of simulation-based assessment.

Briefly, the four processes of 4PA are activity selection, presentation process, response processing, and summary scoring, all of which involve data moving back and forth between people and the task/evidence composite library (which, in xAPI terms, includes the Learning Record Store).  Response processing leads to task-level feedback about a learner’s performance, and summary scoring leads to summative feedback.

The four spaces of simulation-based assessment are: problem, tool, solution, and response.  The problem space is the conceptual boundary for demonstration and the tool space is defined by the tools available for solving the defined problem.  The solution space is defined by a learner’s interactions with the tools at hand while generating a solution to the identified problem.  The response space is defined by the artifact(s) submitted (actively or passively) for assessment, as a complete or incomplete solution to the identified problem.
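
For readers who think in data structures, here is one possible way to restate these four definitions in code.  This is purely illustrative; none of the type names or fields below come from a specification, they simply mirror the prose above.

    # One illustrative way to model the four spaces; the names are mine, not from a spec.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProblemSpace:      # conceptual boundary for demonstration
        description: str
        constraints: List[str] = field(default_factory=list)

    @dataclass
    class ToolSpace:         # tools available for solving the defined problem
        tools: List[str] = field(default_factory=list)

    @dataclass
    class SolutionSpace:     # the learner's interactions with the tools over time
        interactions: List[str] = field(default_factory=list)

    @dataclass
    class ResponseSpace:     # artifact(s) submitted, actively or passively, for assessment
        artifacts: List[str] = field(default_factory=list)
        complete: bool = False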

In the diagram below, we explore how the four processes and four spaces can be combined for the goal of authentic assessment of practice-based performance.  Here, I like to envision that I’m looking down upon the problem, tool, solution, and response spaces as a series of plates stacked on top of each other.

[Figure: The four processes and four spaces, combined]

Here, the result of the system’s activity selection algorithm(s) is a decision (based on the learner’s demonstrated abilities recorded in the LRS) about what the problem space should look like and which tools should be available to the learner.  These are made manifest to the learner through the presentation process, which may also include instructions for the learner as to how he or she should proceed with the given task.

As the learner interacts with the provided tools, he or she is automatically creating the solution space, leading to the response space, and the system can engage repeatedly in response processing to assess the learner’s performance and generate task-level feedback.  During response processing, the system can take into account elements of the problem, tool, solution, and response spaces as variables relevant to assessing the qualities of this learner’s performance.  These elements are recorded as data in the task/evidence composite library (the LRS).  The summary scoring process functions in essentially the same manner once the learner is “done” with the solution and officially submits his or her response for review, or simply stops working on it.
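
As a hedged sketch of what response processing against the LRS could look like, the snippet below queries a learner’s recent statements for the current activity and applies a deliberately naive placeholder rule to produce task-level feedback.  The agent, activity, and since query parameters are standard xAPI; the endpoint, credentials, and feedback logic are assumptions.

    # Sketch of response processing: read evidence from the LRS, emit task-level feedback.
    # The scoring rule and identifiers are placeholders; the query parameters are standard xAPI.
    import json
    import requests

    LRS_ENDPOINT = "https://lrs.example.org/xapi"  # hypothetical LRS
    AUTH = ("username", "password")

    def task_level_feedback(learner_mbox: str, activity_id: str, since: str) -> str:
        params = {
            "agent": json.dumps({"mbox": learner_mbox}),  # agent filter is a JSON-encoded object
            "activity": activity_id,
            "since": since,  # ISO 8601 timestamp, e.g. "2015-07-01T00:00:00Z"
        }
        resp = requests.get(
            LRS_ENDPOINT + "/statements",
            params=params,
            auth=AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
        )
        statements = resp.json().get("statements", [])

        # Placeholder rule: count attempts the system judged successful (result.success).
        successes = sum(1 for s in statements if s.get("result", {}).get("success"))
        if not statements:
            return "No evidence recorded yet for this task."
        if successes == len(statements):
            return "All attempts succeeded; consider a harder problem space."
        return f"{successes} of {len(statements)} attempts succeeded; review the tool walkthrough."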

Now let’s dive a little deeper and map a piece of the Generation experience across the four spaces and see how xAPI statements could be defined over several periods of time in a learner’s performance.  How many relevant statements could be generated during a given period?

[Figure: 4PA responses and xAPI statements mapped across the four spaces over three solution periods]

In this diagram, the learner engages in the same solution process three times (over two days) within a problem space, using the same tool space each time.  For each solution, the tool usage by the learner is indicated with horizontal lines over time, eventually transitioning to a response space that also occurs over time (as opposed to a static result).  The entire duration of each solution response time period can be processed as a response (of the 4PA variety), sent to the LRS as any number of xAPI statements, and immediately processed (as relevant) for task-level feedback.

As each tool interaction task is processed as a 4PA response, we can assume that, in xAPI statement terms, the actor is the learner, the verb is the action taken with the tool, and the object is the outcome of the tool use (completed or ongoing).  But what about the result, context, and attachments?  This seems to be where things can get a little complicated (and resource intensive) for an assessment system architecture.
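
To make that mapping concrete, here is what a single (hypothetical) tool-interaction statement might look like with result and context filled in.  Every IRI below is invented for illustration; only the field names come from the xAPI specification.  Attachments, such as the solution artifact itself, are where the resource intensity shows up, since they travel with the statement as multipart payloads.

    # Hypothetical tool-interaction statement: actor = learner, verb = tool action,
    # object = outcome of the tool use.  All IRIs below are invented for illustration.
    tool_interaction_statement = {
        "actor": {"mbox": "mailto:student@example.org"},
        "verb": {
            "id": "https://generation.example.org/verbs/scanned",  # hypothetical verb
            "display": {"en-US": "scanned"},
        },
        "object": {
            "id": "https://generation.example.org/activities/pos-terminal-checkout",
            "definition": {"name": {"en-US": "Point-of-sale checkout (ongoing)"}},
        },
        "result": {
            "success": True,
            "duration": "PT2M30S",  # ISO 8601 duration of this interaction
            "extensions": {
                # Hypothetical extension tying the interaction to a skills-map entry.
                "https://generation.example.org/extensions/skill": "workplace-applications"
            },
        },
        "context": {
            "registration": "0d4a7cce-9d67-4e15-8b7a-1a2b3c4d5e6f",  # groups related attempts
            "platform": "Generation cashiering simulation",
        },
        # Attachments (e.g. a screenshot of the solution artifact) would be added via a
        # multipart/mixed statement request; omitted here for brevity.
    }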

And now, a few pressing questions, starting with one that I think nags us all: which tool interactions warrant xAPI statements?  What should our verb vocabulary look like?

During Time Periods 2 and 3, should we consider the revision of xAPI statements captured during Time Period 1 as the learner conducts similar tasks in pursuit of varied solution and response spaces?  Should we capture xAPI statements at a higher level, such as statements about patterns of task completion within a solution/response space, or patterns of solution/response engagement over time?

How can we use these xAPI statements to infer growth in our learner across our three skills maps?
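
One hedged sketch of an answer: if each statement carries a skill tag like the hypothetical extension shown above, growth can be approximated by trending result scores per skill over time.  The aggregation below is deliberately naive (it compares the average scaled score in the later half of a learner’s attempts against the earlier half) and is offered only to make the inference question concrete.

    # Naive sketch: infer per-skill growth from scaled scores carried in statements.
    # Assumes each statement carries the hypothetical skill extension shown earlier.
    from collections import defaultdict
    from statistics import mean
    from typing import Dict, List

    SKILL_EXT = "https://generation.example.org/extensions/skill"  # hypothetical IRI

    def skill_growth(statements: List[dict]) -> Dict[str, float]:
        """Return (later average - earlier average) of result.score.scaled per skill."""
        by_skill: Dict[str, List[float]] = defaultdict(list)
        for s in sorted(statements, key=lambda st: st.get("timestamp", "")):
            result = s.get("result", {})
            skill = result.get("extensions", {}).get(SKILL_EXT)
            scaled = result.get("score", {}).get("scaled")  # -1.0 to 1.0 per the spec
            if skill and scaled is not None:
                by_skill[skill].append(scaled)

        growth = {}
        for skill, scores in by_skill.items():
            if len(scores) >= 2:
                half = len(scores) // 2
                growth[skill] = mean(scores[half:]) - mean(scores[:half])
        return growth

Real inference across the three maps would of course need something richer than averages (think psychometric models keyed to the evidence rules of ECD), but even this toy version shows why consistent verb and extension vocabularies matter.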

How should we build this assessment system to best automate as many assessment decisions as possible, thus positioning ourselves to scale beyond human-driven bottlenecks?

Scaling the Experience: What Would the Generation App Look Like?

As we continue to look toward scale across our five Generation countries, I keep thinking about our goal of creating one methodology to enable others to expand the program’s impact to millions more people around the world (after we’ve reached our goal of 1 million by 2020).  I’ve begun several conversations with others on our global and country teams about how to conceptualize this methodological handoff as pragmatically as possible, and the idea of “Generation in a Box” keeps rising to the surface.

If we took Generation and put it in a cardboard box to package for shipment anywhere in the world, what would go in the box?  How big would the box be?  Could we make 10,000 boxes to ship to 10,000 organizations in 10,000 cities?

I’ve adopted a similar metaphor for conceptualizing how we can simply and beautifully articulate and solve the problem of scaling the Generation experience with technology.  If we had to make one app for all of Generation, what would it look like?  Can we make one app that scales the entire experience?  Or does it need to be a suite of apps?

[Sketch: The Generation App and the stakeholder relationships it should support]

This sketch represents the mythical Generation App, and all the stakeholder relationships it should support.  People, information, technology, and space.  Content, community, learning, and assessment.

For which of these relationships is xAPI a useful framework?  How many of these interactions can be played out within the purview of four processes and four spaces?  Which parts of these relationships can be mediated with virtual machinery?  Which relationships are wholly between a person and a machine?

How do we keep people engaged with each other through this system for fifteen years or more?

Conclusion

Where are we today in our progress toward scaling the Generation experience with technology?

  • We have built a prototype student data tracking system with the intent of developing a full-blown analytics platform for a variety of stakeholders.
  • We are poised to launch a pilot implementation of an application/submission management system to begin scaling our student intake process.
  • We are moving toward a mobile-app-based game/simulation assessment delivery framework, starting with the development of an experiential simulation targeting several critical skills in one of our sectors.
  • We are poised to develop and launch a pilot implementation of an in-house mobile app for supporting our trainers.

I’ve intentionally avoided going into too much depth in this article for two reasons: 1) simple brevity, and 2) I don’t want to inject too much bias into the ensuing conversation.  We are, without a doubt, still in the early design phase.

As such, here is our call to action: I want this to be the beginning of an ongoing community conversation.  Please comment here with your ideas, perspectives, and constructive criticism about how we are thinking of scaling the Generation experience through technology across five countries and one million users, and beyond.  A big challenge deserves an active, participatory audience.

You may also contact us via info (at) mckinseysocialinitiative.org


BEN ERLANDSON (CTO, McKinsey Social Initiative)

With over ten years of experience designing and researching digital learning environments, Dr. Erlandson is passionate about building efficient and effective technological systems for learning and authentic performance assessment.  As CTO of McKinsey Social Initiative, his primary responsibilities are to oversee the augmentation of MSI programs with technological infrastructure, especially from the perspective of enhanced learning and cost savings; to establish and maintain the UX and technological visions for MSI global and each of our programs; and to engage and oversee vendors and tech partners at global and local levels.
