Learning analytics in the world of informal learning
The term "knowledge economy" has been repeated ad nauseam in various corporate contexts. With the advent of technology, we are seeing a drastic move away from group learning toward individual learning. Peeling the onion further, we see that the "knowledge worker" is an autonomous, informal learner who will no longer be satisfied with a "one size fits all" approach to learning.
As this paradigm shift plays out in the organizational learning landscape, traditional approaches to evaluating training impact and assessing ROI may have to be re-examined. The key question, therefore, is how learning and development professionals can measure the effectiveness, efficiency and impact of learning.
The need for a fresh approach to learning analytics
Traditional models of training impact evaluation rest on the overarching assumption of externally defined objectives, and externally driven "completion" and evaluation of the program. The most widely used model, the Kirkpatrick model, developed as early as 1959, assumes that the bulk of learning occurs through a "point in time" learning event, an assumption inconsistent with current organizational reality.
However, this approach is philosophically and practically inconsistent with the basic premise of informal learning, where learners are in some sense “prosumers” (producers and consumers) of learning content.
In the new-age learning domain, some levers that can be used for impact assessment include self-assessments, coaching discussions, process portfolios, skills assessments and the like.
Since informal learning has no official beginning or end, and no well-defined business or learning objectives, it is challenging, if not impossible, to assess return on investment (ROI). However, for certain informal learning experiences such as coaching and mentoring, there are mechanisms to assess ROI based on the terms of the learning engagement. Certain organizations also use self-reported assessments of ROI, where employees are asked to estimate the resulting business impact (such as an increase in revenue or a reduction in cost) of informal learning experiences. These self-reported ROI measures often lack credibility and may be inflated.
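Where an engagement does have an agreed cost and an estimated benefit, ROI is conventionally computed as net benefit divided by cost. The sketch below illustrates the arithmetic; the figures are hypothetical, and in the self-reported case the "benefit" input is exactly the estimate that may be inflated.

```python
def simple_roi(benefit: float, cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (benefit - cost) / cost

# Hypothetical coaching engagement: an employee estimates 150,000 in
# added revenue against a 100,000 program cost.
roi = simple_roi(benefit=150_000, cost=100_000)
print(f"ROI: {roi:.0%}")  # prints "ROI: 50%"
```

Note that the output is only as trustworthy as the benefit estimate fed in, which is why the article treats self-reported ROI with caution.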
Integrating qualitative and quantitative measures
Given these challenges, it may be sensible to look at the larger context of informal learning by integrating quantitative and qualitative measures: for instance, tracking usage and quantitative ratings / feedback from employees, along with descriptive data on the impact of various informal learning experiences on individual employees, on a monthly, quarterly or annual basis. The purpose of this data is not to demonstrate ROI on training spend (as is the case with formal learning) but to refine and optimize the various informal learning channels available to employees and ensure they have access to the right resources at the right time.
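One way to picture this integration is a per-channel summary that rolls up usage and ratings while keeping the qualitative comments attached for human review. The sketch below is a minimal illustration; the channel names and record fields are hypothetical, not a standard schema.

```python
from statistics import mean

# Hypothetical records of informal-learning activity, one per touch-point.
records = [
    {"channel": "mentoring", "minutes": 60, "rating": 5, "comment": "Helped me close a deal"},
    {"channel": "mentoring", "minutes": 45, "rating": 4, "comment": "Useful framing questions"},
    {"channel": "portal",    "minutes": 20, "rating": 3, "comment": "Hard to find content"},
    {"channel": "portal",    "minutes": 15, "rating": 4, "comment": "Good short videos"},
]

def summarize(records):
    """Aggregate quantitative usage and ratings per channel, keeping the
    qualitative comments alongside them rather than discarding them."""
    buckets = {}
    for r in records:
        b = buckets.setdefault(r["channel"], {"minutes": 0, "ratings": [], "comments": []})
        b["minutes"] += r["minutes"]
        b["ratings"].append(r["rating"])
        b["comments"].append(r["comment"])
    return {
        channel: {
            "total_minutes": b["minutes"],
            "avg_rating": mean(b["ratings"]),
            "comments": b["comments"],
        }
        for channel, b in buckets.items()
    }

report = summarize(records)
```

The point of keeping comments in the output, rather than reducing everything to a score, is that the qualitative data is what tells you *why* a channel is or is not working.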
Best Practices
For example, the Learning University at the Piramal Group uses a mix of quantitative and qualitative feedback at the end of every learning touch-point within a learning journey. Additionally, every Learning University program has a Business Sponsor who works closely with the design team to craft the learning experience, identify success measures and assess impact. These learning journeys are often interspersed with well-orchestrated "action learning" projects which help in linking learning with business impact. An impact assessment is done through a questionnaire supplemented with conversations with participants, people managers, and business leaders.
A similar mix of qualitative and quantitative measurement is used at Beroe Inc. This blended measurement is more conducive to the organizational context, where a large percentage of learning happens at the individual and informal level. While post-training assessments are used for cognitive skills, such as knowledge of a particular financial model, more qualitative feedback is sought for behavioral skills from managers, peers, subordinates, vendors, and clients. Productivity per employee is also monitored over a one-year period and correlated with the training interventions that the person has been through. Almost all learning events are followed by a period of hand-holding by the trainer, which again becomes a touchpoint for gathering feedback.
Approaches such as those above are often more conducive to integrated learning journeys. They use a mix of quantitative and qualitative measures to assess various dimensions of informal learning: qualitative open feedback at various informal learning touch points; metrics on time spent on various portals; the type and frequency of employee participation; ratings, rankings and other "social" metrics of the impact of social learning; and performance metrics.
While the end goal of learning analytics is to ensure that the mode, content, method and environment of learning are all optimal, it is important to re-examine the approach to analytics in the new learning paradigm. One must therefore rely on multiple data points, both qualitative and quantitative, gathered across the learning journey.