
Why training programs fail

Training programs are an enigma. Despite the myriad problems they face, they continue to attract millions of dollars in investment. No one denies the need for training, and yet no one can find a tangible way to measure its effectiveness. In many ways, training has come to represent a resource sink - an area where we continue to pump time, money and effort expecting results. It is not unlike flogging a dead horse: repeating the same strategy time and again and expecting different outcomes.

I too believed in training programs for the longest time, right until I was asked to talk about their effectiveness. I realized that in my many years of working a corporate job, I had not found a single training program worth my time. The more I talked to my colleagues about it, the more I realized that there probably exists no such program outside of the ones that focus on technical competencies. Why, then, do we continue to invest in training on leadership, management skills and, more recently, unconscious bias without any knowledge of the return on investment?

Since there is some evidence that technical skills-based training works, I will focus this piece on management training. Let us start by taking a moment to reflect on the vicious cycle that engulfs these programs. Most, if not all, struggle with high dropout rates and low utilization. Training organizers are infamous for chasing businesses to send people to classroom sessions. Yet, despite best efforts, these sessions are canceled for low participation more often than we would like. One would assume that the low interest in attending would be read as an indicator of ineffectiveness. Instead, we choose to mitigate the problem by anticipating 50% dropouts and opening extra seats (guilty), sending out testimonials from those who attended (likely coerced), threatening to exclude businesses entirely from future programs, and the like.

When we ask businesses to support us in improving turnout for classroom sessions or utilization of online classes, they begin by questioning the effectiveness of the sessions. At this point, we either reference Kirkpatrick's level-one evaluation and boast of a 4+ feedback score or mumble an incomprehensible explanation. The truth is we have no idea how to correlate a training program with actual behavior change. When the business realizes this, it asks a series of follow-up questions to determine whether it should really send its people to these programs. The discussion that follows rarely leads to an increase in utilization. More often than not, those who do turn up attend either because they have been asked to or because they are curious. Rarely do attendees expect career-changing insights. And there is a near-100% chance that three months later, most will have forgotten what they learnt.

To increase the effectiveness of the program, the learning and development team expands the design to cover the entirety of the 70:20:10 model. There exists an extremely strong belief among L&D professionals that if they supplement classroom sessions with a mentor/coaching circle and an on-the-job project, the training program will begin to demonstrate measurable returns. The challenge now lies in 'how to design an on-the-job intervention that leverages what the management classroom session has just taught'.

Let us take an easy example: difficult conversations. How do you ensure that over the next x months, a manager faces five difficult conversations and handles them exactly as you taught in the classroom session? The very idea is laughable. In addition, your mentorship program isn't working that well either. Even if it is, we do not really know how to measure its effectiveness, do we? All one can really do is tweak a number of variables, without ever having scientific data on the impact of those changes beyond anecdotal feedback and survey scores. Over time, the training programs fail to earn the trust of the target population, which in turn worsens dropout rates. And thus we come full circle.

At this point, you are likely a little angry and upset. You say – ‘Ankita, you spent the last 700 words talking ill of training programs. Why not tell us how to fix it?’

Unfortunately, I am going to have to let you down. There is abundant advice on knowing your audience, customizing training to fit that audience, encouraging daily practice and focusing on results. However, we have been trying to do all of this forever and getting nowhere. We tend to look at technical skills-based training and replicate its design for everything else. But the definition of practice differs between the two. While technical skills are easy to practice, management skills are immensely difficult to simulate in real-world scenarios. People rarely behave as they would in reality, and practice feels artificial and forced.

Maybe it is time to acknowledge that we are done flogging the dead horse. Maybe we need to take a call and eliminate management-training programs in the organization, go back to the drawing board and see how we can create something that truly adds value. We need to rethink everything we know about training programs and find a better way to utilize all that investment. If you do find a solution, come back and tell me about it. When you do, I promise to pull this piece down from here and write about what you discovered instead.

