Training evaluation to measure learning success and knowledge transfer

Transfer of learning is the extent to which learners can apply their newly learnt skills, tools or techniques in everyday professional life. But how do you measure learning success and its impact? Organisations struggle to evaluate these effects because, unlike a weight or a temperature, they show up as changes in behaviour that cannot be measured directly.

Learning success and knowledge transfer: How to measure?

  • “How was it?”
  • “What was good, what less so?”
  • “How was the trainer on a scale of 1 to 10?”
  • and so on.

Evaluative questions like these often arrive quickly after a course. Such output measurements rest on the assumption that if a course is well received, it will have the desired effect in practice.

The fallacy of participant feedback and learning outcome

Most of the time, post-learning questionnaires don’t capture the quality of the learning. Sadly, research (1) shows that there is practically no correlation between the immediate assessment of a course and its long-term impact, i.e. the transfer of what has been learned into practice. And if you examine your own experience, you will realise: what you don’t apply, you forget again, so you could have spared yourself the course!

Because in practice it’s the lasting effect that counts: in my other column, “Learning from your peers”, I tried to show that learners can considerably strengthen this effect. By teaming up as a peer-learning pair, two graduates support each other and thus increase the transfer into practice.

Kirkpatrick’s method of evaluating training programmes

A few words about the theory first: Donald Kirkpatrick pioneered the research on the effects of in-company training. In 1959 he published the eponymous model for measuring the effects of training and courses. It has four levels, visualised, for example, in this graphic:

Kirkpatrick Levels of Training Evaluation

  • At Level 1, participants provide feedback directly after the learning: the immediate “reaction”.
  • Level 2 is a knowledge check after the course, e.g., a test or quiz on the “learning content”.
  • Level 3 involves the transfer of that learning into practice: Are there changes in “behaviour”?
  • Finally, Level 4 is about the long-term “results”, i.e. effects in the outside world, e.g. more turnover for a company or the social impact of an NGO.

Unclear responsibility at Kirkpatrick Level 3 is part of the learning success problem

According to Kirkpatrick’s teaching, organisations should measure the impact of training at every level. Unfortunately, in practice this usually happens only at Levels 1 and 2, i.e. immediately at the end of the course. That is where measurement is easiest, but it just doesn’t reveal much!

Due to the higher effort involved and the question of responsibility, there is often little measurement of learning success at the very relevant Level 3, where knowledge transfers from the training into the company. Yet it is often unclear here who is responsible for noticing these changes in behaviour. Is it:

  • the learner,
  • or their line manager,
  • Learning and Development,
  • or maybe People and Culture?

Yet it is precisely this transfer of learning into practice that ultimately determines the effectiveness of investments in training. It is a prerequisite for what is actually desired at Level 4: an improvement in the results of a company, a project or a campaign. The “return on learning” (RoL), so to speak.

The improved New World Kirkpatrick Model

Kirkpatrick’s original model is linear: it suggests that if Levels 1 and 2 produce good results, the same will follow at the higher levels. But as noted, this is not automatically the case (1). Nevertheless, the model is useful for becoming aware of the whole learning process. In addition, his son James and daughter-in-law Wendy Kirkpatrick have improved it through increased process orientation. It is now called the New World Kirkpatrick Model, with, among other things, the following added values (see graphic below and 2):

The New World Kirkpatrick Model for Learning Success

Will a learner apply their new knowledge?

At Level 2, assess not only the knowledge gained, but also whether learners are confident that they can apply what they have learned. That way you can estimate the probability of learning success: “the higher the confidence and commitment, the better the transfer”.

The New World model takes Level 4 as the starting point for determining training measures. So ask yourself: what capacities, skills, team spirit or know-how do learners need in order to achieve the desired operational results? Or, in the case of NGOs: what social change do they seek, and what are the internal preconditions for it? And consequently: how can campaigners, project leaders or teams acquire the necessary competences and apply them in practice?

Monitor and reward learning success

These learning and transfer processes are accompanied and reviewed, not as a form of control, but together with the participants. That way they are more likely to achieve what they themselves want.

Encourage and reinforce the learning

Support such as encouragement (e.g. time to try things out), reward (e.g. recognition in the team) and reinforcement (e.g. peer support or mentoring) considerably strengthens the transfer of learning. And if training measures are agreed together, such transfer opportunities can be built into the process from the beginning (see “Toolkit for Learning Transfer”).

How to measure learning success, behaviour change and improved results?

With a questionnaire

Learning transfer is normally measured with a self-assessment questionnaire completed after 6, 9 and/or 12 months. This aims to determine how learners judge they have implemented various aspects of their learning, and where they have failed. It can be very time-consuming (see 3).
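
As a minimal sketch of how such follow-up questionnaires might be aggregated, assuming a hypothetical 1–5 self-rating scale and made-up skill names (none of these details come from the article):

```python
# Hypothetical sketch: aggregating Level 3 self-assessment questionnaire
# responses collected at 6-, 9- or 12-month follow-ups.
# The field names and the 1-5 rating scale are assumptions for illustration.

from statistics import mean

# Each response: the follow-up month plus self-rated application per skill,
# from 1 (never applied) to 5 (applied daily).
responses = [
    {"month": 6,  "scores": {"feedback": 4, "planning": 2}},
    {"month": 6,  "scores": {"feedback": 3, "planning": 3}},
    {"month": 12, "scores": {"feedback": 5, "planning": 2}},
]

def transfer_by_skill(responses, month):
    """Average self-assessed application per skill at one follow-up point."""
    per_skill = {}
    for r in responses:
        if r["month"] != month:
            continue
        for skill, score in r["scores"].items():
            per_skill.setdefault(skill, []).append(score)
    return {skill: mean(scores) for skill, scores in per_skill.items()}

print(transfer_by_skill(responses, 6))  # → {'feedback': 3.5, 'planning': 2.5}
```

Comparing the 6- and 12-month snapshots would show which skills stick and which fade, which is exactly the kind of signal Level 1 feedback cannot provide.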

With a focus group or interviews

Less time-consuming are focus groups or interviews, i.e. the collection of qualitative data. These do not measure, but instead give a good picture of what works and what does not. In general, questionnaires and ratings tend to be too onerous.

Measuring learning success with a Learning Management System

In the course of digitalisation, in-company training is increasingly taking place online, using so-called Learning Management Systems (LMS). This poses two dangers:

  • The temptation to click through many, many pieces of learning content in the belief that this is efficient: quantity is given priority over quality. But there is no time-saving shortcut to learning; it degenerates into surfing, and only what you use sticks.
  • At Level 3, the problem of lacking transfer increases online, especially for interpersonal learning such as team building, giving feedback, project management or leadership. Such learning content needs hybrid offerings, both online and offline: learners cannot acquire know-how and competence purely theoretically.

And there are opportunities too: LMSs have unbeatable advantages, such as the individualisation of content and learning pace, video-based tutorials, and the simple recording of learning success at Kirkpatrick Levels 1 and 2. By conceiving of learning as a journey, with practical transfer threaded into the LMS, organisations can embrace the trend towards self-organised learning.
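
One simple use of such LMS data, sketched here with invented field names and thresholds, is to surface the very gap discussed earlier: learners whose Level 1 reaction is glowing but whose Level 2 knowledge check is weak.

```python
# Hypothetical sketch of analysing an LMS export that holds Level 1
# (reaction rating) and Level 2 (quiz score) data per learner.
# Field names and cut-off values are assumptions, not a real LMS API.

records = [
    {"learner": "A", "reaction": 9,  "quiz_pct": 85},
    {"learner": "B", "reaction": 10, "quiz_pct": 40},
    {"learner": "C", "reaction": 6,  "quiz_pct": 90},
]

def flag_reaction_gap(records, min_reaction=8, min_quiz=60):
    """Learners who rated the course highly but scored poorly on the quiz,
    i.e. cases where a good Level 1 reaction does not predict Level 2 learning."""
    return [r["learner"] for r in records
            if r["reaction"] >= min_reaction and r["quiz_pct"] < min_quiz]

print(flag_reaction_gap(records))  # → ['B']
```

Flagged learners could then be offered the Level 3 supports described above, such as peer pairing or mentoring, rather than being left to forget what they never absorbed.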

Driving real-world results of learning

Developing an awareness of learning methods is highly relevant for campaign work (or any type of project work), especially since campaigns are social learning interventions: every (desired) change is subject to a learning process. However, even in campaigns, organisations and NGOs often measure their impact only up to Level 2, for example by evaluating the media response or collecting feedback immediately after an activity or publication. This is good, but not sufficient.

Organisations often do not allocate resources for the actual transfer and impact measurement. Just consider what a difference it could make to follow through on all four Kirkpatrick levels when measuring the success of learning, whether as affirmation that the correct decisions have been made, or as an opportunity to reflect, learn and have another go! For organisations to truly work with a growth mindset, a full measurement needs to take place.

To talk through any of these ideas, do get in contact, or leave a comment below.

References

  1. “Der Mythos Wirkungskette in der Weiterbildung – empirische Prüfung der Wirkungsannahmen im Four Levels Evaluation Model von Donald Kirkpatrick”, Zeitschrift für Berufs- und Wirtschaftspädagogik (2011)
  2. Kirkpatrick Model / The New World Kirkpatrick Model https://elearningdesigners.org/articles/kirkpatrick-model-the-new-world-kirkpatrick-model/
  3. “Wie Weiterbildung messbar wird”, Gesellschaft für angewandte Berufsbildungsforschung (2018)


About Kuno Roth

Now retired, Kuno was leader of the global mentoring and coaching programme at Greenpeace International. Before that, he was head of education at Greenpeace Switzerland for 25 years. Kuno continues to support Greenpeace, serves as Co-President of the Swiss NGO Solafrica and as a mentor in the Women's Solar Project in Nicaragua. He holds a PhD in chemistry and works as a human ecologist, learning expert and writer.
