
Going for Gold

LINE Consultant Gareth Jones reflects on the lessons the Olympics can teach us when it comes to measuring the impact of training. This post is the latest in an ongoing series about implementing learning architectures.


Watching the television over the last couple of weeks has persuaded me that the approach we take to measuring performance has become massively over-complicated.

Surely evaluating success ought to be simple. After all, everything we do is either to solve a problem or to make a change. So in order to evaluate, all we need – presumably – is a reliable way of finding out whether and to what extent the problem has been solved, the change made, the target achieved.

Easier said than done perhaps, but I would contend that our current process models for evaluation, principally those created by Donald Kirkpatrick and elaborated over the years by Jack Phillips and their heirs (both biological and intellectual), can end up creating a lot of needless complication and confusion.

Not because they are inherently complex or difficult to understand; they aren’t. The four Kirkpatrick levels, for instance, are eminently simple to grasp; they follow common sense. And yet how often do trainers disappear down rabbit holes attempting to apply them!

This thought has been thrown into sharp relief by the Olympics. What has made the sport so compelling is that the objective is so wonderfully simple – the target is a single number measuring height, weight, distance or speed.

The goal is utterly self-evident, and partly because of that the individual athlete goes to extraordinary lengths to achieve it. Almost every champion has a personal story of the bone-crunching journey taken to get to the top spot on the winner’s podium.

Our world is so different. A training dashboard has multiple numbers and mostly measures activity: numbers of people, numbers of events, percentage utilisation and so on. Translate that to the Olympics and the gold medal would be hung around the neck of the individual who attended the most training sessions!

The goal the training is intended to achieve is often unclear and, even if it has been articulated, it is a penny to a pound that there will be no measures around the performance target that tell the individual where the finishing line is. How often do you read learning objectives that are truly measurable?

Imagine telling Usain Bolt and his fellow sprinters that the distance is roughly 200 metres and that no, we can’t tell you exactly how fast you should run it! In any event, don’t worry too much, because the certificate we are going to give you simply says that you have taken part. (Is that what Pierre de Coubertin, the founding father of the modern Olympics, truly meant?)

Of course, if we were applying our five-level evaluation model to Bolt’s performance, before announcing him champion we would have to measure how much he enjoyed the race, what he had learnt during his training sessions and which elements of the training he had been able to apply. And finally we would have to say: ‘My apologies, you are obviously pretty exhausted, but please estimate the amount of revenue you will now generate as a consequence of beating the world record!’

This does not make for a television spectacle…

There is more life in this metaphor too. One of the reasons why world-class athletes are highly motivated at the Olympics is that there is a worldwide audience watching them. You cannot fail to be amazed by the 17- and 18-year-olds who seem completely unfazed by an 80,000-strong crowd screaming at the top of their voices. In fact they relish it.

In the adult world of company-based training, often the one person who should be showing an interest, the line manager, doesn’t even realise that the individual is absent because they are on a training course. Studies have repeatedly shown that interest from managers has a material effect on the impact of training.

The return on investment of fixing that would be interesting to measure, but if companies want to improve the value they get from their investment, simply agreeing with the individual, before they go on the course, what improvement the training will make is a great place to start.

One of the problems with the way the Kirkpatrick/Phillips model is interpreted is that it is frequently seen as hierarchical in nature. There is an in-built assumption that level four/five (business impact/ROI) is the best. Any level below that therefore seems less good, and even slightly suspect.

I would contend that each level is as important as the others. Level one feedback, often dismissed as ‘happy sheets’, can produce some hugely valuable data if the questions are constructed carefully.

Let’s get back to the Olympics. David Brailsford, the performance director of British Cycling, is frequently asked to explain the extraordinary success of his team. Yes, the individuals are extraordinary: they work incredibly hard, they are dedicated, and they strive tirelessly to improve their performance.

But the success is based on two important factors: selecting the right competitors in the first place, and then making many small, incremental improvements to every factor that supports improved performance.

How does this read across to the training world? If we look at selection, or in our language recruitment, there is often a disconnect between recruitment and training. Even when they live in the same organisational silo (HR), they are normally two quite separate teams. How many organisations have a formal process whereby the training team advises HR on the qualities they should look for in new recruits? How often is the training team asked to feed back insights into an individual’s capability during that crucial probationary period?

More importantly, which organisations do we know that take a holistic approach to improving every aspect of the work environment that enables individuals to perform at their highest level?

Does the individual have the best tools to do the job; has adequate time been set aside for training; does the line manager have effective coaching skills; are the delegates being trained on precisely the same system they will eventually use; is the training going to be applied immediately; are there appropriate awards for improving performance? Equally important, is the individual being taught how to make best use of the training that is being made available to them?

What Brailsford would most likely hear, if our world were applied to his, would be: ‘No, I don’t know why I’m doing this training; my coach sent me. Use these bikes; they are not the same as the ones you will actually race on, but we’ll worry about that later. You have got a coach, but you’ll find most of his or her time is spent in meetings with other senior coaches. Your training is scheduled, but it’s a month before you have to race, so make plenty of notes. No, we don’t know what time you have to complete your race in; just do your best.’

And of course: ‘Your training has been cancelled today because we are going through a reorganisation. It is likely that tomorrow, instead of cycling, we would like you to throw the discus.’

When you look at the various models that explain how best to accelerate the pace at which people learn, great emphasis is placed on developing the ability to learn how to learn. How often is that an element of a company’s training portfolio?

As we move closer to the closing ceremony, there is no doubt that there are some great lessons we can learn when it comes to evaluation. For me, they are these:

  • Make it clear to the learner what is expected of them and how success will be measured. There is a huge amount we can learn from the education sector on how to assess core competence. There is no disputing the value of a qualification from a decent college or university. Make sure that internal continuous professional development programmes have equally robust assessment strategies.
  • There is no harm in stopping at Level 2. That is as far as a university education goes and we don’t question the value of that. If it is a leadership programme you are running, link it to national standards and this could give the individual a qualification that they would be proud of. The Chartered Management Institute offers a scheme of this nature.
  • Do spend time measuring how long it takes to take an individual from novice to master. (This is particularly critical in fast-moving emerging markets.)
  • Don’t waste time and money attempting to work out the return on the investment in developing strategic capability i.e. CPD. Just make sure the curriculum is aligned to business goals.
  • Do spend a great deal of time on measuring the ROI of programmes that are specifically intended to address short term operational challenges e.g. driving improvements in revenue or forcing down costs.
  • For those who have been following our series on ‘Learning Architectures’: make sure the above is baked into the design of your preferred learning architecture from the outset.
  • Do spend time and money on removing the barriers to performance.

For me the Olympics, and the brilliant David Brailsford, are a timely reminder that it is the last one in the above list that makes the difference between silver and gold.

This post first appeared on the LINE blog on 10th August 2012.