Posted on 20th February, 2019 by Joanna Kori
Joanna Kori, Global Learning and Development Manager for LEO Learning’s parent company, LTG plc, reflects on the key discussion points from this year’s Learning Technologies conference.
Learning analytics: A key talking point this year
Learning analytics led the way for most L&D professionals at Learning Technologies 2019, mentioned from the outset in the very first keynote speech.
This is a topic very close to our hearts: we have been raising awareness of using learning analytics to measure the business impact of learning for over three years. LEO Learning has also just launched a new consulting service, with accompanying workshops, that aims to bring organizations at all levels of maturity up to speed with a learning measurement strategy.
Design Your Learning for Learning Analytics
As Andy Lancaster from the CIPD highlighted in his seminar session, ‘The future L&D team: Essential skills and emerging roles’, the present-day L&D team no longer sits “in the depths of HR” but is instead “being pulled out into the business”. Data analysis and performance consultancy skills were identified as essential requirements for L&D to effectively relate their various learning initiatives back to the goals of the wider business.
This trend again ties into LEO Learning’s core consulting services. We provide the type of deep-dive consultancy that helps L&D teams establish evaluation methods for their organizations’ learning directly from their business goals.
We have been further bolstered in this area by the recent full integration within LTG of our sister company, Watershed.
Watershed’s learning analytics platform pulls in the data L&D teams need from the full range of sources that make up their organization’s learning ecosystem. Using clear dashboards, this data is then presented in formats that help organizations understand the impact and performance of their learning.
LEO Learning and Watershed can now work even more closely together to help organizations design their learning for learning analytics.
Metrics Should Complement Personal Experience
The value of Watershed’s platform is that it enables teams to view and present the data they’ve collected in a clear, easy-to-understand way and to contextualize their findings, so senior stakeholders can quickly grasp the key insights.
This is essential to today’s L&D teams, because the data cannot be left to ‘speak for itself’ (as might happen if it were simply left in its raw state). Data needs context and personal insight to make sure findings are fully understood. Many people from my generation remember the impact of information designer Edward Tufte’s famous critique of a NASA engineering PowerPoint whose bullet-point format obscured a critical problem before the tragic loss of the Space Shuttle Columbia in 2003.
L&D does not, of course, stand for ‘life and death’, but none of us wants to undo the enormous amount of effort, care and consideration we put into developing a year’s worth of our employees’ learning with a single botched presentation to the CEO.
Jerry Z. Muller describes metrics as ‘tyrannical’: “today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers…”
However: “…when used as a complement to judgment based on personal experience, metrics can be beneficial.”1
Face the Fear: Automation and Analytics
A key theme at this year’s conference was a practical look at how we will adapt to cope with the rise of automation in the world of work.
There has been a lot of fear over the last year around the automation of traditional work roles (such as management) and metrics-led learning within L&D. But instead of burying our heads in the sand and hoping for safe passage, L&D professionals should embrace the areas where human experience remains essential.
LTG’s own internal L&D measurement always combines qualitative and quantitative data, and not every audience for that evaluation narrative will be the same. Data is global, and it has to be interpreted across a range of regions with differing cultural contexts.
From Ugly Stepchild to Mary Poppins
As Will Thalheimer put it in his excellent day-one session, ‘Transforming Evaluation’, learning evaluation has long been the “ugly stepchild – frustrating, complex and we all wish we could do it a little better”. Two key slides (below) from his presentation, one from Donald Kirkpatrick in 1960, the other from Towards Maturity in 2017, illustrate just how long we have been wishing for this.
Managers of L&D know that it’s not possible for our measures to be ‘Mary Poppins perfect’—in the words of Deborah Bandalos, “they should be thought of instead as approximations”2. But measurement can aim to be ‘practically perfect in every way’, especially if it is contextualized and narrated by those of us who are used to reading and leading our organizations’ L&D landscapes.
Learning Measurement: Start With the End in Mind
Thalheimer’s Learning-Transfer Evaluation Model (LTEM)—introduced to us all at last year’s LT and developed for today’s workplace from the 60-year-old Kirkpatrick-Katzell model we all love to hate—is reflected in LEO Learning’s three steps for effective, practical measurement.
It’s essential to approach measurement ‘with the end in mind’—as our very own Rose Benedicks and Andrew Joly highlighted in their packed-out sessions on both days of the conference. Working backwards from your goals is also in the DNA of Watershed’s approach to evaluation data.
This year’s conference demonstrated that the need to implement learning analytics and start measuring the impact of learning remains imperative for L&D. This is not just a startup process; it’s a mature journey that requires today’s L&D professionals to keep their eyes firmly on business goals.
We’ve just launched our three-step program for practical, effective learning measurement. Find out more here.