Measuring learning impact – Mike Rustici answers your questions
Posted on 25th April, 2017 by Mike Rustici
I recently presented a segment on big data as part of LEO’s Measuring the Business Impact of Learning event in April. While the event took place in London, thanks to technology, I could ‘beam in’ from the Watershed offices in the US to speak to a room full of L&D professionals and elearning experts, and talk about how to get started with measuring learning impact.
As learning data analysts, we at Watershed work with a variety of clients to help them get the most from their learning data. During the event, attendees asked a number of very interesting questions that we unfortunately couldn't get to on the day. Here are the ones people seemed keenest to find out more about, along with my answers of course.
Getting started with learning analytics
1) How do you get started with measurement, where do you find data and how do you gather it if you don’t have any?
This is a hard question since so much of the answer depends on your context.
We like to help clients start by considering any questions they may have. What is relevant and timely to your organization right now? Are there new learning tools or programs being released that you can incorporate analytics into? Are there larger organizational goals or initiatives impacted by learning that you would like to measure? Once we have a question, we can focus on the possible answers, then see what data we can access to support our suggestions.
Others take a bottom-up approach and just look for whatever data is easiest to access. These organizations often start with a lot of learning experience analytics to see how a particular tool is being used and then build up from there.
Forming a plan to get started can be the hardest part. I’d be happy to spend a few minutes brainstorming with you if it would help.
Are you happy with happy sheets?
2) Could you start with happy sheets? And if trainers put info in, can you trust it?
Starting somewhere is better than not starting at all! If you're not already using Kirkpatrick Level 1 happy sheets, they certainly give you more information than you had before. I'd suggest being strategic about how you use them to ensure that the learning is applicable as well as enjoyable, and then following up with further surveys after the event to assess whether the learning was actually retained and applied on the job.
Trust in data is a complicated topic and extends well beyond just trainers self-entering figures. For me, the key is to match the level of trust with the size of the stakes. I wouldn’t trust self-reported data for ensuring that a brain surgeon is qualified to operate, but I might trust self-reported data about the general feel and attitudes in a classroom (unless the reporter’s promotion depends on it).
I also believe that you can always learn something from data. For example, looking at what a person chooses to report (and what they choose not to report!) and how that compares to what others report can at times be quite revealing.
Exporting learning data
3) How do you draw together separate non-standard non-learning data into analysis tools?
That depends a lot on the analysis tool. At Watershed, we are big believers in xAPI. This data format is flexible enough to represent many different kinds of data, whether they relate to learning or not. Once all of your data is in a common format, it becomes much easier to query it for information.
That raises the question of how you get the data into xAPI format. Many tools will publish xAPI natively, but for those that don't, we use a CSV export or another API to extract the data and normalize it into xAPI statements. We do this integration work once, on the front end, to allow for easier and more frequent querying later on.
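To make the normalization step concrete, here is a minimal sketch of turning an LMS-style CSV export into xAPI statements. The CSV column names, the example email addresses, and the activity base URL are all illustrative assumptions; only the statement shape (actor, verb, object) and the ADL "completed" verb IRI come from the xAPI specification.

```python
import csv
import io
import json

# Hypothetical CSV export from an LMS; real column names vary by tool.
SAMPLE_CSV = """email,name,course_id,course_title,completed_at
ada@example.com,Ada Lovelace,course-101,Intro to Analytics,2017-04-25T10:00:00Z
"""

# A standard ADL verb; swap in whichever verb matches the activity.
VERB_COMPLETED = {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {"en-US": "completed"},
}

def row_to_statement(row):
    """Normalize one CSV row into a minimal xAPI statement."""
    return {
        "actor": {
            "objectType": "Agent",
            "name": row["name"],
            "mbox": "mailto:" + row["email"],
        },
        "verb": VERB_COMPLETED,
        "object": {
            "objectType": "Activity",
            # Activity IDs must be IRIs; this base URL is illustrative only.
            "id": "http://example.com/activities/" + row["course_id"],
            "definition": {"name": {"en-US": row["course_title"]}},
        },
        "timestamp": row["completed_at"],
    }

statements = [row_to_statement(r) for r in csv.DictReader(io.StringIO(SAMPLE_CSV))]
print(json.dumps(statements[0], indent=2))
```

Once every source system is mapped this way, the resulting statements can all be sent to a single Learning Record Store and queried together, which is what makes the one-time integration effort pay off.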
Click here to download LEO’s free insight on the 10 steps for measuring the business impact of learning. Or, to speak to a LEO expert about how we can help with your training needs, contact us right here.