First up, I’d like to thank Simon Buckingham-Shum for his recent post on learning analytics. Rarely do I read a blog post which not only considers an ‘entry level’ question, but also then follows through multiple steps in the analysis of the topic in the way this post did. When I read the post I had the unnerving sense that the author had already thought through every question I might have considered, and answered it before I could pause to take a breath and even form the question properly in my head. I’ve been mulling the post over for the last week, and I decided that I really needed to put his post in context alongside some of my own thoughts (even if this is, in academic terms, like parking a Bugatti Veyron ‘in context alongside’ a Toyota Corolla, but never mind).
In my recent post-ascilite blog post, I hypothesised that one of the gaps we currently have in the field of learning analytics is interpretation as a discipline (or an art form). To quote Buckingham-Shum's post:
‘who is supposed to make sense of the dazzling dashboards embedded in every e-learning product pitch?’
Regardless of how much effort has been put into constructing interpretive models of how successful students are in their learning journey, somewhere in the mix there is either a ‘black box’ attempting to draw inferences from the data, or a person (or people) serving the same purpose. This poses a risk, again from Buckingham-Shum’s post, that “… data points in a graph are tiny portholes onto a rich human world … proxy indicators that do not do justice to the complexity of real people, and the rich forms that learning take.” This concept captured me, because somewhere, some time, someone needs to draw some kind of inference from the data in order for it to be put to effective use. Who this is, and how it is done – there lies the question.
I can think back to probably my first experience with learning analytics, working for the then Senior Secondary Assessment Board of South Australia (SSABSA), as a great example of this (even though at the time, 1999, I had no idea that what we were doing had a name). As the head of the IT group and a quasi-backup-DBA, I would, without fail, get at least one phone call of the following form every year:
Parent: Hi, you’re the guy that keeps all of the Year 12 results, right?
Me: Yes, that’s me.
Parent: I’ve just moved into the area, and I want to know what the best school is, so can you tell me that please?
Me: That depends on what you mean by ‘best’.
Parent: The one which gets the best Year 12 results, obviously!
Me: Well, maybe, but there’s a lot of other factors that come into that discussion that influence your idea of ‘best’ – you can’t distil the term ‘best’ into one single number at the end of Year 12.
Parent: I don’t care, just tell me what the best school is.
It was at this point that I could mercifully refer them to the legislation that forbade me from even constructing this information, let alone sharing it, and suggest that they go and talk to the schools in their area to see which one seemed to fit their child’s needs – and to their local MP if they didn’t like the legislation. The point of all this is that it was a great example of a significant ‘interpretation gap’: what these well-meaning parents wanted to know, versus what I (as the ghost in the machine) could have told them, even if I had been legally allowed to.
When I think about the ‘ghost in the machine’, I question whether any educational organisation driven by ‘market share’ can tread the fine ethical line between doing what is right for its own financial sustainability and what is right for the student, and possibly what is right for the greater good of humanity. As Buckingham-Shum points out, “questions must also be asked about whether part of a university’s mission is to help students discern their calling, which may include discovering that they are on the wrong course.” This particularly concerns me in an increasingly deregulated education environment, where survival goes to the institutions that retain the most students, not the ones that might shake hands with a student and quite honestly tell them that they are better off taking another path in their life. This is, however, a much bigger conversation.
Going back to the SSABSA example, this was one of the reasons that the legislation at the time forbade the publishing of ‘league tables’: the risk of information consumers (in my case, parents) drawing overly significant inferences from one single indicator, rather than considering the broader, more nuanced picture, and the impact this could have had on those who were hamstrung, for a variety of reasons, in their ability to achieve good results on this one measure. I could go on here about the NAPLAN tests as well, but there are others who are already doing this far better than I can.
When I think about this ‘interpretation gap’ I also relate it back to my ten-year-old son. As I watch him learn (or, for that matter, teach), it is clear that my role in his life is rapidly becoming less about being the fount of all knowledge – his first port of call now is an iPad and Google for most questions he has. What he does not have by default is the capacity to draw sound inferences from all of the information he finds, and so my role has transitioned in a very short period of time to being far more focused on helping him develop the skills he needs to work out which information he can take as trustworthy, and which information he needs to question as potential nonsense. My gut feel tells me that the interpretation gap in learning analytics is a similar story right now – we need to work out how to teach information consumers to critically analyse and draw sensible inferences from the data they can access via learning analytics.
As I then think back to Buckingham-Shum’s post, and the final paragraph where he states a need to focus on the upper right corner of the Knowledge-Agency Window, I start to look ahead and consider what characteristics a learning analytics function would need in order to support this kind of self-directed, open-ended-enquiry learning. Three main things stand out to me as necessary (but not necessarily sufficient) characteristics of such a system:
- It must start by providing simple, easy-to-understand, and almost impossible to misinterpret measures to consumers of the data. This acknowledges that the journey to the upper right hand quadrant of the Knowledge-Agency Window starts at the beginning – in the lower left hand quadrant – but it starts there as nothing more than a ‘seed’ of curiosity for the consumer.
- It must be flexible enough to allow the consumers of the data – both students and institutions – to build on this simple start, asking follow-up questions and ‘following the white rabbit’ in terms of digging deeper into the information, but in a way that allows them to create the measures that are meaningful for their context. If I think back to the Moodle Engagement Analytics tool designed by Phillip Dawson and developed by the team at NetSpot, it did this to an extent – it gave users a starting point of a common framework of typical interaction measures on a course, but it also supported (if not encouraged) data consumers to tweak these measures in order to get a better result based on the specifics of their course (a rough sketch of this idea follows this list). Granted, this was purely driven by the course designer, but it did at least support the concept to an extent – extend it to a truly user-driven model of analytics construction and I think the end result could be immensely powerful.
- Finally, it must be designed in a way that helps the consumers of the information build their own interpretation skills as they follow their own white rabbit, whether they be a student, teacher or organisation – teaching them to question the inferences they can draw, the limitations of the data, and what else they should be considering as part of the broader picture. Like my son learning how to critically evaluate the information he finds on the internet, all consumers of learning analytics information must learn how to critically interpret their own data, and then to plan actions based on it.
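To make the ‘user-tweakable measures’ idea in the second point a little more concrete, here is a minimal sketch in Python. The indicator names, thresholds and weights are entirely hypothetical – this is not the actual Moodle Engagement Analytics plugin’s schema – but it shows how a simple, common starting framework could be re-weighted for the specifics of a particular course:

```python
from dataclasses import dataclass

# Hypothetical interaction measures for one student in one course.
# Field names are illustrative only, not any real plugin's data model.
@dataclass
class CourseActivity:
    logins_per_week: float
    forum_posts: int
    assessments_submitted: int
    assessments_due: int

# Default indicator weights: a simple, common starting framework that a
# course designer (or, ideally, the learner) can tweak for their context.
DEFAULT_WEIGHTS = {
    "login_frequency": 0.4,
    "forum_participation": 0.3,
    "assessment_progress": 0.3,
}

def engagement_score(activity: CourseActivity, weights=DEFAULT_WEIGHTS) -> float:
    """Combine normalised indicators (0..1) into a single weighted score."""
    indicators = {
        "login_frequency": min(activity.logins_per_week / 5.0, 1.0),
        "forum_participation": min(activity.forum_posts / 10.0, 1.0),
        "assessment_progress": (
            activity.assessments_submitted / activity.assessments_due
            if activity.assessments_due else 1.0
        ),
    }
    return sum(weights[name] * value for name, value in indicators.items())

# A discussion-heavy course might down-weight logins and up-weight forum posts.
discussion_heavy = {**DEFAULT_WEIGHTS,
                    "login_frequency": 0.2, "forum_participation": 0.5}
student = CourseActivity(logins_per_week=2, forum_posts=8,
                         assessments_submitted=1, assessments_due=2)
print(engagement_score(student))                    # the common default framing
print(engagement_score(student, discussion_heavy))  # the course-specific framing
```

The particular numbers matter far less than the shape of the thing: the weights are data, not code, so the people closest to the context can adjust them without anyone rebuilding the system.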
The realities of this happening are, of course, somewhat complex. As SOLAR have noted in the past, a common data language sits on the critical path before much of this conceptualisation can become reality. The need for interoperability of information across an increasingly distributed and uncontrolled learning information ecosystem makes it so. No longer can we rely on learning analytics based purely on the LMS as a valid measure of student engagement, participation and achievement – we need to assume that the data that can help draw inferences about a student’s learning journey will continue to become more disconnected, not more centralised.
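As a rough illustration of what a ‘common data language’ might buy us, here is a minimal sketch that normalises events from an LMS and a non-LMS source into one shared record format, so that inferences can span both. The source systems, field names and mapping functions are all hypothetical, and the actor-verb-object shape is only loosely inspired by specifications such as xAPI rather than being an implementation of any of them:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Iterable, List

# A hypothetical common record format. Field names are illustrative only.
@dataclass
class LearningEvent:
    learner_id: str
    verb: str          # e.g. "viewed", "posted", "submitted"
    object_id: str     # resource, forum thread, assignment, video...
    source: str        # which system emitted the event
    timestamp: datetime

def from_lms_log(row: dict) -> LearningEvent:
    """Map a (hypothetical) LMS activity log row into the common format."""
    return LearningEvent(
        learner_id=row["user"], verb=row["action"],
        object_id=row["resource"], source="lms",
        timestamp=datetime.fromtimestamp(row["time"], tz=timezone.utc),
    )

def from_video_platform(row: dict) -> LearningEvent:
    """Map a (hypothetical) external video platform event into the common format."""
    return LearningEvent(
        learner_id=row["viewer"], verb="viewed",
        object_id=row["video_id"], source="video",
        timestamp=datetime.fromisoformat(row["watched_at"]),
    )

def merged_timeline(*streams: Iterable[LearningEvent]) -> List[LearningEvent]:
    """One chronologically ordered view of a learner's activity across systems."""
    return sorted((e for s in streams for e in s), key=lambda e: e.timestamp)
```

Once everything speaks the same shape, a learner’s timeline can be assembled regardless of which system happened to record each event – which is precisely the property we lose if the LMS is treated as the only source of truth.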
With this in mind, I wonder if we are at a point in learning analytics similar to where Robert Kahn & Co were back in the early 1970s, when they defined the four underpinning principles of the open-architecture networking that became the internet. Although at the time, before the existence of TCP/IP, there was no standard model for how traffic would travel between points around the world, there was at least an understanding of what some of the characteristics of such a network would look like. Perhaps right now in the history of learning analytics we are waiting for one pervasive standard to emerge that will allow learning data to act as a collection of loosely coupled, easily integrated, vendor-independent entities which can be used and discarded as needed by the individual, depending on where the white rabbit takes them.
Perhaps in order to predict the likely future of learning analytics, we should be looking at the past, and drawing our own inferences from the information we have at hand as to where the journey will take us.