
Background


The methodology of learning analytics is concerned principally with the interrogation and interpretation of digital data harvested from educational applications such as LMS platforms or games, or from data capture devices embedded in the environment, including wearables and audio or video recorders. Learning analysts apply techniques such as social network analysis, data mining, machine learning, and semantic analysis. The field of learning analytics is young and is not without its challenges. There is growing awareness that measures of learning need to be accurate, fair, reliable, valid, and interpretable, regardless of whether they are used for prediction, for feedback, or for research (Bergner, Lang & Gray, 2017; Milligan, 2015; Prinsloo & Slade, 2017). Data can do harm if used to shape the information about, or treatment of, a person, especially if the data rests on problematic observations, or if the conclusions drawn from it rest on faulty inferences. This is especially so when decisions are made by automated algorithms. Questions are also being raised about the effectiveness of analytics (Ferguson & Clow, 2017).


Educational measurement also has at its core the analysis of quantitative data on learning (both large and small scale), but this field is older, and is concerned principally with the use of data to derive assessments of human attributes that are reliable, valid, have utility, and are interpretable by educational professionals across a wide range of specialties, most commonly teachers (Messick, 1995; Wilson, 2005). It focuses especially on measuring what learners know or can do, and is also used to gauge learning through longitudinal data analysis. There is a well-established methodology, underpinned by the understanding that data cannot speak for itself, that not every relationship found in data is meaningful, and that some relationships can be damaging if used to predict or shape learning. Educational measurement techniques provide a means to cut through the inherent complexity and interrelatedness of educational evidence, distinguishing what is meaningful and useful from what is merely related.


Although not young, educational measurement is not without its challenges either. Its job is getting harder. Changes in conceptions of what learning content should be assessed are evident in reforms of national and international curriculum frameworks, which now routinely supplement the cognitive outcomes of traditional subjects and disciplines with requirements that learners develop complex competencies comprising the knowledge, values, attitudes, skills, and beliefs required for effective performance in any field. These traits, and the data forms required to measure them, are new to many specialists in educational measurement and have proved difficult to manage using the field's traditional approaches. Teaching methods are changing too. Digital learning platforms and applications are now commonplace in classrooms. Greater reliance is placed on automated assessments and computer-based agents. Educational measurement and assessment increasingly draw on big data of the kind that learning analysts engage with, and their models, techniques and tools need to change at the same time (Mislevy, 2016; Pellegrino, 1999).


The advantages of methodological collaboration between these two fields have been noted in both the learning analytics community and the educational measurement community (Bergner, Lang & Gray, 2017; He et al., 2016; Wilson & Scalise, 2016; Wilson, Scalise & Gochyyev, 2016). There are advantages in exploring differences between the fields in assumptions about the nature of learning, in how learning can be detected and understood, and even in what is understood by the term ‘measured’. Different assumptions apply to matters of data adequacy and control, and to the standards of proof required to verify the utility and interpretability of findings. The fields use different statistical techniques for modeling data and for uncovering meaning in it. There is, however, already evidence that collaboration between the two fields can be productive, including the emergence of teams combining methodologies to good effect (Griffin & Care, 2015; Milligan, 2015; Milligan & Griffin, 2016; Shute & Ventura, 2009; Wilson & Scalise, 2016).


This Symposium-style Workshop will focus on the methodology of learning analytics, aiming to extend the conversation between two communities of scholars – learning analysts and educational measurement specialists – to the benefit of both. The fields share an interest in learning, a commitment to improving practice, and a belief that data can assist understanding of learning. Both fields have an interest in measuring learning. There are also differences, and these present both challenges and opportunities for productive collaboration.
