Students, laptops and late nights

As we come to the end of semester the following two studies might be of interest:

The first is Patrick, Yusuf et al, ‘Effects of Sleep Deprivation on Cognitive and Physical Performance in University Students’ (2017) 15(3) Sleep and Biological Rhythms 217.
Other studies of sleep deprivation have concentrated on the broader adult population or on tasks such as driving.  This study looks at the effect on university students of an all-night effort to write an assessment.  The authors find that, despite physical effects, the students’ cognitive abilities were not impaired.  So lack of sleep might affect their physical ability to write the exam paper neatly, but all-night cramming won’t affect their cognitive abilities.
Cramming, of course, may not mean they’ll have remembered what they needed to learn …
The other really interesting study is Arnold L. Glass & Mengxue Kang, ‘Dividing Attention in the Classroom Reduces Exam Performance’ (2018) Educational Psychology, DOI: 10.1080/01443410.2018.1489046.  They make the fascinating finding that:
Dividing attention between an electronic device and the classroom lecture did not reduce comprehension of the lecture, as measured by within-class quiz questions. Instead, divided attention reduced long-term retention of the classroom lecture, which impaired subsequent unit exam and final exam performance.
And retention is reduced even if you aren’t the one using the device, but are merely distracted by the person next to you using one.
So it seems that if you check Facebook during class, you won’t realise at the time that you are learning less, because your instant recall is just the same.  Later, though, the material won’t ‘stick’ and you’ll do worse in the exams.

Why marking is rarely ‘right’

As many of us struggle through a pile of marking and wonder if we’ve really assessed every student to exactly the same standard, the conclusion to a detailed study of experienced markers in the UK might be food for thought.

… we need fresh thinking about reliability, fairness and standards in higher education assessment, and that our current reliance on criteria, rubrics, moderation and standardising grade distributions is unlikely to tackle the proven lack of grading consensus. One way forward worth considerably more investigation is the use of community processes aimed at developing shared understanding of assessment standards. …

The real challenge emerging from this paper is that, even with more effective community processes, assessment decisions are so complex, intuitive and tacit that variability is inevitable. Short of turning our assessment methods into standardised tests, we have to live with a large element of unreliability and a recognition that grading is judgement and not measurement. Such a future is likely to continue the frustration and dissatisfaction for students which is reflected in satisfaction surveys. Universities need to be more honest with themselves and with students, and help them to understand that application of assessment criteria is a complex judgement and there is rarely an incontestable interpretation of their meaning. Indeed, there is some evidence that students who have developed a more complex view of knowledge see criteria as guidance rather than prescription and are less dissatisfied.

Accepting the inevitability of grading variation means that we should review whether current efforts to moderate are addressing the sources of variation. This study does add some support to the comparison of grade distributions across markers to tackle differences in the range of marks awarded. However, the real issue is not about artificial manipulation of marks without reference to evidence. It is more that we should recognise the impossibility of a ‘right’ mark in the case of complex assignments, and avoid overextensive, detailed, internal or external moderation. Perhaps, a better approach is to recognise that a profile made up of multiple assessors’ judgements is a more accurate, and therefore fairer, way to determine the final degree outcome for an individual. Such a profile can identify the consistent patterns in students’ work and provide a fair representation of their performance, without disingenuously claiming that every single mark is ‘right’. It would significantly reduce the staff resource devoted to internal and external moderation, reserving detailed, dialogic moderation for the borderline cases where it has the power to make a difference. This is not to gainsay the importance of moderation which is aimed at developing shared disciplinary norms, as opposed to superficial procedures or the mechanical resolution of marks. (references omitted)

Sue Bloxham, Birgit den Outer, Jane Hudson & Margaret Price, ‘Let’s Stop the Pretence of Consistent Marking: Exploring the Multiple Limitations of Assessment Criteria’ (2016) 41(3) Assessment & Evaluation in Higher Education 466.
