Why marking is rarely ‘right’

As many of us struggle through a pile of marking, wondering whether we have really assessed every student to exactly the same standard, the conclusion of a detailed study of experienced markers in the UK might be food for thought.

… we need fresh thinking about reliability, fairness and standards in higher education assessment, and that our current reliance on criteria, rubrics, moderation and standardising grade distributions is unlikely to tackle the proven lack of grading consensus. One way forward worth considerably more investigation is the use of community processes aimed at developing shared understanding of assessment standards. …

The real challenge emerging from this paper is that, even with more effective community processes, assessment decisions are so complex, intuitive and tacit that variability is inevitable. Short of turning our assessment methods into standardised tests, we have to live with a large element of unreliability and a recognition that grading is judgement and not measurement. Such a future is likely to continue the frustration and dissatisfaction for students which is reflected in satisfaction surveys. Universities need to be more honest with themselves and with students, and help them to understand that application of assessment criteria is a complex judgement and there is rarely an incontestable interpretation of their meaning. Indeed, there is some evidence that students who have developed a more complex view of knowledge see criteria as guidance rather than prescription and are less dissatisfied.

Accepting the inevitability of grading variation means that we should review whether current efforts to moderate are addressing the sources of variation. This study does add some support to the comparison of grade distributions across markers to tackle differences in the range of marks awarded. However, the real issue is not about artificial manipulation of marks without reference to evidence. It is more that we should recognise the impossibility of a ‘right’ mark in the case of complex assignments, and avoid overextensive, detailed, internal or external moderation. Perhaps, a better approach is to recognise that a profile made up of multiple assessors’ judgements is a more accurate, and therefore fairer, way to determine the final degree outcome for an individual. Such a profile can identify the consistent patterns in students’ work and provide a fair representation of their performance, without disingenuously claiming that every single mark is ‘right’. It would significantly reduce the staff resource devoted to internal and external moderation, reserving detailed, dialogic moderation for the borderline cases where it has the power to make a difference. This is not to gainsay the importance of moderation which is aimed at developing shared disciplinary norms, as opposed to superficial procedures or the mechanical resolution of marks. (references omitted)

Sue Bloxham, Birgit den-Outer, Jane Hudson & Margaret Price (2016) Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria, Assessment & Evaluation in Higher Education, 41:3, 466-481, https://doi.org/10.1080/02602938.2015.1024607
