
Comparing examination standards: is a purely statistical approach adequate?

By Ben Jones

Abstract

There has recently been a renewed interest in three types of comparability of standards in the United Kingdom public examination system: between years, between subjects and between the six examination boards.

Whilst comparisons of raw grade distributions are now generally acknowledged to be invalid indicators of relative standards, comparisons between adjusted grade distributions are regularly made for this purpose. Such adjustments typically result from statistically controlling for some of the relevant variables.

The dangers of such an approach are that only easily quantifiable variables are used in the adjustment and that any residual differences between distributions will automatically be attributed to differences in standard.

Using candidate‐level data from four 1994 Advanced level (A-level) mathematics examinations (designed for 18‐year‐old students), and paying particular attention to the School Mathematics Project (SMP) 16‐19 syllabus, the paper reports on two such analyses.

It then discusses some reasons why attributing differences in the adjusted grade distribution to differences in standard could be invalid. Whilst the study focuses on four A-level mathematics syllabuses, the same principles apply irrespective of the context in which statistical comparisons of examination results are made.

The methodologies, their shortcomings and the pleas for caution are not, therefore, specific to this study, this type of comparison or this examination system.

How to cite

Jones, B. (1997). Comparing examination standards: is a purely statistical approach adequate? Assessment in Education: Principles, Policy & Practice, 4(2).
