About Us
Helping students and teachers to realise their potential
In June and July each year, we mark around 10 million exam scripts and issue nearly five million results to students.
We know that getting the right result is very important to students and teachers and we have put it at the heart of everything we do. And by 'right result', we don't just mean getting the right number of marks or grade – it has to be an accurate reflection of a student's ability too.
Here you can find out what we are already doing to get it right, as well as how we are continuously improving what we do.
One of the strengths of the English exam system is that it allows us to assess students' abilities by asking them to do specific tasks as part of their exam or coursework. This makes the assessment more valid than if students only answered short factual or multiple-choice questions.
Some of these tasks include:
Of course, marking these tasks is not as straightforward as marking multiple-choice questions, so we need examiners to use their expert professional judgement to decide which mark best reflects the standard of the work.
To guide their judgements and make sure they are consistent, we set out the criteria for awarding marks in the mark scheme. We then check during the marking period that the marking meets the right standard.
As we and the other exam boards have developed new GCSEs, AS and A-levels as part of the Government's reforms, we have also reviewed the design of our assessments and mark schemes.
Drawing on research by the world-leading experts in our Assessment Research and Innovation teams, we have made changes where we think they will bring greater reliability and consistency to marking.
Some of this includes:
Designing assessments that accurately test different ranges of ability means students are more likely to get the grade that reflects what they can do, and examiners are much clearer about the right number of marks to give the work they are assessing.
As we are relying on examiners to use their professional judgement, we need to make sure we have the right people in place. Almost all our examiners are qualified teachers who are currently teaching or have recently taught the subject they are marking.
We sometimes recruit a small number of PGCE and PhD students to mark papers or a specific question, as our research shows that they do a good job. They are carefully selected based on their strong academic backgrounds and familiarity with the subject they are marking, and they have enabled us to expand our pool of high-quality examiners.
All examiners and AQA colleagues who develop assessments go through rigorous training so they are up to date with the latest research and developments in assessment design.
While examiners are experts in their subjects, they have different levels of experience when it comes to marking. We have strict quality controls in place and check marking along the way to ensure examiners mark to the right standard, whether marking is paper-based (traditional) or online.
Here is a brief overview of the process and quality controls we use.
Before marking starts, all examiners must successfully complete a standardisation process so they fully understand the mark scheme and where to award marks.
During standardisation, a panel of senior examiners marks a set of students’ answers. Examiners are then given these answers to mark, and their marks are compared with those agreed by the senior examiner panel.
If there are any differences, the examiner’s team leader will discuss the discrepancies with them. They will only be allowed to start marking their allocation of students’ work when their team leader is satisfied that they understand how to apply the mark scheme correctly.
In addition to standardisation at the start of the process, we monitor examiners' marking to ensure they understand the mark scheme and are applying it correctly. Depending on how the exam papers (scripts) are marked, there are different ways of carrying out this checking. Here is some of what we do.
During online marking, we test a sample of examiners' work to check if they are applying the mark scheme correctly. We call this process 'seeding' and this is how it works:
The seeding process is repeated throughout the marking period to help ensure that examiners stay on track.
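The seeding check described above can be sketched as a simple comparison between an examiner's marks on the seeded answers and the marks agreed by the senior examiner panel. This is an illustrative sketch only: the function name, the tolerance of 2 marks, and the number of allowed misses are assumptions, not details of the actual system.

```python
# Hypothetical sketch of a seeding check. The tolerance and the number of
# allowed out-of-tolerance marks are illustrative assumptions.

def check_seeded_marks(examiner_marks, agreed_marks, tolerance=2, max_misses=1):
    """Compare an examiner's marks on seeded scripts with the marks agreed
    by the senior examiner panel. Return True if the examiner stays within
    tolerance often enough to continue marking."""
    misses = sum(
        1 for given, agreed in zip(examiner_marks, agreed_marks)
        if abs(given - agreed) > tolerance
    )
    return misses <= max_misses

# One mark outside a 2-mark tolerance (9 vs 12) is still acceptable here.
print(check_seeded_marks([14, 9, 20], [15, 12, 20]))
```

Repeating this check on fresh seeded answers throughout the marking period is what keeps examiners "on track" in the sense described above.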
Sometimes seeding isn't the best option, for example for longer essay-type answers. In these cases, we use double marking, where two examiners mark the same answer, to check the quality of marking. We need to allow for small, acceptable differences in professional judgement, which we call 'tolerance', but if the marks differ by more than a small number, a senior examiner decides on the correct mark. As with seeding, examiners who are not applying the mark scheme correctly are stopped until their team leader is able to contact them and discuss their marking. Examiners can also be temporarily stopped while the other examiner or the senior examiner completes their marking.
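The double-marking rule amounts to a simple decision: if the two examiners' marks fall within tolerance, an agreed mark is recorded; if not, a senior examiner decides. The sketch below is a hypothetical illustration: the tolerance value and the averaging rule for marks that agree are assumptions, not AQA's actual procedure.

```python
# Hypothetical sketch of the double-marking decision rule. The tolerance
# value and the averaging of in-tolerance marks are assumptions.

def resolve_double_marking(mark_a, mark_b, tolerance=2):
    """Return the agreed mark if the two examiners are within tolerance,
    or None to signal that a senior examiner must decide."""
    if abs(mark_a - mark_b) <= tolerance:
        # Within tolerance: settle on the rounded average (an assumption).
        return round((mark_a + mark_b) / 2)
    return None  # Outside tolerance: escalate to a senior examiner.

print(resolve_double_marking(17, 18))  # within tolerance
print(resolve_double_marking(12, 19))  # escalated to a senior examiner
```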
We are moving more of our marking online, as our research shows that it increases reliability. For example:
After results are published, teachers and students who are unhappy with their marks can apply for a review of marking or other post-results services.
The exam system operates on a large scale and involves an important element of human judgement. Over the past few years, there has been an increase in the number of requests for marking reviews, especially in cases where a student has missed a grade by only a mark or two.
We know that every mark and grade change is significant for students, and we’re doing everything we can to make sure our marking is as accurate and reliable as possible. But we do sometimes get it wrong, so we want to make sure that any reviews of marking provide a fair and consistent way to put things right.
In 2016, Ofqual introduced some changes to post-results services that mean marks can only be changed where there has clearly been an error in the marking or moderation.
All our reviewers are experienced, expert examiners and moderators, who marked or moderated exams in the summer series before the reviews. In November 2017, we introduced some enhancements to their training and to how we monitor their progress and performance throughout the marking review period.
To ensure that they fully understand the review of marking process and what is expected of them, we provide mandatory marking review training through an e-learning module as well as detailed guidance on the process. Reviewers also spend time re-familiarising themselves with the mark scheme and doing some sample marking for the specific assessment they will be reviewing.
We have a dedicated team and robust processes in place to monitor reviewers’ progress and performance throughout the marking review period. This means we can provide any necessary support or corrective action to ensure performance standards are consistent and agreed timescales are met, and that marks only change where there is an actual marking or moderation error.
Because we recognise how important getting the right result is for everyone, we are always looking to improve, using research evidence to build on what works well and learning from mistakes. This includes analysing the performance of all our papers, processes and examiners after each exam series and taking on board feedback.
Read about how exams work and watch our animations explaining the process.