Inside Exams

Teacher Craig Barton is back with series two of Inside Exams, the podcast that gives you an access-all-areas pass to snoop around behind the scenes at AQA.

He’ll be meeting the people who write and mark your students’ exams, as well as pioneering teachers, to get answers to all the questions you ponder throughout the school day.

© AQA 2019

Reviewing assessments: a date with data

Series two
Episode one | 11 November 2019

How do exam boards review their assessments? Craig Barton gets some answers from AQA’s assessment design manager, chair of examiners for biology, and head of curriculum for history. Louisa Cotterill, a Head of Humanities, also looks at ways data can be used to monitor classroom progress.

Featured in this podcast

Craig Barton – Maths teacher, podcaster and author

Georgina Holmes – Assessment Design Manager at AQA

Eoin MacGabhann – Head of Curriculum for History at AQA

Michelle – Chair of Examiners for Biology

Louisa Cotterill – Head of Humanities at Alcester Academy

Episode resources

Craig Barton: Hello, and welcome back to series two of Inside Exams. I’m Craig Barton and I have 15 years of classroom experience as a maths teacher. However, there are things about exams that continue to take me by surprise. So, this is the podcast where I head behind the scenes to ask the questions you’ll struggle to find answers to anywhere else. I’ve been back on the road visiting classrooms across the country to get an idea of what information I can gather from exam boards to better equip you to teach.

Natasha: Hi, my name is Natasha, I teach sociology and psychology and I would just like to ask how you decide on the questions for the next set of exams. Do you base decisions for the next year’s exams on this year’s results or feedback at all?

Craig Barton: Feedback is an interesting one. It’s immediately made me think of the annual performance review at work which in turn has sent a slight shiver up my spine. Being called in for a chat by your boss is brilliant if you’re showered in compliments, but there are also times when we have to face the uncomfortable reality that we fell short in some areas and need to make changes. There must be merit in exam boards subjecting their assessments to similar scrutiny, so do exam boards look back in order to look to the future? And what information will they base subsequent decisions on? To answer your question, Natasha, I’m going to speak to AQA assessment design manager, Georgina, chair of examiners for biology, Michelle, and Eoin, who is head of curriculum for history. So, one of the things I learnt in season one of Inside Exams is that it seems there’s almost as much work going on after the exams have been written and set as goes into actually creating them. And one of the areas that I was absolutely clueless about is the review process that awarding bodies go through. So, my first question is, when are assessment reviews done and why are they done?

Georgina: The assessment reviews tend to happen about a month after the final marks are all in and they actually signify the start of the question paper writing process for the next year. So, we don’t start writing until we’ve got feedback from the previous series.

Craig Barton: Are there different focuses for these reviews or is it just one big review?

Eoin: There are different focuses. One is to make sure that the feedback that we’ve gotten from teachers and students by email, verbally, over Twitter, over Facebook, is acted upon. At times, there will be things that we’ll decide that we don’t want to act upon, and there are times there’ll be things that actually we will have a look at in the review in the light of the question data. How students did on particular questions, just to see if we can make improvements to the examinations, which we have done in history in the past as well.

Michelle: I think, from my perspective, although the formal review starts about a month after the marking review is complete, there is work going on before that, during the marking, because of the monitoring of live marking that I would be involved in. I’m starting to collect maybe some of that soft data, not just from what teachers or students are emailing in, but what we’re actually seeing. So we’re starting to get some thoughts together about how questions and question papers have functioned, the data then comes in and we have that meeting around October/November where we’re looking to see, okay, is this confirming what we think’s happened? Was it an issue or isn’t it an issue? So, there’s that kind of pre-review process, that’s all that information when you’re looking at live pupil responses.

Craig Barton: Gee, so it’s a real long-term thing, this.

Michelle: Yes, I would say it is. It’s a continual cycle that’s merging into one, I would suggest, where you’re just constantly getting more information, you’re developing the team’s approach to writing papers and you’re constantly refining and improving practice.

Craig Barton: I love specifics on this, and I’m fascinated to get behind the door into what actually goes on in the awarding bodies. So, let me ask you this, Georgina, where are you getting all this data from to help you with the review and just how deep are you going with it?

Georgina: The main source of the data in terms of numbers comes from the question paper functioning reports which are also known as QPFRs.

Craig Barton: Yes, I like it, catchy.

Georgina: And you’ll love them because they’re full of numbers.

Craig Barton: Now we’re talking, things are looking up.

Georgina: So there’s lots of different information in there and we also tend to look historically as well, so we’re not just looking at the data for this year, but we’re looking at the data from previous years as well, so it helps us to build up a picture over time.

Craig Barton: And when you say ‘numbers’, what are we talking here? Just literally the number of kids who’ve got different marks on questions or are you breaking that down into cohorts? Just how specific are we going here?

Georgina: Question paper functioning reports contain lots of different types of data, typically we have mean marks, we are able to look at how the papers correlate with each other and how they correlate against the subject. We’ve got quite a lot of item level data, so for each item or question, we have information that tells us how well it discriminated, so we’ve got a discrimination index, and then we’ve also got a facility index as well, so it tells us how hard or easy the question was for students. We also have item level distributions, so we’re able to see, per-question, how students performed on that particular question. For our levels-of-response questions, typically, something that’s got a higher mark, six or nine marks, we would expect to see a normal distribution. Obviously, for the point mark questions, one, two, three, four, we’re not necessarily looking for that, but yes, it is quite detailed. And then, of course, we can manipulate it in lots of different ways, so we can rank-order the questions from easiest to hardest, rank-order them in terms of how well they discriminated. There’s a plethora of data out there, but it’s obviously not the only data that we look at, as Eoin has indicated, we pull a lot of data from customers, what are they saying? So, we merge the two together.

Craig Barton: Wow, I’m getting excited about getting my hands on this spreadsheet. Michelle?

Michelle: One of the things we’ll look at in science is per-question performance. When we write a paper, we aim it at a particular standard of attainment, so we will spend quite a bit of time in those review meetings looking at the statistics as to where the question actually performed. So, if we intended it to be a question aimed at grades 8 and 9, is that where the question performed or was it, maybe, slightly too accessible and actually lots of students around a grade 6 or 7 were getting marks? Or, was it too hard?

Craig Barton: Eoin, where does subject expertise come into this? Is it all just kind of relying on info from data and info from the public, or do the examiners or AQA people, does their subject expertise come into play at all?

Eoin: Yes, I think in terms of deciding what feedback that you get from teachers and students, and examiners as well, what type of feedback is something that needs further investigation? A lot of the stuff that we’ll see during the summer in terms of feedback over Twitter, you can see it and think, ‘well, that’s just people venting after an exam’ and that’s fine. But if there is feedback on a particular question where, let’s say, at A-level, you might have people who might challenge the basis of the question. At that point, you will confer with the chiefs and the chairs and the lead examiners, they are very much the experts, especially in history where we have 30 different topics at A-level and 20 or 16 at GCSE. There is an awful lot of very specific expertise that we can draw upon to say, ‘is this an issue? Do we need to do some further investigation on this?’ And at that point, then, that sort of soft data will usually inform the more statistically-based evaluation that people like Georgina will do during the question paper review.

Craig Barton: Can I just ask on that, we had Raquel on in season one talking about how she’s managing all this Twitter info that’s coming in and she talked about if it seemed there was a problem in a paper, she’s straight on the phone to somebody. So if there’s a history issue kicking off, is she straight on the phone to you? Are you the first point of contact there?

Eoin: Absolutely, and on the morning of the exam or when the exam has started or the immediate period after it’s finished, you are looking at the phone and hoping that nobody is going to talk to you or ring you.

Craig Barton: It’s never good news when that phone rings, is it?

Eoin: It’s a wonderful feeling when you get through that day without anybody actually saying, “Oh, you need to come down to the bunker.” And I do remember one of Georgina’s colleagues who I worked with in previous years, I went down on the day of the exam to ask him a completely incidental, very non-important question. And the poor man nearly lost his life when he saw me because his immediate assumption was, something has gone terribly wrong, and he was furious with me when he found out it was something like “Have you got change for the Coke machine?”

Craig Barton: Georgina, if I could just come back to you, we haven’t spoken about numbers for about 30 seconds, so I start getting a bit on edge at this point. I’m interested in, what are you looking for that gives you a sense that a question has performed well? What’s in the data that suggests it’s doing what it should do?

Georgina: We do have to look at everything individually. Really, what we’re looking at is, what was the intention? What did we intend for the assessment to do and did it then do that? I mean the most important thing for us with any assessment is that we have rank-ordered the students appropriately, because that’s actually what assessments are intended to do. So, we would be looking at things like the standard deviation for the overall paper, so a good standard deviation means we have managed to spread those students out over all the marks that are available. If we don’t have a good standard deviation, then that would be a place where we’d start to look and say to ourselves, “Well, why did that happen?” We do then go into quite a lot of detail, so we may also look at the question level distributions, so if we saw that a large number of students, or larger than expected, were getting zero marks, we might ask ourselves why that is. If it was a high demand question, then we might expect to see that anyway. If it wasn’t intended to be of high demand, we would go back and look at the question and say, “Well, how did we rate it? What language did we use? What was the area of content?” Sometimes we might look at it and say, “Well actually, that’s a really hard area of content for students anyway,” so even though it wasn’t intended to be high demand, it was just hard for them.
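
The spread check described above could be sketched as follows. The 0.15 threshold, the function name and the made-up totals are purely illustrative assumptions, not AQA’s actual criteria.

```python
# Illustrative sketch: does a paper's standard deviation suggest it spread
# candidates out enough to rank-order them? Threshold and data are invented.
from statistics import pstdev

def spreads_candidates(totals, max_mark, min_sd_fraction=0.15):
    """Return True if the totals' standard deviation is at least an assumed
    fraction of the maximum mark, i.e. the paper separated candidates."""
    return pstdev(totals) >= min_sd_fraction * max_mark

bunched = [40, 41, 42, 41, 40]   # everyone on nearly the same total: poor spread
spread = [12, 25, 40, 55, 68]    # totals across the mark range: good spread
print(spreads_candidates(bunched, 80), spreads_candidates(spread, 80))  # → False True
```

A paper that fails a check like this is exactly the case Georgina describes: the review would then drill down into question-level distributions to find out why.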

Craig Barton: Michelle, if I can come to you on this, how much of this data and insights gets back to teachers and how do you get it back to teachers? Because this sounds like really important, interesting stuff that teachers need to know, so what’s the line of communication there?

Michelle: The three chairs, in this case, biology, chemistry and physics, we work very closely together, we will be looking for those common patterns and common issues that we’re seeing around question topic areas or types. So typically, in science, across all three disciplines, we are seeing that maybe required practical questions, students aren’t maybe accessing those as well as we would expect, considering they are required within the specification. But in a subject like science, we’ve got hundreds of examiners, so the team leaders will be feeding back to the lead examiner as well, what they’re seeing. So you’re getting a lot of information about the questions where students maybe aren’t attempting it as well as you would expect, and that goes into the examiner’s report.

Craig Barton: Eoin, just on that, is it just the questions that you’re analysing here? Does the mark scheme come under scrutiny in this kind of review process?

Eoin: Absolutely...

Craig Barton: So, nothing’s off the table here.

Eoin: Yes, sorry, I went into Darth Vader there. Every single aspect of the assessments and the specification, it’s not that it’s all up for grabs, but we do scrutinise it to see if it’s working in the way that it was intended to work. And I think no spec or no assessment is perfect, so if we can make changes that we think will benefit teachers and students, and still maintain the objectives of the assessment and the specification as it’s set out, then we’ll make those changes. This year, in history, as a result of the review, there was an improvement to be made in terms of adding more time onto the examination. After the question paper review, if we are looking to make a change, we will chat with Ofqual, we will submit to Ofqual, we’ll run the ideas by them and if they say that seems okay, then, working in partnership with them, we’ll then make the improvements to the assessments that we want to make. And as a result, this year, we did add 15 minutes onto the examination from the 2019 exams for GCSE.

Craig Barton: I just want to come back to this notion of using data. I can picture how you could analyse the questions, but how on earth do you analyse a mark scheme with data?

Eoin: After the review process, it would be someone from the curriculum team and there would be senior examiners there and there’ll be an assessment design manager there and any other relevant staff from AQA. I think the input from the senior examiners is very, very important in saying, “Okay, the way that this mark scheme is written or the way it has been interpreted is perhaps leading to an effect that we don’t want in the way the question is performing.” So, the directions in the mark scheme, in history we have an awful lot of levels-of-response mark schemes, and there is a judgemental facet to marking in history as there is in any sort of essay-based subject like English or even parts of geography as well. And that means that if there is a judgemental element to the mark scheme, we really do want it to be as clear as possible, and also as positive as possible, so that we don’t have huge bunching of marks at the bottom that we can’t discriminate between. So the wording of the mark scheme would come under scrutiny like that in the light of the data that we’d see.

Craig Barton: Wow.

Michelle: I think, after the review process, obviously we’re into question-writing and maybe not everyone is as clear that actually we’re spending a lot of time commenting on and refining that mark scheme. So a lot of my comments and a lot of my questions in meetings would be around ‘this question doesn’t elicit this mark scheme, it doesn’t elicit this point’, and we work on the wording where the mark scheme is not right. So there will be just as many comments about the mark scheme, to get it right, as there probably are about the question paper.

Craig Barton: Georgina, I’m back to you on this one here, how do you analyse the aesthetics of an exam paper? And the reason I’m asking this is, we’ve had a question on Twitter from [@Sheena2907] and she says, specifically, ‘how do you judge the space given for answers?’

Georgina: In terms of the answer space, there is a rule of thumb in terms of the answer space that we have. It’s usually a number of lines per mark, but we are able to go outside of that. Students are actually likely to write quite a lot, so we would make the answer space a little bit bigger. The answer space is actually really important though because students are looking to us to guide how much they should write, so it’s really important that we don’t put too much space in there, otherwise they might think, ‘well, I’ve kind of written everything I think I know, but there’s still another five lines to go. So, I’m going to have to keep writing and keep filling’. We don’t want to waste time doing that when they should be moving on to another question, so yes, it’s quite a fine line. But that’s something else that we do take feedback on, year-to-year, so we’ll look at the number of additional pages that came in, i.e., the number of students who wrote additional pages outside an answer, if that exists, because that’s an indicator that we didn’t get the spacing right, the answer space right for the students. I think, as well, I’ve sometimes had in geography where we have questions under figures or graphs, students can quite often miss those out if we haven’t put enough white space between the graph and the question, so we have to be really careful. We also look at things like making sure that the source and the answer space are on facing pages, so that’s why you’ll see answer booklets where you’ve got page one and two and then there’s a blank page that has the big line through that says ‘this is a blank page, do not write in this space’. And then it opens onto another double page, and we do that on purpose so that students don’t have to flick back if they have to refer to a source.

Craig Barton: We’ve gone through this review process, do you have any practical examples of how it’s led to improvements in future series?

Michelle: I think the thing that springs to mind for science, and that we get comments about, is when you’ve got questions targeting assessment objective 2, so that’s students’ ability to apply their knowledge and understanding.

Craig Barton: Yes, are carrots coming into play by any chance?

Michelle: It wouldn’t be a podcast without carrots, would it?

Craig Barton: That’s absolutely right.

Michelle: This year it was the axolotl, which …

Craig Barton: It was the what?

Michelle: Axolotl.

Craig Barton: I’ve never heard of that one.

Michelle: I had lots of children…

Craig Barton: There lies the problem, right?

Michelle: No, actually the students answered the question, it performed quite well, because we gave them a diagram, but we do know that those type of questions, from all of the reviews, cause more problems than others. So, as a result of review, we do spend a lot of time, throughout the whole process, really checking the context, that we have got it as simple and clear as possible. So that the context isn’t getting in the way of students demonstrating their knowledge and understanding. It’s a difficult one because, obviously, under the Ofqual rules that were laid out for all awarding body specifications in science, we have to ask those questions. But we don’t want to choose a context that we think is really exciting and interesting, that actually is just too complex for students, if there’s another way of asking it. So, I think a focus on those AO2 questions is something that we’ve been looking at as a result of review.

Craig Barton: To end though, let me come to you. If we’ve got teachers listening to this who think ‘I really want to benefit from the work that’s going on in this review process’, what’s the key takeaway? What should teachers be doing or thinking about as a result of the work that you and your team do at AQA?

Eoin: Firstly, if they have an opinion on the exam, either positive or negative, contact us, let us know. There are particular social media groups where teachers gather together and ferment. Tell us about it. The other thing is, if you do get through to people on my team, in the main, we’re all ex-teachers, I mean I think the curriculum team in AQA is essentially a kind of a home for bewildered ex-teachers, to be honest. We get an awful lot of feedback, we try to make sure, as far as possible, that the people who set the papers and mark the papers get that feedback. Over the course of the summer, let’s say for A-level history, we may get 25 or 30 emails a year. Well, when we see examiners, we do share this stuff with them, sometimes it’s not very easy reading for them, but it’s always good for them because they are at that point of the process of thinking about next year’s exams or the exams two years down the line. That information does get passed on, so do call us, do contact us, and also if you did like the exam, it’s always nice to get an email like that as well. And they do come in, they’re far, far rarer, but, you know, it’s still something that feeds into the review as well.

Michelle: I would suggest that teachers look at the enhanced results analysis tool on e-AQA. Obviously, when I worked in school and when I worked with schools, that’s one of the things we look at, because that programme will allow you to look at certain question types for your cohort. There might be something that teachers can consider in their department and the way they’re approaching their teaching that they could alter for the next cohort going through.

Craig Barton: Fantastic, well once again, I’m enlightened here, I’m reassured that so much thought goes into this review process. I’m excited in the fact that, as a teacher, I can share my opinion and it will be listened to. So, Georgina, Michelle and Eoin, this has been absolutely fascinating. Thanks so much for your time.

Georgina: Thank you.

Eoin: Thank you.

Michelle: Thank you.

Craig Barton: Fantastic. I have to say, I’m impressed by this desire to dissect every detail of the previous exam series in the hope of making the next one even stronger. But, as teachers, we can’t rest on our laurels and assume that our teaching should stay the same each year. I think it’s vital that we have a similar ongoing review process too. I’m off to meet Louisa Cotterill, Head of Humanities at Alcester Academy, to find out how individuals and departments can use data to monitor performance throughout the year. I’m dead excited about talking about how you analyse data in your students’ results. Before we get into what you do now, how has it changed over the years? And the reason I ask is because this is my 15th year of teaching and when I think back to those first couple of years, we didn’t have half the analysis tools that we have now, and it was a bit of a flippin’ nightmare. Is it the same for you? What did the analysis used to look like?

Louisa: You’re right, things have massively changed. So, when we were first looking at data, you’d look at what the school had from the exam boards and you wouldn’t have the Exampro stuff from the exam boards as well. And so you were sort of second-guessing what the exam officer gave you and sometimes you’d think to yourself, ‘well, I’m not really sure what I’m supposed to do with that’.

Craig Barton: Yes, and looking back, I don’t know how I survived with that because you didn’t have anything, you didn’t have that pupil-level, question-level analysis, you didn’t have any of that. It was a bit of a guessing game, wasn’t it?

Louisa: Yes.

Craig Barton: So, tell me what you do now, and I want to go as specific as possible, do not hold back here, so tell me, what’s the first thing that happens and when does the process start for you?

Louisa: Well, we literally start on the next day after GCSE results, so I’ve got all the data on SISRA and straightaway I’ll be looking at what are the areas we need to improve, what things do we need to look at that aren’t really helping us to be successful.

Craig Barton:  And how are you getting those areas to improve? What are you looking for there? And the reason I ask this is, there’s a danger that, just because kids have done really well on one question on a certain topic, but not so well on another question, because the exam is only assessing a small portion of the domain of the whole subject, you kind of over-react in a way and think ‘oh God, that’s a disaster area’ or ‘we can relax on that’. Does that make sense?

Louisa: It does make sense completely, because I’ve been doing some SEF work today in prep…

Craig Barton: So, what’s SEF?

Louisa:  Our subject evaluation form.

Craig Barton: I like it, SEF, yes, nice.

Louisa:   It’s my favourite part of the job. I’ve got three to write and I love it.

Craig Barton: And you’re not being sarcastic? You love it?

Louisa: No, I’m being sarcastic. RE, geography and history, and they’re 26 pages long, but …

Craig Barton: Wow, oh God, wow, and there’s no copying and pasting going on between them.

Louisa: No, so that means I know the kids inside and out, and it also means that I can look at what exactly… the middle ability for us in history and geography this year was the area that we need to look at. And as much as I think it sometimes is a chore, it’s not, because I can see exactly what we need to do. We knew that we need to target the top-end 7 to 9, so we thought about what we were doing, are we challenging kids? Are we like pushing them? Are we giving them extra responsibilities in class?

Craig Barton: Who is involved in the process of review? Is it just you as head of department?

Louisa: No, it’s all of us.

Craig Barton: Everyone, so talk me through the practicalities, are you sat around a table? Are there biscuits, cups of teas involved? What’s happening there?

Louisa: So, the first day, obviously, results day, it’s my second and me, thinking about what we need to do.

Craig Barton: And this is the day after results, is that right?

Louisa:  Yes, Gareth is my second and he’s Head of History as well, so we sort of think ‘what do we need to disseminate to our other staff?’

Craig Barton: So, this is still on this Friday after results day?

Louisa: Yes.

Craig Barton: So, the rest of the department aren’t involved at this stage, is that right?

Louisa: Not yet.

Craig Barton:  So, then what practical things are you putting in-place to pass on the insight that you’ve got from these results to your department? How do you disseminate that info?

Louisa: We have a really tight department that’s five members, and I think that’s the strength of Alcester, as a school, I think across the school, we do this. So, I have a meeting straightaway, we have an afternoon that’s Inset, so the first morning is results and then the second half is like subject. We just literally sit there and go through what we need to do, what our plans are, we’re talking about intent, implementation and impact as well for our Ofsted, so I want my staff to know exactly why they’re teaching it. It’s not about how, but why.

Craig Barton: And I’m just picturing, so you’ve got all this data on that Friday after results day, how much of that are you sharing with the department or are you doing a kind of big picture approach, if that makes sense? How specific are you going?

Louisa:  We share it with everybody. I think that you need to do that. I think if you’re not inclusive and don’t share your data, especially with things like the SEF, I think people should be able to see that, it’s a subject-wide thing. If I’ve identified that that’s not what we need to be doing, then we need to be thinking about what we’re doing in the future.

Craig Barton: So, from looking at the past data, are there any examples of concrete changes you’ve made to your practice, going forward?

Louisa: So, we’ve implemented some Saturday schools, which I know is controversial, and which is up to the individual teacher, and we’re also thinking about holiday school.

Craig Barton: Just on the Saturday and holiday revision, are you using the insights that you’ve learned to determine what goes into those?

Louisa: Yes, we target different groups of students on different days, so they have an invitation that they come in and we phone home, and it’s like a positive thing though, it’s not ‘you need to come into revision’, it’s like ‘this would be amazing for you to come in, what you get from it, this is like a 5 to 7, this is a 7 to 9’.

Craig Barton: Wow, and like topic-specific or just kind of grade range-specific?

Louisa: Topic-specific as well really, so, for example, in history, last year the study was leading… I think it was castles, and they went to a castle, they went to go and see it and then some kids didn’t go, so then those kids were invited to come in. It’s been so effective at our school, I think it’s made the difference. I know it’s controversial in terms of time, and not everyone wants to do it, which is fine, because not everyone has to do it.

Craig Barton: I remember a particular child, I won’t name him, but back in the days when it was A* to G, like I got him, he was in my class and the reason he was in my class was because he had a target of an A. And that had come from KS2 SATs, and again, I knew it and he knew it, there was not a cat in hell’s chance he was getting this A, right? But we couldn’t move him down a set because –

Louisa: His target.

Craig Barton: – his target was an A, right? And it was an absolute nightmare, and the poor lad, he battled and battled and battled and he came out, I think he scraped a B in the end and that was like an absolute miracle that that had happened. But these situations always happen with data, don’t they? What do you do in those situations when you’ve got such a gap? Do you stick to your guns and say, “Look, this is your predicted grade” or do you have a quiet word with the kid and say, “Look, this is just something that the numbers have crunched out, but maybe something more realistic is this.” What do you do?

Louisa:  Yes, I think that’s what we do. So, for example, I had a student this year, he was predicted a 7, he’s probably on a 5, I just thought ‘I need to speak to you, I need to say to you you’re doing really well, this is the best …’ He had loads of outside commitments, he was doing drama, he was doing the play, I thought a 5 was going to be amazing for him to get him to sixth form, to do what he wanted, and then he got a 6.

Craig Barton: So you’ll take that, jeez.

Louisa: Yes, because he believed in himself rather than being aspirational and looking around and thinking ‘I can’t get a 7’.

Craig Barton: Yes, that’s interesting. I’ve been doing a lot of thinking, particularly for this series, about the notion of valid assessments and, indeed, we’ve got an episode coming up later in this series about assessment validity.

Louisa: I look forward to it.

Craig Barton: Genuinely?

Louisa: Yes.

Craig Barton: I say this to my wife, I say, “I’m doing validity,” she’s like, “Oh God,” she’s falling asleep before I get to the end of the sentence, but I absolutely love it. And I just think back to my practice where I’m either making up assessments myself or grabbing them from God knows where, or giving out past papers, like a year before the kids will be taking them. And I’ve just started to really question, how much reliable data can I get from those, given that these things, one, are just written by me or are designed to be taken by somebody in the May and June of Year 11? Is that something you can relate to?

Louisa: That’s really interesting. I’ve had to really question that this year because we went to a system of having to have end-of-year assessments that every year group sat. Because our school thought it would be good to get the younger ones practised with sitting in the hall, which has got its own implications for workload, which is a different story. But for Years 9, 10 and 11, it didn’t work because we were effectively doing end-of-unit assessments. So, what I’ve decided to do in humanities, across the board, is that we’re just doing an end-of-unit, because I didn’t find that data … it was useless last year.

Craig Barton: If we just move away from this big data thing that we get from the high stakes GCSE and A-level exams and so on, what does your data collection look like throughout the year?

Louisa: We have three lots of data at school before the exams, so we’ve got the mocks and we’ve got best case and gut feeling.

Craig Barton: Are they just integer values, so you bang a 7 in, a 6 and a 6? Or will you go…

Louisa: So, the mock is obviously based on the actual data and the gut data is based on what you actually think.

Craig Barton: Yes, and just to a nearest whole level?

Louisa: Yes, a nearest whole level, and then the best case is the same.

Craig Barton: How often are you doing that?

Louisa: Well, we used to do it every eight weeks, but we’ve gone to every half-term now because it was becoming unusable, it was just not meaning anything. So, what we’ve decided as a school is that it’s not worthwhile doing a mid-unit assessment, an end-of-unit assessment, an exam, and then an end-of-year exam, it was just becoming data for the sake of data. And parents were getting data that basically said ‘on-track’, ‘above track’, ‘below track’ and we didn’t even know what that meant.

Craig Barton: I’ve been there, yes, I’ve been there.

Louisa: Yes, you know what I’m talking about.

Craig Barton: I 100% do, yes.

Louisa: So, at the moment, I feel like we’ve got it right because we’re stripping it right back to one assessment a unit, every eight weeks, which is fine, because that’s teacher-led, it doesn’t have to be exam-led, so it’s your teacher judgement. And then we’ve got one at the end of the year that’s an exam.

Craig Barton: And from these end-of-unit things, what data are you collecting from those?

Louisa: I look at it against their target grade and I have a conversation with each of them, because we’ve gone to more verbal feedback.

Craig Barton: Each of these students?

Louisa: Yes, I put something in their books, but I do talk to them as well, but I’m conscious of the fact that I don’t want students to think that’s just what they’ve got, because it’s one exam. And when you’ve got really high ability students, which we’ve got a few that are predicted 8s and 9s, if they see they’ve got a 6, they’re like … you need to make sure they’re still motivated.

Craig Barton:  Yes, of course.

Louisa:  And at the bottom end, they find it okay, they don’t worry so much, but at the top end at our school, they do.

Craig Barton:  I love data and it seems like you love a bit of data as well. Is there a danger that there’s too much data bombing around and what kinds of problems does that create?

Louisa: I think sometimes people get so hung up on different groups of data, you know, sub-groups, whereas if you’re a good teacher, you’re teaching the kids well, you need to know about the sub-groups. I think the class context sheets we have to do at school, that’s invaluable because you know about them, but I’m not sure we need to be drilling down to that much. Banding is rubbish, I don’t understand banding for a start, I don’t even know why I talk about banding in my SEF.

Craig Barton: So, to finish, if we think back to reviewing assessments and data in general, we’ll have two groups of people, I guess, listening to the show, we’ll have teachers and we’ll also have heads of department and heads of faculty, and obviously you play the role of both. What advice would you have first for a teacher in terms of making the best use of the information and the review process that they get back from a high stakes exam like a GCSE?

Louisa: I think they need to have a really good relationship with their head of faculty and head of department. I think there’s no benefit in not using their expertise. I think you have to plan intrinsically, you have to plan in an integrated way. I think if you look at your own class, that’s fine, but if you’re a new teacher, you need the guidance of your head of department. You need to sit down and look at that data, you need to have a plan.

Craig Barton: Likewise then, as a head of department or faculty, what would your advice be?

Louisa: That would be the same.

Craig Barton: And instigate those conversations.

Louisa: Yes, I’ve had that conversation with my second and we all need to sit down and think ‘well, what is the purpose of the curriculum? What are we doing? What does this data mean?’ And I think also, which is most important, I think we’ve done this this year at our school, is to share the improvement plan with your department, because in the past it’s just been a subject leader document –

Craig Barton: Yes, that’s right actually.

Louisa: – that nobody else can access.

Craig Barton: And again, the key is in it, improvement, this is your…

Louisa: Yes, it’s your subject.

Craig Barton: And it’s the practical things that you’re going to do, based on what you’ve learned from what’s happened in the past.

Louisa: Exactly, and if you don’t share that with your subject, then you’re not going to improve. That’s what we’ve done, a massive change at our school, is that it’s open now.

Craig Barton: Do you know what, that’s interesting, I wonder for how many schools and departments that’s true where all –

Louisa: I don’t think it will be.

Craig Barton: – where all the members of the department have seen that improvement plan.

Louisa: Yes, I bet it’s not.

Craig Barton: I bet it’s not as well, you know, and as we say, that is the improvement and that’s the thing that’s come from this assessment review process.

Louisa: If you don’t talk about your targets, there’s no point having them.

Craig Barton: That’s a good soundbite, what a good way to end on, that’s fantastic. Well, it’s been an absolute pleasure talking to you, Louisa.

Louisa: Thank you, I hope so.

Craig Barton: I’ve learnt absolutely loads, thank you so much.

Louisa: Thank you.

Craig Barton: Well, I’ve come away from that chat very clear that sharing data and the emotional load within departments is key to a successful review. If you want to visualise the assessment design review process, head to the podcast show notes where you’ll find a graphic showing how question papers get produced. Eoin also mentioned level of response mark schemes. If you want to know more about those, or indeed any other type of mark scheme, you might just find episode seven of series one pretty enlightening. I’ll be back in two weeks’ time, getting answers to more of your questions, but in the meantime, make sure you rate, review and subscribe to the podcast. You can also join the conversation and ask your own questions on Twitter using #insideexams. Until next time, goodbye.