Assessing What Matters: Why AI Should Change What We Test, Not Just How We Police It

Published

Thursday 5 Mar 2026

Author

Rose Luckin

As AI makes traditional coursework easy to replicate, the real challenge for education is not preventing misuse but rethinking what we assess. This piece argues that assessment must shift away from knowledge recall towards the human capacities AI cannot replace, such as judgement, metacognition and self-awareness.

As part of our exploration of the broader topic of The Place of AI in Education, we’re working with thought leaders and key stakeholders to examine what it means to embed AI in our education system. The first article in this series discusses how the traditional assessment system needs to change in an age where AI can outperform humans in areas like knowledge recall. It argues that this shift should push us to focus more on assessing other areas, such as metacognition and emotional regulation, which AI cannot replicate.

A college leader told me recently that her institution had spent the best part of a term devising new procedures to stop students using AI on their coursework. Staff were exhausted. The irony, she said, was that the written assignments they were trying to protect were the kind of thing AI could now produce competently in minutes. She was not asking me how to catch cheats. She was asking a far more important question: if AI can do this work, should we still be assessing it this way?

It is a question the entire sector needs to confront. Much of the current conversation about AI and assessment focuses on detection and prevention. Universities are redesigning coursework. Schools are tightening supervision. Awarding bodies are weighing the risks. The Curriculum and Assessment Review final report rightly noted the risks AI poses to coursework, standards, and fairness. These are legitimate concerns and they deserve serious attention.

But the most important question is whether our assessments are testing the right things – the things that matter in the modern world. The Review states that exams remain the principal form of assessment, in part because they mitigate the risks posed by generative AI. There is a paradox here that we have not yet resolved: we are preparing young people for an AI-enabled world by assessing them in conditions where AI is forbidden. If we define educational success primarily through the recall and reproduction of knowledge, we are asking students to compete with machines on the machines’ own terms. That is a contest humans will increasingly lose.

The real opportunity is to use AI as a catalyst for rethinking what we value. Drawing on decades of research in cognitive science, developmental psychology, and education, I have argued that human intelligence is best understood not as a single capacity but as an interwoven model comprising seven elements. Knowledge about the world, the domain where most current assessments operate, is only one of them.

The remaining six are where things become far more consequential:

  1. our personal epistemology, which is our understanding of what knowledge is and how it is constructed,
  2. our social intelligence,
  3. our metacognitive awareness, meaning our ability to know what we know and do not know,
  4. our meta-subjective intelligence, encompassing emotional and motivational self-regulation,
  5. our metacontextual awareness of how environment shapes our thinking, and
  6. our accurate perceived self-efficacy, which is the capacity to make sound judgements about our own capabilities.

These elements are precisely what AI cannot replicate. An AI system does not understand what evidence is in the way humans do. It identifies patterns in data rather than grasping the meaning, reliability, or contextual weight of evidence. It can present competing claims, but it cannot model the genuine intellectual judgement required to weigh them, nor can it recognise when a student’s reasoning is superficially sound but fundamentally flawed.

AI has no emotions, so it cannot develop the resilience a young person needs to persist through genuine intellectual difficulty. It has no direct experience of the physical or social world, so it cannot cultivate the contextual judgement that underpins wise decision-making. And yet our assessment systems largely ignore these dimensions of intelligence. We test what AI does well and neglect what only humans can do.

This does not mean abandoning examinations. It means broadening what they assess. We could, for example, ask students to use AI to produce an initial analysis and then assess them on their ability to critique, refine, and defend the result. We could design assessments that require collaborative problem-solving, where the quality of a student’s contribution to a group’s thinking is evaluated alongside the outcome. We could probe metacognition directly, asking students to explain not just what they know but how they know it, where their understanding is weakest, and what they would do to strengthen it.

These are not fanciful ideas. There is robust evidence that metacognitive skills can be developed and assessed, and that doing so improves learning outcomes across all ability levels.

The stakes are high. Recent research has found that students who rely heavily on AI demonstrate what has been described as ‘metacognitive laziness’, associated with weaker metacognitive regulation and self-monitoring behaviours, alongside increased procrastination. If our assessment systems continue to reward the kinds of performance that AI can deliver, schools will understandably teach to those measures. We will produce students who recall facts efficiently but lack the capacity for independent, critical, self-aware thought. That is the opposite of what an AI-enabled world demands.

Awarding bodies are uniquely placed to lead this shift. Assessment systems drive the nature of education systems across the globe. If we change what we test, we change what schools teach. The arrival of AI is not a threat to assessment; it is an invitation to assess what has always mattered most. We should accept that invitation with urgency.

We’d like to hear your thoughts on how our assessment systems can and should change in an age of AI. Or would you like to contribute to an upcoming article? You can contact us at policy@aqa.org.uk.