Jake Waterfield is a young finance professional in London. He ran the 2025 London marathon to raise money for St Bartholomew’s Hospital who saved his life in 2024.
I took my final set of undergraduate exams in June 2019, and what a time it was to sit them; COVID was yet to enter the national lexicon, beers in the college bar were as cheap as £1.80 a pint, and Boris Johnson and Michael Gove were facing off to be the next leader of the Conservatives, and therefore the Prime Minister.
Of course, much has changed since then, both for the country and for the Conservatives, but also in the very way our exams are taken.
As an Economics student, my finals were assessed in three forms. Most of my marks (c. 70 per cent) came from traditional timed pen-and-paper exams: think a large sports hall with hundreds of nervous students packed inside, stern invigilators and strict policies on mobile phone use. The second was a 10,000-word dissertation, completed privately over the course of the full year. Finally, a small portion of my marks (c. 10 per cent) came from 1,000-3,000 word take-home essays, with anything up to two weeks to prepare, write and submit.
As I discuss further below, the beauty of the first method of assessment (timed pen-and-paper exams), as much as I hated them, was its emphasis on understanding and applying knowledge. Fundamentally, it forced me and my cohort to prepare thoroughly and whole-heartedly, knowing we couldn't rely on external aids or prompts on the day.
So, what's changed since then? Two things have caused a radical shift in how students are examined and eroded that previous rigour: COVID, and AI.
The pandemic understandably caused an accelerated move to online assessments: take-home essays, open-book exams and remote submissions became the norm for education institutions up and down the country. Universities justified these changes at the time on grounds of flexibility and accessibility, which was fair in the initial days of COVID. But the changes have proven sticky: more than five years on from the worst of the pandemic, institutions have retained these models, with far fewer pen-and-paper exams than before.
AI essay assistance took off in late 2022 with the public release of ChatGPT, and adoption surged through 2023 and 2024 as students began using it more widely. Students can now generate essays with minimal effort or engagement with the course material; moreover, plagiarism detection tools struggle to identify AI-written work, undermining the integrity of grades and even whole degrees.
A survey from last year found that 77 per cent of UK employers now use skills tests rather than CVs or degrees, reflecting declining confidence in academic qualifications alone. I've written here before about grade inflation, which, along with the proliferation of AI, means employers simply don't believe that the grades on a student's transcript mean anything.
Worryingly, the internet is awash with stories about the over-use of AI in education. Some are quite humorous, such as the advent of 'Claude boys' in schools who 'live by the Claude and die by the Claude' (Claude being an AI model developed by Anthropic). But this example in particular points to a darker outcome for students: a generation completely unable to think for themselves, deferring instead to their AI bot of choice and following its instructions blindly.
It doesn't take any particularly unique insight to see how problematic this could become in the coming years. The question, then, is less whether the current examination system is fit for purpose (it so clearly isn't), and more how we can make it fit for purpose again.
The first point I would stress is that, in my opinion, traditional pen-and-paper exams are vastly superior, and always have been, to ‘seen’ essays. As I mentioned at the beginning of this piece, pen-and-paper exams allow students to develop:
- Cognitive skills: Writing by hand aids memory retention and deeper understanding, forcing students to internalise knowledge rather than outsource thinking. I remember spending endless evenings at the university library making sure I understood every facet of the material, knowing I would need to apply it unprompted in exams
- Assessment integrity: It is (theoretically) impossible to outsource answers to AI in a closed exam hall, though this requires strict invigilation to ensure cheating is detected. It also helps maintain fairness across socioeconomic backgrounds, given not everyone has equal access to AI tools
- Real-world application: Closed-book exams simulate pressure and problem-solving under time constraints, both valuable skills to an employer
By contrast, 'seen' essays encourage only surface-level engagement. At best, students can patch together arguments without truly mastering, or even understanding, all of the content; at worst, AI can now produce the full essay itself, reducing the incentive to study deeply, or even at all.
I should stress that our friends on the other side of the pond are already coming around to this reality. A recent article in the Wall Street Journal noted that sales of the lined exam blue books commonly used in US universities rose by more than 30 per cent at Texas A&M University and nearly 50 per cent at the University of Florida during the last academic year, reflecting a shift back to written exams.
As I'm sure readers will be eager to point out, the UK Government cannot formally dictate exam formats to universities. Instead, the Conservative Party should focus on influencing universities through funding (via the Office for Students, which oversees quality and standards), as well as through policy pressure and public debate.
Through these methods, the Conservatives should look to champion the following in our university examination system:
- Rebalance assessment weighting: Base 80 per cent of the final grade on timed, in-person, pen-and-paper written exams, reducing reliance on the essays and coursework that are most vulnerable to AI misuse
- Invigilated exams: Require students to sit these written exams in controlled environments with strict invigilation, and ban electronic devices, guaranteeing that answers reflect the student's own knowledge and preparation
- Design exams that test knowledge application, not just recall: Use scenario-based questions where students must apply theory to real-world problems, requiring critical thinking and demonstrating contextual understanding
- Integrate oral defences: When dissertations or major projects are used, require a short viva where students must explain and justify their work in person, proving genuine comprehension and depth of engagement
I was going to write a paragraph on how we should be punishing students for proven AI misuse. The issue with this is that, aside from a few monumentally careless errors on the part of the student, proving AI use is very hard. Instead, the suggestions above render AI practically useless for the pen-and-paper exam format (assuming a strict invigilator system), and therefore limit the need to have to check for AI involvement at all.
The Conservatives should take the opposite, pragmatic approach to 'seen' essays and coursework (making up no more than 20 per cent of assessment marks). For these, students should be encouraged to use AI in producing their work. Fundamentally, AI has already become, and will continue to be, a key tool in the workplaces that students enter after university. By explicitly encouraging AI use in the limited coursework component, universities can teach students how to harness these tools responsibly, ensuring graduates are not only grounded in rigorous knowledge but also prepared to thrive in workplaces where AI is ubiquitous.
Although this article has focused primarily on universities, it is worth stressing that the same principles apply to secondary education. The most acute problems with take‑home assessments and AI‑assisted coursework have emerged in the tertiary sector, where pandemic‑era practices have proved particularly sticky, but the logic is no different for schools: rigour, invigilation, and closed‑book written exams remain the fairest and most reliable way to assess genuine understanding.
Crucially, in secondary education the Government also has far greater formal authority over exam structures and standards, meaning ministers are not merely influencers but direct stewards of the system. Ensuring that GCSEs and A‑levels retain their integrity would reinforce the very culture of academic seriousness that universities should then be expected to uphold.
To conclude, AI presents a vast and immediate challenge to the integrity of our examination system.
The worrying signs are already visible, from the rise of students outsourcing their thinking to AI tools, to cultural phenomena like the so‑called “Claude boys” who proudly proclaim their dependence on the technology. If we are serious about restoring rigour, the Conservatives should push for a system where the vast majority of assessments are conducted through pen‑and‑paper exams – only in that environment are students compelled to engage with, retain, and deeply understand the content of their courses.
At the same time, it would be short-sighted to ignore AI altogether. Instead, a limited space of no more than 20 per cent of assessments should be reserved for coursework where students are encouraged to harness AI fully, reflecting its growing role in the workplace.
By doing this, UK universities can both protect the value of their degrees and prepare graduates for the realities of a world where technology is ever-present, but human understanding remains indispensable.
Source: conservativehome.com
