Using AI to assist marking and feedback
As artificial intelligence (AI) tools become more accessible, many educators are exploring how they can support marking and feedback. But how can you use AI effectively, ethically, and in line with current expectations?
Our new guidance offers practical advice on how to integrate AI into your marking processes. This guide is designed to help you reduce workload, improve efficiency, and maintain high standards in teaching, learning and assessment.
Download our new guidance: Using AI with confidence: A guide to the marking and feedback of summative assessment in centres
What’s inside the guide?
Using AI with Confidence is a practical guide designed to help educators and centre leaders make informed, responsible decisions about using AI in marking and feedback. It sets out clear requirements and examples of best practice for using AI in centre-marked assessments, helping you stay compliant and consistent across your organisation.
It also aligns with current guidance from JCQ, Ofqual and the Department for Education, so you can be confident your approach meets national expectations.
The guide offers practical support for addressing four key challenges:
- Accuracy and quality – how to ensure AI supports human judgement and maintains high standards.
- Privacy – how to protect learner data and comply with UK data protection law.
- Transparency – how to build trust with learners and carers by being open about how AI is used and how decisions are made.
- Knowing your learners – how to preserve the essential connection between teachers and learners and ensure feedback remains meaningful and personalised.
Whether you're just beginning to explore AI or looking to refine your current approach, this guide will help you feel equipped, confident and ready to use AI tools effectively and ethically in your assessment practice.
FAQs
Who is accountable for marks and feedback when AI tools are used?
Assessors remain fully accountable for the marks and feedback they give, even when they use AI tools to assist them. Centres must ensure that assessors have clear accountability for final marks and feedback. The requirement for assessors to review and critically evaluate AI outputs ensures that they retain control and uphold quality.
Will AI replace human judgement in marking?
No. The guidance emphasises that AI must support, not replace, human judgement, in line with Ofqual's and JCQ's current stance. To ensure this supportive role, assessors must review all submissions in full and critically evaluate AI outputs for factual accuracy, bias, and alignment with assessment criteria.
Do centres have to use AI in their marking and feedback?
Absolutely not! We want to support centres to have the best possible marking and feedback practices for the benefit of learners, however they choose to design and implement this activity in their centre.
Why has this guidance been produced?
Input from Ofqual, JCQ, and the DfE makes it clear that centres are allowed to use AI assistance to mark learner work. Until now, however, there hasn't been a clear set of instructions on how to do this effectively and safely. This guidance fills that gap, giving NCFE's customers a clear set of rules covering what they need to do before rolling out AI assistance to mark learner work, and how they can then build on that foundation with best practice ideas for continued improvement.
Join our webinar to learn more
How to use AI with confidence in your marking and feedback
Tuesday 25 November, 3:30–4:30pm
In this live session, our Director of Research and Innovation, Rebecca Conway, and our Innovation Manager, Gray Mytton, will explore how AI can be used with confidence in the marking and feedback of summative assessments.
You’ll have the chance to dive deeper into the guidance, understand how to apply it in your own setting, and ask questions during a live Q&A. Whether you're just getting started or looking to refine your approach, this is a great opportunity to gain practical insight and build your confidence.