Think of a pile of student essays. Instead of teachers grading every essay one by one with a long rubric, the system just keeps asking: ‘Which of these two is better?’ After lots of these quick comparisons, the software works out a reliable score for every piece of work. It’s like ranking players in a tournament, but for writing and exams.
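The scoring step described above is typically modelled with a Bradley–Terry-style latent-score model: each script gets a score, and the probability that one script wins a comparison depends on the score difference. A minimal sketch, assuming simple gradient ascent on the log-likelihood (the function name and toy data are illustrative, not No More Marking's actual implementation):

```python
import math

def fit_bradley_terry(comparisons, n_items, lr=0.1, epochs=200):
    """Estimate a latent quality score per item from pairwise wins.

    comparisons: list of (winner_idx, loser_idx) tuples.
    Returns a list of scores; higher means judged better more often.
    """
    scores = [0.0] * n_items
    for _ in range(epochs):
        for w, l in comparisons:
            # Bradley-Terry: P(winner beats loser) = sigmoid(s_w - s_l)
            p = 1.0 / (1.0 + math.exp(scores[l] - scores[w]))
            # Gradient ascent on the log-likelihood of the observed outcome
            g = lr * (1.0 - p)
            scores[w] += g
            scores[l] -= g
        # Centre the scores: only score differences are identifiable
        mean = sum(scores) / n_items
        scores = [s - mean for s in scores]
    return scores

# Toy example: essay 0 beats 1, essay 1 beats 2
comps = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2)]
scores = fit_bradley_terry(comps, n_items=3)
assert scores[0] > scores[1] > scores[2]
```

In practice each judgement is quick ("which is better?"), and the model aggregates many noisy judgements from many teachers into a single stable scale.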
Traditional marking of open-ended student work (essays, extended responses) is slow, inconsistent, and expensive. No More Marking uses comparative judgement to produce assessment outcomes that are faster, more reliable, and more scalable across classes, schools, and exam boards.
Specialised assessment methodology (comparative judgement), accumulated assessment datasets, and strong integration into school and exam-board workflows create switching costs and defensibility.
Classical ML (scikit-learn/XGBoost)
Structured SQL
Medium (Integration logic)
Scalability of pairwise comparisons and human-judgement collection for very large cohorts; latency and cost are driven by the number of comparisons and the need for teacher input.
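Since cost scales with the number of comparisons, the teacher-time budget can be estimated up front. A back-of-envelope sketch, where the figures of roughly 10 judgements per script and 30 seconds per judgement are illustrative assumptions (rules of thumb from the comparative-judgement literature, not figures from No More Marking):

```python
def judging_budget(n_scripts, judgements_per_script=10, seconds_per_judgement=30):
    """Rough teacher-time estimate for a comparative-judgement session.

    judgements_per_script and seconds_per_judgement are illustrative
    assumptions, not No More Marking's actual parameters.
    """
    # Each comparison covers two scripts, so total comparisons is halved
    total_comparisons = n_scripts * judgements_per_script // 2
    teacher_hours = total_comparisons * seconds_per_judgement / 3600
    return total_comparisons, teacher_hours

# A 10,000-script cohort needs ~50,000 comparisons (~417 teacher-hours),
# which is why judging is spread across many teachers in parallel.
comparisons, hours = judging_budget(10_000)
```

Because the per-judgement latency is human-bound, scaling comes from parallelising judgements across teachers rather than speeding up any one judgement.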
Early Majority
Focus on comparative judgement for open-ended responses rather than purely rubric-based or fully automated AI marking; positioned as a robust, research-backed alternative to traditional exam-board marking workflows, often used alongside or on top of existing systems.