Education · Classical-Supervised · Proven/Commodity

No More Marking – Comparative Judgement for Assessment

Think of a pile of student essays. Instead of teachers grading every essay one by one with a long rubric, the system just keeps asking: ‘Which of these two is better?’ After lots of these quick comparisons, the software works out a reliable score for every piece of work. It’s like ranking players in a tournament, but for writing and exams.
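The "tournament ranking" idea above is usually formalised as a Bradley-Terry model: each script gets a latent quality score, and the probability that script i wins a comparison against script j is a logistic function of the score difference. The sketch below fits such a model by gradient ascent on simulated judgements; it is a minimal illustration of the technique, not No More Marking's actual implementation, and the function names, learning rate, and simulated data are all assumptions.

```python
import math
import random

def fit_bradley_terry(n_items, comparisons, lr=0.01, epochs=500):
    """Fit a Bradley-Terry model: each item gets a score theta,
    and P(i beats j) = sigmoid(theta_i - theta_j).
    `comparisons` is a list of (winner, loser) index pairs."""
    theta = [0.0] * n_items
    for _ in range(epochs):
        grad = [0.0] * n_items
        for w, l in comparisons:
            # predicted probability that the recorded winner wins
            p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        for i in range(n_items):
            theta[i] += lr * grad[i]
        # centre scores: only differences are identifiable
        mean = sum(theta) / n_items
        theta = [t - mean for t in theta]
    return theta

# Simulate 5 essays with known qualities 0..4 and noisy judge decisions.
random.seed(0)
true_quality = [0.0, 1.0, 2.0, 3.0, 4.0]
comparisons = []
for _ in range(300):
    i, j = random.sample(range(5), 2)
    p_i_wins = 1.0 / (1.0 + math.exp(-(true_quality[i] - true_quality[j])))
    comparisons.append((i, j) if random.random() < p_i_wins else (j, i))

scores = fit_bradley_terry(5, comparisons)
ranking = sorted(range(5), key=lambda k: scores[k], reverse=True)
```

With a few hundred quick win/lose decisions, the fitted scores recover the underlying quality ordering, which is the core claim of comparative judgement: many cheap binary choices substitute for one expensive rubric-based grade per script.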

Quality Score: 9.0

Executive Brief

Business Problem Solved

Traditional marking of open-ended student work (essays, extended responses) is slow, inconsistent, and expensive. No More Marking uses comparative judgement to produce more reliable, faster, and scalable assessment outcomes across classes, schools, and exam boards.

Value Drivers

- Cost reduction in marking and moderation of written work
- Faster turnaround time on assessments and benchmarking
- Improved reliability and consistency of marks across cohorts and markers
- Better insights into student writing quality and progression
- Scalability for large cohorts without a linear increase in teacher workload

Strategic Moat

Specialised assessment methodology (comparative judgement), accumulated assessment datasets, and strong integration into school and exam-board workflows create switching costs and defensibility.

Technical Analysis

Model Strategy

Classical-ML (Scikit/XGBoost)

Data Strategy

Structured SQL

Implementation Complexity

Medium (Integration logic)

Scalability Bottleneck

Scalability of pairwise comparisons and human-judgement collection for very large cohorts; latency and cost are driven by the number of comparisons and the need for teacher input.
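The bottleneck is easy to quantify: an exhaustive round-robin needs n(n-1)/2 comparisons, which is infeasible at cohort scale, so practical systems sample a fixed number of judgements per script. The sketch below contrasts the two budgets; the ~10 judgements per script and ~30 seconds per decision are illustrative rule-of-thumb assumptions, not vendor figures.

```python
def full_round_robin(n_scripts):
    """Comparisons needed to judge every pair once: O(n^2)."""
    return n_scripts * (n_scripts - 1) // 2

def judgement_budget(n_scripts, judgements_per_script=10, seconds_per_judgement=30):
    """Estimate sampled-judgement workload: O(n).
    Each comparison involves two scripts, so total comparisons
    = n_scripts * judgements_per_script / 2. Defaults are assumptions."""
    total = n_scripts * judgements_per_script // 2
    hours = total * seconds_per_judgement / 3600
    return total, hours

# For a 1,000-script cohort: round-robin needs 499,500 comparisons,
# while sampling ~10 judgements per script needs only 5,000.
pairs = full_round_robin(1000)
total, hours = judgement_budget(1000)
```

The linear budget is what makes teacher-sourced judging tractable, but teacher time still grows with cohort size, which is the latency and cost driver noted above.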

Market Signal

Adoption Stage

Early Majority

Differentiation Factor

Focuses on comparative judgement for open-ended responses rather than purely rubric-based or fully automated AI marking. Positions itself as a robust, research-backed alternative to traditional exam-board marking workflows, and is often used alongside or on top of existing systems.