This is like having two different “weather apps” for grades: both look at a student’s past behavior and background (attendance, homework, test scores, etc.) and try to forecast future performance. The paper compares which forecasting engine, XGBoost or Random Forest, does a better job of predicting students’ academic performance.
Universities and schools struggle to identify which students are likely to underperform or drop out early enough to intervene. Manually spotting at‑risk students from many variables (grades, attendance, demographics) is slow and often inaccurate. This work evaluates two machine-learning methods to more accurately predict student performance from existing data, enabling earlier, data-driven interventions.
Classical-ML (Scikit/XGBoost)
Feature Store
Medium (Integration logic)
Depends on feature-engineering quality and coverage of labeled historical data (data sparsity, bias, and drift across cohorts).
Early Majority
Focuses specifically on a head-to-head comparison of XGBoost and Random Forest for predicting academic performance in education. Rather than proposing a novel algorithm, it informs which off-the-shelf classical ML method may be better suited for student-risk modeling.
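A head-to-head comparison of this kind can be sketched in a few lines of scikit-learn. This is an illustrative example, not the paper's code: the features are synthetic stand-ins for attendance, homework, and test scores, and `GradientBoostingClassifier` stands in for XGBoost so the sketch needs only scikit-learn (swap in `xgboost.XGBClassifier` if that library is available).

```python
# Hypothetical sketch: compare a random forest against a gradient-boosted
# model on synthetic "student" features, using the same train/test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for student features (attendance, homework, scores, ...);
# y = 1 marks an "at-risk" / underperforming student.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "GradientBoosting": GradientBoostingClassifier(random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {scores[name]:.3f}")
```

In a real student-risk setting, accuracy alone can mislead when at-risk students are a minority class, so metrics such as recall or AUC on the at-risk label are worth reporting alongside it.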