We show that Progress 8 is a reliable measure of school effectiveness and that parents often do not apply to their most effective local school.
School effectiveness measures can play a vital role in informing parental decision-making, promoting high teaching standards, identifying best practice and addressing educational inequality. One challenge with measuring schools’ effectiveness is accounting for the sometimes large differences in their pupil intakes: simply comparing average test scores across schools, for example, is unlikely to provide a fair assessment of how well each school fosters its pupils’ attainment.
To get around this challenge, for most of the past two decades England has used ‘value added models’ as one measure of secondary school effectiveness. Instead of comparing pupils’ raw test scores at the end of secondary school, these models adjust the scores for previous levels of attainment, and often for other pupil characteristics, to measure the progress that pupils in different schools make. The value added measure currently in use is known as ‘Progress 8’. Its most notable feature is that it adjusts only for prior attainment, measured through the National Curriculum tests taken at the end of Key Stage 2 (commonly known as Year 6 SATs).
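To make the mechanics concrete, the sketch below shows how a generic value added measure of this kind can be computed. It is an illustration on synthetic data, not the official Progress 8 methodology; all names and numbers in it are invented.

```python
# Minimal sketch of a value added model (illustrative only; all data synthetic).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pupils: prior attainment (KS2 score), school attended,
# and end-of-secondary outcome (KS4 score).
n_pupils, n_schools = 5_000, 50
ks2 = rng.normal(100, 15, n_pupils)            # prior attainment
school = rng.integers(0, n_schools, n_pupils)  # school attended
school_effect = rng.normal(0, 3, n_schools)    # true effectiveness (unobserved)
ks4 = 20 + 0.8 * ks2 + school_effect[school] + rng.normal(0, 10, n_pupils)

# Step 1: regress KS4 outcomes on prior attainment only, mirroring the
# fact that Progress 8 adjusts solely for KS2 scores.
X = np.column_stack([np.ones(n_pupils), ks2])
beta, *_ = np.linalg.lstsq(X, ks4, rcond=None)
expected = X @ beta  # expected KS4 score given KS2 score

# Step 2: a school's value added is the average gap between its pupils'
# actual and expected outcomes.
residual = ks4 - expected
value_added = np.array([residual[school == s].mean() for s in range(n_schools)])

print("correlation between estimated value added and true school effects:",
      round(float(np.corrcoef(value_added, school_effect)[0, 1]), 2))
```

In this stylised setting, averaging the residuals by school recovers the true school effects because pupils are assigned to schools at random; the open question is whether the same holds with real, non-random intakes.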
Using value added models to measure school effectiveness can be controversial: critics argue that they may fail to adjust properly for differences in pupil characteristics. In our new research, we evaluate how well value added models such as Progress 8 perform in practice by exploiting randomness in the school admissions process and testing whether these models correctly predict the outcomes of pupils affected by that randomness.
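The sketch below illustrates that validation logic under the same synthetic assumptions; it is a simplified illustration of the idea, not our actual estimation strategy. When an oversubscribed school admits applicants by lottery, the gap in later outcomes between lottery winners and losers provides an experimental benchmark, and an unbiased value added model should predict it.

```python
# Sketch of the lottery-based validation idea (illustrative; synthetic data).
import numpy as np

rng = np.random.default_rng(1)

# Two schools, A and B, with true effects the analyst does not observe.
true_effect = {"A": 4.0, "B": -1.0}

def simulate(school, n):
    """Generate synthetic pupils attending a given school."""
    ks2 = rng.normal(100, 15, n)
    ks4 = 20 + 0.8 * ks2 + true_effect[school] + rng.normal(0, 10, n)
    return ks2, ks4

# Step 1: estimate each school's value added from a non-lottery sample,
# exactly as in the sketch above (regress KS4 on KS2, average residuals).
ks2_a, ks4_a = simulate("A", 3_000)
ks2_b, ks4_b = simulate("B", 3_000)
ks2 = np.concatenate([ks2_a, ks2_b])
ks4 = np.concatenate([ks4_a, ks4_b])
X = np.column_stack([np.ones(ks2.size), ks2])
beta, *_ = np.linalg.lstsq(X, ks4, rcond=None)
resid = ks4 - X @ beta
va = {"A": resid[:3_000].mean(), "B": resid[3_000:].mean()}

# Step 2: a lottery sample -- applicants randomly admitted to school A
# (winners) or turned away to school B (losers).
n = 2_000
win = rng.random(n) < 0.5  # random by design
ks2_l = rng.normal(100, 15, n)
ks4_l = (20 + 0.8 * ks2_l
         + np.where(win, true_effect["A"], true_effect["B"])
         + rng.normal(0, 10, n))

# Step 3: if the value added model is unbiased, its predicted winner-loser
# gap should match the experimental gap in actual outcomes.
predicted_gap = va["A"] - va["B"]
actual_gap = ks4_l[win].mean() - ks4_l[~win].mean()
print(f"VAM-predicted gap: {predicted_gap:.2f}, lottery gap: {actual_gap:.2f}")
```

If the model failed to adjust properly for some characteristic that differs across schools’ intakes, the predicted gap would systematically diverge from the lottery gap.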