As Table 1 demonstrates, Key Stage 2 maths scores alone do quite a good job of predicting end-of-Year 8 English scores (correlation = 0.68). Adding Key Stage 2 reading scores does not budge this (the correlation remains 0.68), while adding end-of-Year 7 English test scores leads to only a small improvement (correlation = 0.72); in fact, adding end-of-Year 7 maths scores does a better job (correlation = 0.80).
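The kind of comparison above — how much the correlation with the outcome improves as extra predictors are added — can be sketched as a multiple correlation (the correlation between the outcome and its best linear prediction). The snippet below is a minimal illustration using synthetic data; the variable names and the data-generating process are assumptions for demonstration only and are not the article's dataset.

```python
import numpy as np

def multiple_correlation(X, y):
    """Correlation between y and its best linear prediction from the columns of X."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    y_hat = X1 @ coef
    return np.corrcoef(y_hat, y)[0, 1]

# Illustrative synthetic data (hypothetical, NOT the article's dataset):
# scores share a common underlying "ability" factor plus subject-specific noise.
rng = np.random.default_rng(0)
n = 1000
ability = rng.normal(size=n)
ks2_maths = ability + rng.normal(scale=0.8, size=n)
y7_maths = ability + rng.normal(scale=0.6, size=n)
y8_english = ability + rng.normal(scale=0.7, size=n)

# Predict Year 8 English from KS2 maths alone, then from KS2 maths plus Year 7 maths
r_single = multiple_correlation(ks2_maths.reshape(-1, 1), y8_english)
r_both = multiple_correlation(np.column_stack([ks2_maths, y7_maths]), y8_english)
```

In-sample, adding a predictor can never lower this correlation, which is why the interesting question in the article is how *much* each extra data point improves it.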
These results have two implications for schools.
The first is that they might reflect on whether they are currently using the data they hold to predict and monitor pupil progress in the optimum way. Such predictions are likely to be better when they do not draw only upon the most recent data point in a specific subject.
The second concerns the amount of testing done, particularly on a termly basis. Sam Sims has suggested that half-termly data drops add a lot of workload without providing useful information. Testing has learning benefits in itself and may help pupils and teachers identify specific learning gaps that need to be addressed. But if the primary goal of these tests is to track pupil progress, a twice-yearly assessment (or possibly even a single annual assessment) is likely to suffice.
About the Author: John Jerrim
John Jerrim is a research associate at FFT Education Datalab and a professor of education and social statistics at UCL Institute of Education. John’s research interests include the economics of education, access to higher education, intergenerational mobility, cross-national comparisons and educational inequalities.