The Department for Education has lived up to its name and yesterday responded to a report from the Standards and Testing Agency (STA) recommending that the 4-year-old baseline be dropped this year as an accountability measure in English schools, owing to the statistical problems of comparing different starting points. I have written on this elsewhere, but it is pleasing nonetheless. The STA study concludes:
We therefore conclude from this study that there is insufficient comparability between the 3 reception baseline assessments to enable them to be used in the accountability system concurrently.
This does not mean that the government has abandoned the idea entirely; the reprieve is for this year only. Nothing has altered in the minds of the ministerial team about using such vapid tests as a school accountability measure, even if they say they will not. The philosophy is still wrong-headed and the policy misguided. It is just that the statisticians in the STA cannot get the three chosen methods to give a coherent picture. This was always going to be the Achilles' heel of this policy, and those of us involved in the education of 4-year-olds could see easily and clearly that this would happen.
For our part, I have taken a robust (though I hope not draconian) view that these tests would be of little use and would harm and undermine us in the long run. In addition to the posts I have written on this since April last year, and what we learned in June about the impact of baselines on teacher mental health in Holland, my view was made clear to governors in my report to them last November:
We have chosen this term to trial one of the government’s new baseline assessments, called Early Excellence. Last March there was much debate about how and whether we ought to use an electronic baseline for measuring children’s abilities on entry. The government invited providers to create products, with the proviso that until 7,000 schools had chosen them, they would not be guaranteed inclusion in the database of children’s results that would then be used to measure schools’ progress from age 4 to 11. The government’s desire was to be able to judge schools on pupil progress data that was as removed from teachers as possible. Those in favour of using an electronic system used a range of arguments, of which the only one remotely persuasive to me was that if you wanted OFSTED to take any notice of your KS1 improvements (i.e. pupil progress from age 4 to 7) you would have to use the electronic baseline, not one done by actual humans. OFSTED said that they would not take into account the baselines measured by teachers, and therefore the only hope of showing good KS1 progress was to use the electronic baseline: a primary school might otherwise have to rely solely on data from Y3 to Y6, because it was accredited by government (the current debate about the reintroduction of externally marked KS1 tests is interesting in this context).
I am resolutely opposed to using electronic baselines, because I believe that to use them is an argument from fear rather than from what is best for children. It is not compulsory to use one, though the government has weighed the dice strongly against schools who choose not to. Factors to take into account in this argument (and it is a lively one) are:
- How straightforward the electronic baseline is to use: so far, Early Excellence have proven to be fairly good at this. The Foundation team quite liked using it, and it did not interfere unduly with the children’s learning.
- How similar the electronic baseline is to what we already do: Early Excellence seems to be one of the better ones.
- How wide is the range of what is measured: again, Early Excellence was chosen for trial because it had the widest range of data – obviously, the government will ignore much of it and judge schools only on 4-year-olds’ abilities in English and Maths.
- How well it reflects a school’s own assessment philosophy and arrangements. This is as yet untested at Christ the Sower, but I suspect that Early Excellence will be better than some we might have chosen.
- How well the outcomes match those measured by the school using our traditional baseline routines: this is where there is some problem. Early Excellence tells a much more positive story (because it uses a restricted range of data, excluding for instance the Leuven/Louvain scales of wellbeing) than the one we build ourselves. It will be interesting to see whether data from other schools shows the same pattern.
The only reasons I have sanctioned our trial of Early Excellence are that a) it is clearly the best baseline assessment tool of all those on offer (there are some truly dreadful ones) and b) it is possible to pull out of the trial and choose not to use one. I remain deeply sceptical about DfE arrangements for baseline assessments. Both the minister and the department seem to be out of their depth in working with schools, and policy is changed or withdrawn regularly, particularly in the vexed area of assessment. This is a key area in which to reclaim our professionalism and show that those who know the children are always in the best position to assess and monitor their progress as learners. Where baseline assessments have been introduced in Holland, they appear to have caused enormous stress to learners and practitioners.
There has been plenty of comment already on this new announcement. Early Excellence has already e-mailed me with their reaction to the news, which must be a serious financial blow to them, having invested heavily in training and online support. They interpret the announcement as spelling the end of the 4-year-old baseline as an accountability measure (which it never ought to have been) and expect it to be replaced in 2017 with a “school readiness measure” from a single provider, for which they are presumably in pole position. Here is their take on the future:
It also feels important to build momentum around practitioner-led assessment in order to ensure the government’s next move is shaped by best practice and the things that are most significant for young children’s learning.
The CPR Trust has a short comment from Robin Alexander to the effect that if this is “inappropriate and unfair to schools”, maybe other policies will come in for scrutiny. My gut feeling is that it will not lead to any such thing, as the baseline has been abandoned for statistical reasons, rendering the data worthless. There is nothing in the government’s response that makes me think that they will depart from accountability simply because a policy is inappropriate or unfair. Their statement makes this clear:
We remain committed to measuring the progress of pupils through primary school and will continue to look at the best way to assess pupils in the early years.
Actually, so do we, and so do all schools “remain committed to measuring progress”. This is not in doubt. What is in doubt is whether the DfE can do this better than the many excellent practitioners who already, in love and rich professional depth, use their understanding of assessment and pedagogy to build a far richer picture than that offered by any of the existing baseline providers.
The last word goes to Russell Hobby from the NAHT, placing this in the context of a year of chaos where the DfE have barely managed to enter the “emerging” grade for the management of assessment:
This outcome is symptomatic of the general chaos on assessment in the primary phase, with poor planning and a lack of consultation with the people who know what will actually work. We are clear that a piecemeal approach to individual tests will not work. It’s what got us into this mess in the first place. We need a coherent approach to assessment from start to finish across all ages, methods and subjects.