The waters have yet to settle, but Clare Hodgson, HfL Assessment Adviser, reflects on what has gone, what is worth salvaging, and what we do know about the new Teacher Assessment Frameworks for writing.
Another new school year, another assessment framework – but just how different is it? Kirsten Snook succinctly maps where things have gone, stayed the same or been subtly changed, following the government’s consultation with schools.
Remember when the former Education Secretary, Michael Gove, declared that we had “had enough of experts”? A dispiriting sentiment for someone who had previously been such a keen advocate for the accumulation of knowledge and for learning from best practice.
But have we really now had enough of academic experts and well-researched ideas, favouring instead the chaotic noise of social media? Or is there still an appetite for tapping into the collective expertise of those who have dedicated their careers to studying and thoroughly researching their field?
Take formative assessment (also known as ‘Assessment for Learning’ or AfL) for example. There has been an enormous wealth of research and exploration in this area over the last few decades. Yet it is still misunderstood by some, and in some corners of the twittersphere it has been claimed that the impact of formative assessment on pupils’ learning has been exaggerated.
Let’s discuss this by first reminding ourselves of some of the landmark research publications.
Black & Wiliam’s Inside the Black Box (1998) is still considered a seminal work, but the research into formative assessment goes back much further – see, for example, Sadler (1989) and Butler (1988), to name but two earlier studies. More recently, John Hattie’s Visible Learning (2008) brought a fresh perspective on which assessment-related activities seem to have the greatest impact on pupils’ learning.
For me, one of the key messages coming out of all of this research is this: formative assessment is not a bolt-on ‘thing’ that teachers choose to do at certain times, nor is it a set of rules or strategies. It is an intrinsic part of teaching and learning, an essential element of pedagogy. My opinion, for what it’s worth, is that where AfL is claimed not to have worked – not to have led to better pupil progress – it has not been properly understood and implemented.
For an interesting narrative of what AfL is and what it isn’t, see Sue Swaffield’s “The Misrepresentation of Assessment for Learning”.
Definitions of AfL, or formative assessment, vary.
The Assessment Reform Group (2002) defined it as “the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there”. This definition led to some misunderstandings, however: some took it to be about ‘measurement’ of where learners are in their learning (e.g. via levels) and about numerical target-setting, rather than the intended meaning, which concerned students’ understanding of the specific knowledge and concepts being explored within a sequence of lessons.
Perhaps a more helpful definition is this one, offered by Dylan Wiliam: “Using evidence of achievement to adapt what happens in classrooms to meet learner needs”.
I have, on the odd occasion, heard people refer to an “AfL lesson”. I’m guessing they mean a lesson that includes some particular assessment activity from which the teacher hopes to glean useful information about what the children have understood. But if that is an “AfL lesson”, what’s happening the rest of the time? A refusal to use first-hand evidence of what children are understanding or not understanding to adapt what happens in the classroom?
Klenowski (2009) states that formative assessment “is part of everyday practice by students, teachers and peers that seeks, reflects upon and responds to information from dialogue, demonstration and observation in ways that enhance ongoing learning”.
So, based on this definition, real AfL should be happening all the time. It is in the moment. And my view, informed by working with groups of teachers as part of action research projects led by Shirley Clarke, is that when teachers really understand what it is – and what it is not – and wholeheartedly embrace it within their day-to-day practice, it has a revolutionary impact on children’s approaches to learning, their engagement, their motivation and, ultimately, their progress. (It has also been seen to produce huge benefits for the teachers themselves, in terms of their motivation, professional learning and job satisfaction.)
Future blogs will explore in more detail some of the aspects that make up effective formative assessment practice.
In a few weeks’ time, we have the huge privilege and pleasure of having Dylan Wiliam come to our training centre in Stevenage, to lead a conference on Embedding Formative Assessment. At the time of writing, there are still a few places remaining if there are more folk out there who still value the ideas, thoughts and practical suggestions of a bona fide ‘expert’.
Please also see the full range of assessment training that we are providing in the forthcoming term, or contact us if you would like to arrange bespoke training or consultancy in your school.
Ben Fuller, Lead Assessment Adviser, Herts for Learning Ltd.
Assessment Reform Group (2002) Assessment for learning: 10 principles, Online: http://www.aaia.org.uk/content/uploads/2010/06/Assessment-for-Learning-10-principles.pdf
Black, P. and Wiliam, D. (1998) Inside the black box: raising standards through classroom assessment, London: School of Education, King’s College
Butler, R. (1988) Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance, British Journal of Educational Psychology, 58, 1-14
Hattie, J. (2008) Visible learning: a synthesis of over 800 meta-analyses relating to achievement, Oxford: Routledge
Klenowski, V. (2009) Assessment for learning revisited: an Asia-Pacific perspective, Assessment in Education: Principles, Policy & Practice 16, no. 3: 263-268
Sadler, R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18, 119-44
Swaffield, S. (2009) The misrepresentation of assessment for learning – and the woeful waste of a wonderful opportunity, Online: https://www.aaia.org.uk/content/uploads/2010/07/The-Misrepresentation-of-Assessment-for-Learning.pdf
A few quick thoughts and observations from me, now that I’ve had a little time to look around the beta version of Analyse School Performance, the replacement for RAISEonline.
No big surprises in the way data is displayed – it combines elements of the graphical displays used in the Compare School Performance site with some of the key tables from the current RAISEonline report. And Key Stage 2 scatterplots are still there (who doesn’t love a scatterplot?) albeit without the option to change the x-axis from overall prior attainment to subject-specific prior attainment (which is a shame, as I rather liked that).
Continue reading “It’s the end of RAISEonline as we know it (and I feel fine)”
With the deadline for registering pupils for the KS2 tests on NCA Tools fast approaching (17th March), there is one particular question that I have been asked quite frequently in recent days: Is it better for me to enter my child with (insert description of a particular set of Special Educational Needs and/or Disability here) for the SATs or to disapply them?
Continue reading “To Sat or Not to Sat?”
Sabrina Wright is a Teaching and Learning Adviser for English at Herts for Learning.
Following on from my last blog, where I unpicked the KS1 exemplification materials and moderation guidance, I felt the urge to spend a little of my time considering the handwriting element of the Interim Teacher Assessment Frameworks (ITAFs) against the National Curriculum (NC) expectations.
Ben Fuller, Lead Assessment Adviser at Herts for Learning
And so to the second part in this series (of undefined length – it might turn into a box-set) of RAISEonline Brain Teasers. If you missed part 1, it’s here. You might also find this discussion about a key difference between the unvalidated and the validated KS2 data useful.
Continue reading “RAISEonline Brain Teasers part 2”
Ben Fuller, Lead Assessment Adviser at Herts for Learning
Yesterday saw the release of the KS2 Performance Tables (based on validated data). You can find the figures for any school in England here.
This means that anyone can look up your school and see inspiring data such as this:
[Chart not shown: a school’s KS2 progress score displayed against broad percentile bands]
To the casual glancer, this chart might appear to suggest that this particular school has achieved progress scores somewhere around the median. But beware, that middle section covers around 60% of schools, so what the image above actually shows is data that could be anywhere between the 21st and 80th percentiles.
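The point above can be made concrete with a small sketch. Note that only the middle band (21st to 80th percentile) comes from the chart discussed here; the other band labels and cutoffs are invented purely for illustration.

```python
# Hypothetical percentile bands; only the wide 21-80 middle band is taken
# from the chart described above, the rest are invented for illustration.
BANDS = [
    ("well below average", 0, 10),
    ("below average", 10, 21),
    ("average", 21, 80),   # roughly 60% of all schools fall in this one band
    ("above average", 80, 90),
    ("well above average", 90, 101),
]

def band_for(percentile_rank):
    """Return the label of the band containing a school's percentile rank."""
    for label, lo, hi in BANDS:
        if lo <= percentile_rank < hi:
            return label
    raise ValueError("percentile rank must be between 0 and 100")

# Schools at the 22nd and 79th percentiles receive exactly the same label,
# even though 57 percentile points separate them:
print(band_for(22))  # average
print(band_for(79))  # average
```

In other words, the single label a casual glancer sees collapses most of the national distribution into one undifferentiated category.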
The greater surprise in exploring the validated data, though, is that an unexpected methodological change has taken place since the unvalidated data appeared in RAISEonline. This change applies to one very specific group of pupils: those who were entered for the tests (reading and maths) but failed to score enough marks to be awarded a scaled score.
In the unvalidated data, these children were excluded from the progress data (though included in attainment). (However, where children were not entered for the test because they were working below its standard, their Pre-Key Stage standard teacher assessment was used instead, and those children were included in the progress measure. This seemed counter-intuitive: it set up a strange incentive for schools to enter children for a test on which they clearly could not achieve a scaled score, since doing so removed them from the progress measure.)
Here’s the change: those children have now been included – provided the teacher assessment is one of the Pre-Key Stage standards (PKG, PKF or PKE). If you had children who took the test and didn’t achieve a scaled score, and the teacher assessment was PKG, PKF or PKE, your progress score will almost certainly have gone down.
If the teacher assessment for such children was HNM (Has Not Met the standard), then those children are still excluded from the measure, so the progress score should be unaffected. (This is a strange anomaly in the system. It would make more sense to me in such cases to award HNM the same score that is used for PKG (79 points) rather than remove the child from the progress measure altogether.)
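To show the direction of the effect, here is an illustrative sketch with invented numbers. In the real measure each pupil's progress score is their KS2 scaled score minus the national average for pupils with the same prior attainment; the values below are made up purely for illustration, except that PKG's 79 points comes from the discussion above.

```python
# Invented pupil-level progress scores for pupils who achieved a scaled score.
scored_pupils = [1.5, -0.3, 2.1, 0.4, -1.2]

# A pupil who sat the test but achieved no scaled score, teacher-assessed at
# PKG. PKG is credited with 79 points; suppose (hypothetically) the average
# score for this pupil's prior-attainment group is 98.
pkg_pupil_progress = 79 - 98  # -19

def cohort_progress(progress_scores):
    """A school's progress score is the mean of its pupils' progress scores."""
    return sum(progress_scores) / len(progress_scores)

# Unvalidated data: the pre-key stage pupil was excluded from the measure.
print(round(cohort_progress(scored_pupils), 2))  # 0.5
# Validated data: the pupil is now included, dragging the score down sharply.
print(round(cohort_progress(scored_pupils + [pkg_pupil_progress]), 2))  # -2.75
```

Even one such pupil in a small cohort can move a school's progress score by a couple of points, which is why the change is worth checking.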
So, if you had children who sat the KS2 tests but did not achieve a scaled score, check your validated progress scores on the Performance Tables site. They may differ from the figures you have already been looking at in RAISEonline and the Inspection Dashboard. (Both of these documents will be updated with the validated data at some point in the spring.)
The intricacies of the KS2 progress model are very well explained in this excellent blog by James Pembroke (aka ‘sigplus’). Thanks, James, for drawing my attention to this methodological change via the medium of Twitter!