
Herts for Learning

Blogs for Learning


To Sat or Not to Sat?

With the deadline for registering pupils for the KS2 tests on NCA Tools fast approaching (17th March), there is one particular question that I have been asked quite frequently in recent days: Is it better for me to enter my child with (insert description of a particular set of Special Educational Needs and/or Disability here) for the SATs or to disapply them?

The first part of my answer is to explain that there is no such thing as *disapplying a child from the tests. Never has been. (There is disapplication from the National Curriculum – but that is quite different.) What I find people generally mean when they use the ‘d-word’ is to register a child as “working below the standard of the test” (code B in the pupil registration procedure). NB this distinction is more than mere semantics. ‘Disapplied’ could lead one to assume that such a child was discounted from the published data, which is not the case.

Having established that what the questioner really means is “Should I indicate that they are working below the standard of the test, or should we put them in and see what happens? How will it affect my data?”, the next part of the answer is to refer to the statutory guidance, the Assessment & Reporting Arrangements (ARA) document, which states in Section 5.1:

“if pupils are considered to be able to answer the easiest questions, they should be entered for the test. These pupils may not achieve a scaled score of 100, the ‘expected standard’, but should still take the test.”

Furthermore, it states:

“Pupils shouldn’t take the tests if they:

  • have not completed the KS2 programme of study, or
  • are working below the overall standard of the KS2 tests, or
  • are unable to participate even when using suitable access arrangements”

So, regardless of the question of the impact on school data, the statutory position is that if a child is able to answer even just a few questions on the test, they should be entered. Code B is only appropriate where a child cannot access any of the test at all.

There are a couple of other codes that could be used in certain circumstances:

J = Just arrived in the country (and therefore we have not yet been able to establish whether they are working at the standard or not)

U = Unable to access the test, although the child is working at the academic standard of the test (e.g. a sensory impairment or physical disability prevents the child from being able to access the test)

This is all explained in greater detail in section 5.2 of the ARA.

Nonetheless, having explained that there is a statutory requirement to enter a child into the tests if they can access them, the question still remains – how will it affect my data?

The first point to make here is that, whilst we know exactly how attainment and progress were worked out in 2016, there is no guarantee that the methodology won’t be tweaked for 2017. (Confirmation expected in April.)

But let’s assume the system remains the same as last year. By entering the child into the test, they will either

  • achieve a scaled score of 80 or above, or
  • (if they fail to score more than a very few marks) achieve no scaled score.

Assuming that you believe the child to be working above the Pre-Key Stage Standards (and if they weren’t, they probably shouldn’t be sitting the test in the first place) but below the Expected Standard, your teacher assessment would be HNM (“Has Not Met”). If this child then failed to achieve a scaled score of 80, in the 2016 system they were not included at all in the school’s progress figure (but were included in the attainment data). Even if the DfE adjusts the methodology in 2017 and decides to award an arbitrary scaled score of, say, 79 in this circumstance (which would make more sense in my opinion), you still do not stand to lose by entering the child into the test.
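For readers who like the logic spelled out, the 2016 treatment just described can be sketched in a few lines. This is a simplification for illustration only, not the DfE’s actual calculation, and the function name is mine:

```python
# Simplified sketch of the 2016 treatment described above, for a child
# teacher-assessed as HNM (above Pre-Key Stage but below Expected Standard).

def data_outcome_2016(scaled_score):
    """scaled_score is an int (80-120) if one was awarded, or None if the
    child scored too few marks. Returns how the pupil counts in school data."""
    if scaled_score is not None:
        # Achieved a scaled score: counts in both measures
        return {"attainment": "included", "progress": "included"}
    # No scaled score with a TA of HNM: counted in attainment,
    # dropped from the progress measure entirely
    return {"attainment": "included", "progress": "excluded"}

print(data_outcome_2016(82))
print(data_outcome_2016(None))
```

Either branch leaves the school no worse off for having entered the child – which is the “nothing to lose” point.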

So – the answer to the question is the same, regardless of whether you base your approach on statutory guidance or on what produces the best data outcome: if the child can access at least some of the test, then they should take it.

Of course there are situations where it is not appropriate for a child to take the test because it is quite clear to the educational professionals that the child is working at a lower standard. In such circumstances, of course the child should not sit the test, and one would expect their teacher assessment to be based upon the Pre-Key Stage Standards (not ‘HNM’). Provided this is all done correctly, there is again no incentive, data-wise, to act in a way which is contrary to statutory guidance. If you did put such a child into the test, knowing that they would not achieve a scaled score, then their Pre-Key Stage Standard assessment would be used in the calculation of a progress measure – just as it would if they had not been entered (code B).

There is further useful guidance on pupil registration here.

Note also that if circumstances change between pupil registration (in March) and taking the test (in May), you will be able to amend the test attendance register accordingly (for example, if you had entered code B in March but by May have come to the conclusion that the child should take the test).

*One further point about “disapplication” – the other situation in which I hear this word used (still wrongly) is regarding the DfE Data Checking process, whereby schools can apply to remove certain pupils from the published (validated) dataset. This process takes place in the September after the summer term in which the SATs took place. The main scenario in which a school can apply to remove a child from its data is if the child arrived from overseas during the last two years, from a non-English-speaking country, and does not have English as their main spoken language. However, this process is entirely separate from the issue of whether or not the pupils took the tests. And it’s still not called disapplication.

Ben Fuller is the Lead Assessment Adviser at Herts for Learning

Book now for the Embedding Formative Assessment One-Day Conference with Dylan Wiliam

Expectations for handwriting: you’re write to be confused!

Sabrina Wright is a Teaching and Learning Adviser for English at Herts for Learning.

Following on from my last blog, where I unpicked the KS1 exemplification materials and moderation guidance, I felt the urge to spend a little of my time considering the handwriting element of the Interim Teacher Assessment Frameworks (ITAFs) against the National Curriculum (NC) expectations.

Continue reading “Expectations for handwriting: you’re write to be confused!”

RAISEonline Brain Teasers part 2

Ben Fuller, Lead Assessment Adviser at Herts for Learning

[Image: RAISEonline prior attainment pupil groups table]

And so to the second part in this series (of undefined length – might turn into a box-set) of RAISEonline Brain teasers. If you missed part 1, it’s here. You might also find this a useful discussion about a key difference between the unvalidated and the validated KS2 data.

This post features 2 frequently (ish) asked questions, together with answers.

Q1. Why do the numbers of pupils in the 3 prior attainment groups not add up to the total number of pupils in the cohort?

(For example, in the image above, the 3 figures that I have encircled in blue show that this cohort had 10 pupils in the ‘Low’ prior attainment group, 26 in the ‘Middle’ and 12 in the ‘High’. 10+26+12 = 48 pupils. But the total cohort is shown as 58. So 10 pupils are missing.)

A: The missing pupils will be children who have no measure of prior attainment, so they cannot be allocated to a prior attainment group. For example, maybe they were not in the country at the previous key stage. Or perhaps a teacher assessment of ‘A’ was submitted at the previous key stage (which would be the case if the child had been absent for a large amount of time, making it impossible to determine a teacher assessment level).

Q2. Why do the numbers of pupils in the 3 prior attainment groups shown in RAISEonline differ from the numbers shown in Inspection Dashboard?

[Image: Inspection Dashboard prior attainment pupil groups table]

Compare the Inspection Dashboard image above with the RAISEonline image at the top. These 2 images are from the same school, same data-set (KS2 Reading outcomes).

Why does Inspection Dashboard show prior attainment group sizes of 12, 27 and 9 pupils in low, middle and high groups respectively, whereas RAISEonline shows groups of 10, 26 and 12?

A: The difference is because Inspection Dashboard is grouping children according to their prior attainment in that same subject (i.e. in this case, reading) whereas RAISEonline groups the children according to their overall prior attainment from the previous key stage. (If looking at KS2 data, the prior attainment is based on children’s KS1 attainment in reading, writing and maths – but with maths given equal weighting to reading & writing combined).

When looking at prior attainment by individual subject, categorising the pupils is fairly straightforward – Level 3s are ‘High’, Level 2s are ‘Middle’, Level 1s and below are ‘Low’.

When using the ‘Overall’ prior attainment, an Average Point Score of 18 or higher is ‘High’, 12-17.9 is ‘Middle’, below 12 is ‘Low’.

So, in the example shown here, there are 9 children in the Reading High prior attainment group, i.e. they achieved level 3 at KS1. But there are 12 children in the overall High group shown in RAISEonline – meaning 3 extra children whose level in reading was below a level 3, but whose overall APS is at least 18 – most likely because they achieved level 3 in maths.
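To make the two grouping rules concrete, here is a rough sketch in Python. The APS formula and the KS1 point values used in the example (level 2 = 15 points, level 3 = 21) are my assumptions, based on the description above of maths being weighted equally to reading and writing combined:

```python
# Hypothetical sketch of the two prior attainment groupings described above.
# The APS weighting formula and point values are assumptions for illustration.

def subject_group(ks1_level):
    """Inspection Dashboard style: group by prior attainment in one subject."""
    if ks1_level >= 3:
        return "High"
    if ks1_level >= 2:
        return "Middle"
    return "Low"

def overall_aps(reading, writing, maths):
    """Assumed KS1 Average Point Score: maths carries the same weight as
    reading and writing combined. Inputs are KS1 point scores."""
    return ((reading + writing) / 2 + maths) / 2

def overall_group(aps):
    """RAISEonline style: group by overall KS1 APS."""
    if aps >= 18:
        return "High"
    if aps >= 12:
        return "Middle"
    return "Low"

# A child with level 2 in reading (15 points, say) but level 3 in maths
# (21 points) can be 'Middle' for reading yet 'High' overall:
aps = overall_aps(reading=15, writing=15, maths=21)  # = 18.0
print(subject_group(2), overall_group(aps))  # prints: Middle High
```

This is exactly the pattern behind the “3 extra children” in the example above: below level 3 in reading, but level 3 in maths pushes the overall APS to 18 or beyond.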

To really unpick what is going on, look at the pupil level data (Pupil List in RAISEonline – or look in your own internal management information system) to see how children have been categorised.

Arguably, the Inspection Dashboard way of doing things makes more sense – and the Dashboard is the more significant of the two documents when it comes to how Ofsted use the data pre-inspection.

Why are there these differences between the two documents?

Afraid I can’t answer that one…

NB – when looking at the Progress elements in Inspection Dashboard, the pupil groupings are by overall prior attainment group, not by individual subject. All of the above relates to the Attainment data.

That (probably) concludes my blogging for this term. But more brain teasers to follow in the New Year. I hope you can all cope with the antici…


Reflecting on the new ‘higher standards’ at Key Stages 1 and 2

Clare Hodgson, Assessment Adviser at Herts for Learning


Succumbing to the inevitable, I have recently acquired, at great expense, a pair of varifocal glasses. I find that I have to hold my head at a fractionally lower angle, as I walk, in order to see clearly. Even so, I am still struggling to adjust. I’m told it will take time.

In a similar way, I am still struggling to adjust to the ramifications and implications of the first year of KS1 and KS2 results, using the new Assessment frameworks aligned with the new National Curriculum. Continue reading “Reflecting on the new ‘higher standards’ at Key Stages 1 and 2”

KS2 Performance Tables (with an added surprise)

Ben Fuller, Lead Assessment Adviser at Herts for Learning

Yesterday saw the release of the KS2 Performance Tables (based on validated data). You can find the figures for any school in England here.

This means that anyone can look up your school and see inspiring data such as this:

[Image: progress score chart from the Performance Tables]

To the casual glancer, this chart might appear to suggest that this particular school has achieved progress scores somewhere around the median. But beware, that middle section covers around 60% of schools, so what the image above actually shows is data that could be anywhere between the 21st and 80th percentiles.

The greater surprise, though, in exploring the validated data is that an unexpected methodological change has taken place since the unvalidated data appeared in RAISEonline. This change applies to one very specific group of pupils – those who were entered into the tests (reading and maths) and failed to score enough marks to be awarded a scaled score.

In the unvalidated data, these children were excluded from the progress data (but included in attainment). (However, where children were not entered into the test because they were working below the standard of the test, their Pre-Key Stage standard teacher assessment was used instead and those children were included in the progress measure.  This seemed counter-intuitive, in terms of setting up a strange incentive for schools to enter children into a test in which they clearly were unable to achieve.)

Here’s the change: now those children have been included – provided the teacher assessment is one of the Pre Key Stage standards (PKG, PKF or PKE). If you had children who took the test and didn’t achieve a scaled score, and the teacher assessment was either PKG, PKF or PKE, your progress score will almost certainly have gone down.

If the teacher assessment for such children was HNM (Has Not Met the standard) then those children are still excluded from the measure – so the progress score should be unaffected. (This is a strange anomaly in the system. It would make more sense to me in such cases to award the same score to HNM that is used for PKG (79 points) rather than remove such a child from the progress measure altogether.)
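Putting the validated-data rules above into a short sketch (again, an illustration of the logic as described, not the DfE’s code):

```python
# Sketch of which pupils now count in the validated KS2 progress measure,
# per the rules described above. A simplification for illustration.

PRE_KEY_STAGE = {"PKG", "PKF", "PKE"}

def in_progress_measure(scaled_score, teacher_assessment):
    """Return True if the pupil is included in the progress measure."""
    if scaled_score is not None:
        return True   # achieved a scaled score: included as before
    if teacher_assessment in PRE_KEY_STAGE:
        return True   # the change: now included via the Pre-Key Stage TA
    return False      # HNM with no scaled score: still excluded

print(in_progress_measure(80, "HNM"))    # prints: True
print(in_progress_measure(None, "PKG"))  # prints: True  (the new change)
print(in_progress_measure(None, "HNM"))  # prints: False (the anomaly)
```

The final case is the anomaly: an HNM child with no scaled score simply drops out of the measure altogether.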

So, if you had children who sat the KS2 tests but did not achieve a scaled score – check your validated data progress scores on the Performance Tables site. They might be different to the figures you have already been looking at in RAISEonline and Inspection Dashboard. (Both of these documents will be updated to the validated data at some point in the Spring.)

The intricacies of the KS2 progress model are very well explained in this excellent blog by James Pembroke (aka ‘sigplus’). Thanks James for bringing my attention to this methodological change via the medium of Twitter!


RAISEonline Brain Teasers Part 1

Ben Fuller, Lead Assessment Adviser at Herts for Learning

Over the last half-term, I have noticed a bit of a rise in the number of email queries along the lines of “I’m not sure I get page x of the new RAISEonline report – can you help?” or “What is this particular table telling me?”

I thought it might be helpful, with the permission of the enquirers, to share some of these brain teasers, along with my responses, as the chances are many others might have been wondering similar things about their own data (but perhaps were too afraid to ask!) Continue reading “RAISEonline Brain Teasers Part 1”

Just Give them a Grade – Sound Advice from the Minister?

Ben Fuller is Lead Assessment Adviser at Herts for Learning

Yesterday, our Schools Minister Nick Gibb said that teachers could save time and workload by, instead of producing in-depth marking of children’s work, just writing a grade on each piece. We do of course all want to find ways to make marking and feedback less time-consuming and more impactful, but this suggestion of using grades as part of the day-to-day process of formative assessment demonstrates a tragic vacuum of understanding about the purpose of feedback.

Continue reading “Just Give them a Grade – Sound Advice from the Minister?”

Unpicking KS2 Progress Scores ahead of Friday’s RAISEonline release

Ben Fuller is Lead Assessment Adviser at Herts for Learning

This Friday our eager anticipation will be over and the new-look RAISEonline reports, showing the 2016 unvalidated data for Key Stages 1 and 2, will be released. (Interactive reports available from Friday 21st October; Summary reports available from the following Tuesday.) Information has already been provided explaining the new-look tables and charts we are going to see.

[Image: KS2 progress scores table]
Progress in RAISEonline

Continue reading “Unpicking KS2 Progress Scores ahead of Friday’s RAISEonline release”

Primary assessment: reflection and feed-forward

Ben Fuller is Lead Assessment Adviser at Herts for Learning

Welcome to the inaugural blog post from the Herts for Learning Assessment team. The aim of this blog is to periodically bring you important updates, ideas and suggestions in the world of school assessment.

I will start with some brief reflections on 2015/16, which has certainly been an interesting year in statutory assessment, with new approaches to the ways in which pupil performance has been measured and evaluated at the ends of Key Stages 1, 2, 4 and 5, as well as ongoing developments in the debate around Reception baseline assessment.

In this post I will focus on the primary phase, where teachers in Years 2 and 6 this year had to contend with new tougher tests and a new system for teacher assessment, based on the Interim Teacher Assessment Frameworks (‘ITAFs’) – which use what has been referred to as a “secure fit” (rather than “best fit”) system.  (Personally, I prefer to call it a “must have everything” approach, as I think it an unusual use of the word ‘secure’).

Continue reading “Primary assessment: reflection and feed-forward”
