This year's performance
Sally Anne
Could it be that the exam is flawed? I understand NFER have a pool of about 1,000 questions from which they concoct each year's offering. The number of tutors who actually see the exams, i.e. those working in schools, increases their children's chances of success.
Is it not also very strange that each year about the same number of children reach the pass mark, although there is no transparency as to what this magic mark is, only rumours that it goes up every year?
Verbal reasoning is no longer credited as an accurate predictor in any business organisation here in the UK, and I think the only reason Bucks use it is because the MC format, computer marking, etc. make it a very inexpensive way of dividing children.
Sally-Anne wrote: The graph runs from a score of 69 to the peak of 141. From 69 to 140 it resembles a gentle rolling hill, but with the Eiffel Tower at the right-hand side for the 141 scores!
But that's because they just bunch all the scores past 140 as 141, isn't it? They've taken the end of the long sloping tail and squashed it horizontally. In fact, they might as well do that at 121, since distinctions beyond there have no effect on allocations.
I believe that the same picture emerges every year, for whatever reason.
Belinda1 wrote: I thought ANY standardised score, if following the example (Bell shaped) distribution by NFER, then only 2% of the cohort (those sitting THAT test) would have a score above 130...
The usual procedure is to standardise to a mean of 100 and a standard deviation of 15, but Bucks use a mean of 111 and an unknown standard deviation that puts the magic level of 121 where they want it (it seems more like 16 or 17 from Sally-Anne's figures). Anyone seen "Spinal Tap"?
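To make the arithmetic concrete, here is a minimal sketch of that kind of standardisation. The target mean of 111 and SD of 16 come from the speculation above; the cohort raw-score mean and SD are purely made-up illustrative numbers, since the real ones are not published.

```python
def standardise(raw, cohort_mean, cohort_sd, target_mean=111.0, target_sd=16.0):
    """Map a raw score onto the reporting scale via its z-score in the cohort."""
    z = (raw - cohort_mean) / cohort_sd
    return target_mean + target_sd * z

# With these invented cohort figures, a raw score 0.625 SDs above the
# cohort mean lands exactly on the rumoured qualifying score of 121:
print(standardise(raw=62.5, cohort_mean=50.0, cohort_sd=20.0))  # → 121.0
```

The point of the "Spinal Tap" joke survives the formula: shifting the target mean or stretching the SD moves where any given raw score lands on the reported scale.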
WP is exactly right. There will be ever-decreasing numbers of children with scores right out to maybe 150, but all the children with scores above 141 are in roughly the top 3-4% of the test population, so there is little point discriminating between them, and they are all added together into one big peak at 141.
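The "top 3-4%" figure can be checked against a normal distribution with the mean of 111 and an SD of 16 or 17 suggested earlier in the thread (both values are guesses, not published figures):

```python
from math import erfc, sqrt

def normal_tail(x, mean, sd):
    """P(score > x) for a normal distribution with the given mean and SD."""
    z = (x - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

for sd in (16.0, 17.0):
    print(f"SD {sd}: {normal_tail(141, 111, sd):.1%} of scores above 141")
```

With either SD, the tail beyond 141 comes out at roughly 3-4% of the cohort, consistent with the claim that the big bar at 141 lumps the top few percent together.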
Further to my last post, I believe the reason that the same number of children achieve the pass mark each year is that the centre of the bell curve is shifted sideways slightly in order to DEFINE the pass mark at this level. Hence the difficulty in comparing actual scores for children in different years.
The same raw score might pass in one year but not in another. However, the variation is likely to be small, as the average standard of the test population would be expected to be very similar year on year.
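A quick sketch of why the same raw score can fall either side of the pass mark in different years, using the same invented standardisation as above (cohort means, SDs, and the 121 pass mark are all assumptions from this thread, not published figures):

```python
def standardise(raw, cohort_mean, cohort_sd, target_mean=111.0, target_sd=16.0):
    """Map a raw score onto the reporting scale via its z-score in the cohort."""
    return target_mean + target_sd * (raw - cohort_mean) / cohort_sd

PASS_MARK = 121  # the rumoured qualifying score

raw = 63  # the same raw score in two different years
year_a = standardise(raw, cohort_mean=50.0, cohort_sd=20.0)  # weaker cohort
year_b = standardise(raw, cohort_mean=53.0, cohort_sd=20.0)  # stronger cohort
print(f"Year A: {year_a:.1f} (pass: {year_a >= PASS_MARK})")
print(f"Year B: {year_b:.1f} (pass: {year_b >= PASS_MARK})")
```

A slightly stronger cohort raises the cohort mean, so an identical raw score maps to a lower standardised score and can drop below the pass mark, though, as noted above, the year-on-year shift should be small.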