Hi all

Nove PM'd me as I have looked at some standardisation issues in Bucks before, and I think you have the same issue we have. I have never been involved in exam standardisation before, but I have a lot of experience with stats.

To explain what I think the issue is, and just in case you are not familiar with standardisation: the exam board websites explain that standardisation basically follows this formula:

Standardised Score = 15 x (Raw Score - Mean Score) / SD + 100

So in essence all the results for the paper are analysed, and a mean and a standard deviation (a measure of the spread of the results) are calculated. For each child's result, the difference between their score and the mean score is calculated: if the mean score is 55/75 and your child got 65, the difference is 65 - 55 = 10. You then divide this by the standard deviation to express their score as a number of standard deviations from the mean. So if the standard deviation is 15 raw-score points, they are 10/15, or about 0.67, standard deviations above the mean. This is then multiplied by 15 (I'm not sure why 15 is chosen), so a score exactly 1 standard deviation above the mean would give 15; in our case, 10/15 x 15 = 10. This is then added to 100 to give the standardised score. So a child who scores 1 standard deviation above the mean gets 100 + 15 = 115 SS, 2 SDs above would get 130, and so on.
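To make the arithmetic above concrete, here is a minimal sketch in Python. The mean of 55/75 and SD of 15 raw marks are just the illustrative numbers from the example, not actual exam-board figures:

```python
def standardised_score(raw, mean, sd):
    """Standardised Score = 15 x (raw - mean) / sd + 100."""
    return 15 * (raw - mean) / sd + 100

# The worked example: mean 55/75, SD 15 raw marks, child scored 65.
print(standardised_score(65, 55, 15))  # 10/15 of an SD above the mean -> 110.0
print(standardised_score(70, 55, 15))  # exactly 1 SD above the mean -> 115.0
print(standardised_score(85, 55, 15))  # 2 SDs above the mean -> 130.0
```

Note that with this linear formula every raw mark is always worth the same number of SS points (here 15/15 = 1 SS per mark), whatever the score.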

However, when we look at actual raw score and SS results (and this is all the information we get from the exam boards), the above analysis is clearly not what is happening. The formula above would give a constant raw-score value for each SS point across the whole range, which is not what we see in practice at all. In Bucks we have had lots of raw score vs SS data, which is in this post:

viewtopic.php?f=12&t=18356 which demonstrates this.

Also, if we look at Nove's DC's results, 64/75 RS = 111 SS and 69/75 RS = 116 SS. This would imply that the standard deviation for this paper is 5 x 3 = 15 RS: going from 64 to 69 (5 RS) gives 111 to 116 (5 SS), so 1 RS = 1 SS, and 1 SD (15 SS) = 15 RS. However, this can't be the case, because above 69 the remaining 6 RS (69 up to 75) produce 24 SS, i.e. 24/15 = 1.6 SDs, making 1 SD = 6/1.6 = 3.75 RS.
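The back-calculation above can be sketched the same way. Given two (raw score, SS) pairs, the implied SD in raw marks follows from the fact that 15 SS always corresponds to 1 SD (the 140 SS for 75/75 in the second call is taken from the 24 SS figure above, i.e. 116 + 24 = 140, the usual maximum SS):

```python
def implied_sd(rs1, ss1, rs2, ss2):
    """Implied SD in raw marks: 15 SS per SD times the RS change per SS change."""
    return 15 * (rs2 - rs1) / (ss2 - ss1)

print(implied_sd(64, 111, 69, 116))  # 15 * 5/5 = 15.0 raw marks per SD
print(implied_sd(69, 116, 75, 140))  # 15 * 6/24 = 3.75 raw marks per SD
```

The two implied SDs disagree (15 vs 3.75 raw marks), which is exactly the non-linearity being described: under the simple linear formula they would have to be equal.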

So for paper 1, the standard deviation implied by the scores (i.e. the raw-score value of each standardised mark) reduces as the score increases.

The reason for this can only be speculated about, as the exam boards seem to keep the analysis that they do very secret. My explanation for this non-linearity is basically that the paper is too easy, or that the children who take the paper are doing a lot better than the children on whom it was trialled. So if you looked at the results for the paper, I suspect you would find that they are bunched towards the upper end, with a steep drop in the distribution curve from around 68 to 75, i.e. the distribution of results is not a lovely normal bell curve but a nasty negatively skewed distribution (see the second diagram down on this link

http://mvpprograms.com/help/mvpstats/di ... ssKurtosis )
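To show why skew would have this effect, here is a purely illustrative sketch. It assumes (and this is only my guess at what the boards actually do) that standardisation is done by mapping each child's percentile rank onto a normal scale with mean 100 and SD 15; the skewed sample of raw scores is entirely made up. Because the results are bunched at the top, each extra raw mark up there leapfrogs a bigger slice of the field, so it is worth more SS:

```python
from statistics import NormalDist

# Made-up raw scores out of 75, bunched towards the top (negatively skewed).
scores = ([40] * 2 + [55] * 5 + [62] * 10 + [66] * 15 + [69] * 20
          + [71] * 25 + [73] * 15 + [74] * 5 + [75] * 3)

def percentile_ss(raw):
    """Map a raw score's (midpoint-adjusted) percentile onto an N(100, 15) scale."""
    below = sum(s < raw for s in scores)
    equal = sum(s == raw for s in scores)
    p = (below + equal / 2) / len(scores)
    return NormalDist(mu=100, sigma=15).inv_cdf(p)

for rs in (66, 69, 71, 73, 74):
    print(rs, round(percentile_ss(rs), 1))
# Each successive raw mark near the top is worth progressively more SS.
```

With this made-up sample, going from 66 to 69 is worth only a couple of SS per raw mark, while going from 73 to 74 is worth several times that, mirroring the 15 RS/SD vs 3.75 RS/SD pattern seen in the real data.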

This means that each extra mark a child gets becomes hugely more significant as the score increases (or the "ruler" stretches out; see this post:

viewtopic.php?f=12&t=18108)

This gives very poor resolution at the top end of the exam and thus makes it a very coarse assessment.

So I think the problem with paper 1 was that it was actually too easy, so loads of children got very high scores, and the pass mark ended up being decided by too few very difficult questions.

I'm not sure whether this helps for an appeal or not; I am by no means an expert on this.

Hope this makes sense. Let me know if there is anything you are not sure about.

Tree