Let's ignore age differences for the moment. If you take the scores of a large number of children of the same age on some test, and make a histogram of the number of children getting each score, you'd expect the histogram to approximate a bell-shaped curve, known as the normal distribution.

For different tests, you'd see distributions with the same bell shape, differing only in position (shifted left or right) or in horizontal stretch. To compare them, it's common to rescale the marks so that these curves line up -- that is standardization. A common choice is a mean of 100 and a standard deviation of 15, which fixes the proportion of candidates falling in each band of scores.

For example, about 34% of the population ends up with a score between 100 and 115.
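The band proportions follow directly from the normal curve. A small sketch, using the standard normal CDF (via `math.erf`) and the conventional mean 100 / standard deviation 15:

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

MEAN, SD = 100, 15  # conventional scaling; SD 15 is what makes the 34% figure come out

def band_share(lo, hi):
    """Proportion of the population whose standardized score falls between lo and hi."""
    return phi((hi - MEAN) / SD) - phi((lo - MEAN) / SD)

print(round(band_share(100, 115), 3))  # about 0.341, i.e. 34%
```

Each 15-point band is one standard deviation wide, which is why the proportions are the familiar 34% / 14% / 2% steps of the bell curve.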

Now suppose you do the above for a group of children aged 10 years 1 month, and then separately for a group aged 10 years 9 months. The younger children will tend to score lower, so when their marks are standardized, their curve has to be shifted further to the right (to bring the average up to 100) than the older children's curve does. (I believe the actual process is more complicated, based on fitting an overall relationship between age and score, but the effect is much the same as standardizing each age group separately.)
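Standardizing each age group separately can be sketched as the usual linear rescaling, applied with that group's own raw-mark mean and spread. The group statistics below are entirely hypothetical, chosen only to show the effect:

```python
def standardize(raw, group_mean, group_sd, target_mean=100, target_sd=15):
    """Rescale a raw mark using the child's own age group's mean and spread.
    (Hypothetical sketch; real tests fit a smoother age-score relationship.)"""
    return target_mean + target_sd * (raw - group_mean) / group_sd

# Hypothetical raw-mark statistics for two age groups sitting the same paper:
younger = dict(group_mean=45, group_sd=10)  # 10 years 1 month
older   = dict(group_mean=55, group_sd=10)  # 10 years 9 months

# The same raw mark of 50 earns the younger child a higher standardized score:
print(standardize(50, **younger))  # 107.5
print(standardize(50, **older))   # 92.5
```

This is the "curve moved further to the right" in action: the younger group's lower mean means any given raw mark sits further above its group average, and so maps to a higher standardized score.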

Finally, the scores are truncated at the ends: everyone who would have got a standardized score of greater than 140 is reported as 141 (or 140), and everyone less than 70 is reported as 69 (or 70).
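The truncation is just a clamp on the reported range. Since accounts differ on whether the extremes appear as 69/141 or 70/140, this sketch simply clamps to [70, 140]:

```python
def report(score, floor=70, ceiling=140):
    """Clamp a standardized score to the reported range.
    (Whether extremes show as 69/141 or 70/140 varies; this just clamps.)"""
    return max(floor, min(ceiling, round(score)))

print(report(152.3))  # 140
print(report(63.8))   # 70
print(report(104.6))  # 105
```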

So the relationship between raw marks and standardized scores is obscure. A score of 141 may not represent 100% on the raw test, but it is a very high score, corresponding to roughly the top 0.3% of the population.
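The 0.3% figure can be checked against the normal curve, assuming the mean-100 / SD-15 scaling above:

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Share of the population whose untruncated score would exceed 141,
# assuming mean 100 and standard deviation 15:
tail = 1.0 - phi((141 - 100) / 15)
print(round(100 * tail, 1))  # about 0.3 (percent)
```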