Non-Response Biases in Surveys of School Children: The Case of the English PISA Samples
John Micklewright, Sylke V. Schnepf, Chris Skinner
revised version published as 'Non-response biases in surveys of schoolchildren: the case of the English Programme for International Student Assessment (PISA) samples' in: Journal of the Royal Statistical Society, Series A (Statistics in Society), 2012, 175 (4), 915-938
We analyse response patterns to an important survey of school children, exploiting rich auxiliary information on respondents' and non-respondents' cognitive ability that is correlated both with response and with the learning achievement that the survey aims to measure. The survey is the Programme for International Student Assessment (PISA), which sets response thresholds in an attempt to control data quality. We analyse the case of England for 2000, when response rates were deemed high enough by the PISA organisers to publish the results, and 2003, when response rates were a little lower and deemed of sufficient concern for the results not to be published. We construct weights that account for the pattern of non-response using two methods: propensity scores and the generalised regression (GREG) estimator. There is clear evidence of biases, but there is no indication that the slightly higher response rates in 2000 were associated with higher quality data. This underlines the danger of using response rate thresholds as a guide to data quality.
Text: See Discussion Paper No. 4789
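To illustrate the first of the two weighting approaches mentioned in the abstract, the sketch below shows a minimal propensity-score adjustment for unit non-response. It is not the authors' code: the data, the single auxiliary variable (a prior test score standing in for the rich auxiliary information on cognitive ability), and the banding into propensity cells are all hypothetical, and the full analysis would use richer covariates (and, for the second method, a GREG estimator).

```python
# Illustrative sketch (not the paper's code): inverse-propensity
# weighting for unit non-response using one auxiliary variable.
# Pupils are grouped into equal-width bands of a prior test score;
# within each band the response propensity is estimated by the
# observed response rate, and respondents get weight 1 / propensity.

from collections import defaultdict


def propensity_weights(pupils, n_bands=3):
    """pupils: list of (score, responded) pairs, score in [0, 100].

    Returns {pupil index: weight} for respondents only.
    """

    def band(score):
        # Map a score to one of n_bands equal-width cells.
        return min(int(score * n_bands / 100.0), n_bands - 1)

    # band -> [number of respondents, total pupils in band]
    counts = defaultdict(lambda: [0, 0])
    for score, responded in pupils:
        b = band(score)
        counts[b][1] += 1
        if responded:
            counts[b][0] += 1

    weights = {}
    for i, (score, responded) in enumerate(pupils):
        if responded:
            r, n = counts[band(score)]
            weights[i] = n / r  # inverse of estimated response propensity
    return weights


# Hypothetical data: low-scoring pupils respond less often, so their
# respondents receive larger weights, pulling survey estimates back
# toward the composition of the sampling frame.
pupils = [(20, True), (25, False), (80, True), (85, True)]
weights = propensity_weights(pupils, n_bands=2)
```

With these made-up data, the respondent in the low-scoring band (where only one of two pupils responded) receives weight 2, while the two high-scoring respondents receive weight 1. The paper's point is that such adjustments can matter even when headline response rates clear a publication threshold.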