This monograph describes an Item Response Theory analysis that explores and controls for variation in the quality of assessment ratings submitted by teachers. The methodology includes a plausible values imputation approach for deriving population estimates across several language proficiency domains, implemented through a multidimensional item response analysis that combines student responses, rater judgements and student background variables. The target student population was lower-grade primary school students enrolled in the Hong Kong schooling system; the raters were local teachers of English employed within the sampled target schools. The primary objective of this research was to impute plausible values for student proficiencies where no rater data were provided or where the rater data were deemed suspect. A necessary secondary objective was to establish rules for defensibly excluding particular data on the grounds of questionable validity. Student proficiency scores based on suspect data were replaced with imputed scores, and the impact on key population parameter estimates was then explored.
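To make the imputation step concrete for readers less familiar with the technique, the standard plausible values formulation (stated here in general terms, not necessarily the exact model fitted in this study) draws each plausible value from the posterior distribution of latent proficiency given a student's observed responses and a latent regression on background variables:

\[
\theta_n^{(\mathrm{pv})} \sim p(\theta \mid \mathbf{x}_n, \mathbf{y}_n) \;\propto\; p(\mathbf{x}_n \mid \theta)\, \phi\!\left(\theta;\, \boldsymbol{\Gamma}\mathbf{y}_n,\, \boldsymbol{\Sigma}\right),
\]

where \(\mathbf{x}_n\) collects student \(n\)'s item responses and rater judgements, \(\mathbf{y}_n\) the background variables, \(p(\mathbf{x}_n \mid \theta)\) is the multidimensional item response likelihood, and \(\phi\) is a multivariate normal density with latent regression coefficients \(\boldsymbol{\Gamma}\) and residual covariance \(\boldsymbol{\Sigma}\). Repeating the draw several times per student propagates measurement uncertainty into the subsequent population parameter estimates, which is what allows imputed scores to stand in for missing or excluded rater data without understating that uncertainty.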