Correcting for Rater Bias in the Presence of Non-Ignorable Missing Ratings
Thesis, posted on 2017-02-17, authored by Andrew Peter Swanlund
This thesis addresses the problem of non-ignorable missing ratings in judge-rated data. A Bayesian bivariate probit ordinal missing data model, implemented with Markov chain Monte Carlo (MCMC), was applied to simulated and real-world data sets to test the extent to which this proposed approach outperformed existing methods for analyzing judge-rated data across a variety of evaluation criteria and data collection scenarios. The MCMC approach was compared to the many-facet Rasch model, generalizability theory (with a linear regression correction for rater effects), and the Rasch rating scale model. The objectives of the research were to test the extent to which the proposed methods could 1) calculate generalizability theory variance components when traditional methods could not be applied, and 2) produce more accurate latent trait measures than existing methods. The study used eight simulated data sets with varying numbers of examinees, raters, and items, and varying distributional properties of examinee ability estimates. In addition, a real-world data set consisting of classroom observations was used to test the applicability of the methods to non-simulated data. The Bayesian bivariate missing data model produced variance component estimates (and D-study coefficients) that were quite accurate for measurement scenarios with only a single, randomly assigned rater. The MCMC approach yielded confidence intervals with better coverage probabilities than traditional approaches, and this finding held whether raters were randomly or non-randomly assigned to examinees. This modeling approach more accurately captures the uncertainty in examinee scores by better accounting for the error due to rater severity and the non-random assignment of raters.
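The generalizability theory comparison above turns on variance components and D-study coefficients. As a rough illustration only (the variance components below are made up, not figures from the thesis), a D-study for a simple person-by-rater design can be sketched as:

```python
# Hypothetical variance components for a person-by-rater (p x r) design.
# These values are illustrative assumptions, not estimates from the thesis.
var_person = 0.50    # true-score variance across examinees
var_rater = 0.10     # rater severity (main effect) variance
var_residual = 0.40  # person-by-rater interaction / residual variance

def g_coefficient(n_raters):
    """Relative (norm-referenced) generalizability coefficient E(rho^2)
    for scores averaged over n_raters randomly assigned raters.
    Only the person-by-rater residual counts as relative error."""
    rel_error = var_residual / n_raters
    return var_person / (var_person + rel_error)

def phi_coefficient(n_raters):
    """Absolute (criterion-referenced) dependability coefficient Phi,
    which additionally counts rater severity variance as error."""
    abs_error = (var_rater + var_residual) / n_raters
    return var_person / (var_person + abs_error)

# D-study: how reliability changes as more raters score each examinee.
for n in (1, 2, 4):
    print(n, round(g_coefficient(n), 3), round(phi_coefficient(n), 3))
```

With a single rater, the dependability coefficient is noticeably lower than the generalizability coefficient because rater severity variance is treated as error; averaging over more raters shrinks both error terms.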