posted on 2018-11-27, 00:00, authored by William James Trapp
Research has demonstrated the value of college admissions testing in the college application and admissions process. The combination of college admission test scores and the applicant's high school grade point average helps the applicant and the college predict whether they will be a good fit for each other. Test stakeholders therefore have high expectations of math question quality. Math question quality depends in part on the use of qualified question writers and on multiple question reviews, including reviews by professional test developers and current math educators. Test takers are the largest group of test stakeholders, yet they are rarely consulted during the writing and reviewing of test questions. Test taker question criticism, however, may allow educators to confirm that they are finding and resolving all test taker concerns and may improve the quality of educator question reviews. In addition, reviewing test taker criticism is a type of vicarious experience that may increase educators' confidence in their question review ability (also referred to as educator self-efficacy). The purpose of this study was to investigate the impact of test taker question criticism on educator self-efficacy and question review quality during the standardized test question review process.
In this study, I first recruited nine high school students as test takers. After giving them test review training, I asked them to review sixteen unused SAT math items, and I then collected and summarized the students' item criticism. Next, I recruited sixteen educators and randomly assigned them to either a control group or an experimental group. I facilitated separate question reviews: the control group of educators reviewed the questions without access to the test taker criticism, and the experimental group reviewed the questions with access to it. Participants self-reported their self-efficacy prior to and immediately after the question review event. During each question review, participants identified question issues and, where possible, attempted to remove those issues by revising the question.
The results revealed both similarities and differences between the two groups of educators. A split-plot ANOVA of the pre- and posttest administrations of the appraisal inventory suggests that educator self-efficacy in both groups increased significantly from pretest to posttest. ANCOVA results suggest that the posttest scores of the experimental group were significantly higher than those of the control group after controlling for pretest educator self-efficacy; that is, the test taker question criticism appeared to help the experimental group increase their educator self-efficacy scores more than the control group. In contrast, inclusion of test taker criticism in the question review process did not have a measurable impact on question review quality. The question review audio transcriptions were coded, and the themes that emerged were closely connected to individual statements and to overall appraisal inventory concepts such as math content, construct irrelevance, cognitive psychology, and consequences of testing. During the question reviews, the test takers did not successfully identify question issues, but both groups of educators identified multiple issues and resolved them by proposing changes to the questions. These findings have implications for test taker involvement in the question review process, as well as for educator question review training and evaluation.
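As an illustration of the two analyses named above, the sketch below shows how a split-plot (mixed) ANOVA and an ANCOVA of this kind might be run in Python with the pingouin library. The data file, column names, and variable labels are hypothetical; this is not the study's actual analysis code, only a minimal example of the general procedure.

# Minimal illustrative sketch (not the study's code) of the analyses described
# above, assuming a hypothetical data file with one row per educator and
# columns: educator_id, group (control/experimental), pretest, posttest.
import pandas as pd
import pingouin as pg

df = pd.read_csv("educator_self_efficacy.csv")  # hypothetical file name

# ANCOVA: compare posttest self-efficacy between groups,
# controlling for pretest self-efficacy.
ancova = pg.ancova(data=df, dv="posttest", covar="pretest", between="group")
print(ancova)

# Split-plot (mixed) ANOVA: time (pre vs. post) as the within-subjects factor,
# group as the between-subjects factor. Reshape to long format first.
long_df = df.melt(id_vars=["educator_id", "group"],
                  value_vars=["pretest", "posttest"],
                  var_name="time", value_name="score")
mixed = pg.mixed_anova(data=long_df, dv="score", within="time",
                       between="group", subject="educator_id")
print(mixed)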
History
Advisor
Yin, Yue
Chair
Yin, Yue
Department
Educational Psychology
Degree Grantor
University of Illinois at Chicago
Degree Level
Doctoral
Committee Member
Smith, Everett
Sheridan, Kathleen
Thomas, Michael
Thorkildsen, Theresa
Superfine, Alison