Specialty, Institution and Evaluator Influence on End of Rotation Form Written Comments and Numeric Data
Thesis posted on 01.12.2021 by Lauren M Anderson
Introduction: Assessment is a vital component of residency education. The Clinical Competency Committee (CCC) documents resident progression toward independent practice. Programs must provide quality data for CCC deliberations, and the End of Rotation Form (EORF), which collects data from multiple evaluators in multiple settings, generally serves that goal. Previous studies have shown that EORFs can detect resident deficiencies. Written comments on EORFs not only highlight weaknesses but also add value by capturing non-competency-based factors that are absent from the numeric data. Written comments are therefore a useful component of assessment; in practice, however, they are often irrelevant and overwhelmingly positive. The purpose of this study is to examine the relationship between numeric EORF ratings, written comments, evaluator type, and institution type.

Methods: In this mixed-methods study, one cohort of residents from three programs at two institutions (two anesthesia programs, one internal medicine program) was examined: 38 internal medicine residents completing training between 2016 and 2019 and 20 anesthesia residents in training from 2015 to 2019. Numeric scores were analyzed using descriptive statistics. Written comments were given two scores (relevance and orientation) and coded for themes using deductive content analysis.

Results: A total of 6,530 evaluations were collected from the three programs. The findings revealed a lack of variation in the numeric data: of the 96,244 individual data points, only 793 (0.82%, less than one percent) were rated "no" or "below expected level." Seventy percent of the 5,079 comments were irrelevant. Comments in each specialty focused on specialty-specific essential skills, such as procedural skills versus creating a differential diagnosis; however, work ethic, disposition, and medical knowledge were common themes across all programs. Different types of evaluators (within the program department, external/outside the department, and peers) focused their comments on different behaviors and displayed varying degrees of relevance and orientation.

Conclusion: This study highlights the need to improve the graduate medical education assessment system. Beyond creating better forms, we must train faculty and residents on form composition and use, and on how to write quality comments. We also need to further examine the learning culture and the culture of assessment.