Endovascular Simulation: A Systematic Review of the Validity Evidence, Methodology, Quality and Outcomes
Thesis posted on 01.08.2019 by Mark Glynn Davies
ABSTRACT

Background: Vascular surgery as a specialty has undergone both a procedural and a training transformation over the last 10 years with the introduction of endovascular approaches and the birth of integrated residencies. As a result, greater emphasis has been placed on endovascular simulation to accelerate skill acquisition at each level of training. The quality, validity and outcomes of endovascular simulation research have been poorly addressed.

Aim: The aim of this study was to evaluate the validity evidence, research methods, reporting quality and outcomes that support simulation training in endovascular interventions.

Methods: An electronic search for relevant articles published between January 2000 and December 2018 was performed to identify reports and publications on endovascular simulation. The search adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards for systematic reviews. Selected studies were reviewed and evaluated for research quality, research bias, and validity evidence using the frameworks of Messick and of Kane.

Results: Sixty-six reports met the inclusion criteria. The skill sets assessed in these studies were basic in 24%, moderately difficult in 14%, and complex in 52%; the remaining 10% of studies used mixed skill sets. The total number of participants was 1,453, and the average number of participants per study was 22 (SD 11, range 4-77). Participants were drawn from multiple specialties, with vascular surgery being the dominant specialty in the papers encountered. Among the key educational features of simulation, clinical variation, repetitive practice and feedback were the features most commonly employed. The majority of studies used time to complete a task, as derived from the particular simulator used, as the marker of improved performance.
Applying the International Nursing Association for Clinical Simulation and Learning Standards of Best Practice for simulation to the 66 studies, the average number of elements identified (maximum 11) was 5±3 (mean±SD), or under 50% of the elements required for a successful simulation design; 53% of the studies scored at the median or higher. On the MERSQI, 66% of the studies achieved a passing score of 12 or greater and were considered adequate. When the STARD/GRRAS criteria for methodology were applied, the overall methodology was poor, with only 42% of necessary components (20 out of 47) accounted for in the studies. Within the QUADAS-2 criteria, there was bias in the selection of participants in 66% of studies, and application of the index and reference tests also raised concern for bias in up to half the studies. No studies used either Messick's or Kane's framework of validity. When analyzed using the framework of Messick, all failed to capture every source of validity evidence. The average score on the Messick framework using the current grading scale was 6±2 (mean±SD), with a median of 6 (range 2-13; scoring range 0-15). Most studies referenced some validity evidence for content; however, few demonstrated evidence for response process, internal structure, relation to other variables, or consequences. When analyzed using Kane's framework, most studies reported well on Scoring (mean score 2; maximum 3), but all had weak rationales and discussion of Generalization (mean score 1), Extrapolation (mean score 1) and Implications (mean score 1). These weaknesses led to an average score on Kane's framework of 6±2 (mean±SD), with a median of 5 (range 2-10; scoring range 0-12).

Conclusion: The research methods and reporting quality for simulation in vascular surgery are weak and require significant refinement.
When two contemporary frameworks of validity are applied, the current body of work fails to achieve sufficient rigor to be considered valid, and further work must be done to strengthen this area of assessment before its widespread introduction into Graduate Medical Education or professional examinations.