Browsing by Author "Yachimski, Patrick"
Development and initial validation of an instrument for video-based assessment of technical skill in ERCP (Elsevier, 2021)
Authors: Elmunzer, B. Joseph; Walsh, Catharine M.; Guiton, Gretchen; Serrano, Jose; Chak, Amitabh; Edmundowicz, Steven; Kwon, Richard S.; Mullady, Daniel; Papachristou, Georgios I.; Elta, Grace; Baron, Todd H.; Yachimski, Patrick; Fogel, Evan L.; Draganov, Peter V.; Taylor, Jason R.; Scheiman, James; Singh, Vikesh K.; Varadarajulu, Shyam; Willingham, Field F.; Cote, Gregory A.; Cotton, Peter B.; Simon, Violette; Spitzer, Rebecca; Keswani, Rajesh; Wani, Sachin; SVI study group; U.S. Cooperative for Outcomes Research in Endoscopy; Medicine, School of Medicine

Background and aims: The accurate measurement of technical skill in ERCP is essential for endoscopic training, quality assurance, and coaching of this procedure. Hypothesizing that technical skill can be measured by analysis of ERCP videos, we aimed to develop and validate a video-based ERCP skill assessment tool.

Methods: Based on review of procedural videos, the task of ERCP was deconstructed into its basic components by an expert panel that developed an initial version of the Bethesda ERCP Skill Assessment Tool (BESAT). Subsequently, 2 modified Delphi panels and 3 validation exercises were conducted with the goal of iteratively refining the tool. Fully crossed generalizability studies investigated the contributions of assessors, ERCP performance, and technical elements to reliability.

Results: Twenty-nine technical elements were initially generated from task deconstruction. Ultimately, after iterative refinement, the tool comprised 6 technical elements and 11 subelements. The developmental process achieved consistent improvements in the performance characteristics of the tool with every iteration. For the most recent version of the tool, BESAT-v4, the generalizability coefficient (a reliability index) was 0.67.
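A generalizability coefficient of this kind comes from partitioning score variance in a fully crossed design. As an illustrative sketch only, the following estimates variance components and a G coefficient for a simplified persons x raters design via expected mean squares; the actual BESAT analysis also models technical elements, and the function name and data are hypothetical:

```python
import numpy as np

def g_coefficient(scores):
    """Estimate the generalizability coefficient for a fully crossed
    persons x raters design using expected mean squares from a
    random-effects ANOVA. `scores` is an (n_persons, n_raters) array."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)
    # Sums of squares for persons, raters, and total
    ss_p = n_r * ((person_means - grand) ** 2).sum()
    ss_r = n_p * ((rater_means - grand) ** 2).sum()
    ss_tot = ((scores - grand) ** 2).sum()
    ss_pr = ss_tot - ss_p - ss_r  # person x rater interaction + residual
    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))
    # Variance components (negative estimates truncated at zero)
    var_p = max((ms_p - ms_pr) / n_r, 0.0)  # true person variance
    var_pr = ms_pr                           # interaction/residual variance
    # G coefficient for the mean score over n_r raters
    return var_p / (var_p + var_pr / n_r)
```

With perfectly consistent raters (e.g. `np.array([[1., 1.], [2., 2.], [3., 3.]])`), all variance is between persons and the coefficient is 1.0; added rater disagreement pulls it toward 0.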
Most variance in BESAT scores (43.55%) was attributed to differences in endoscopists' skill, indicating that the tool can reliably differentiate between endoscopists based on video analysis.

Conclusions: Video-based assessment of ERCP skill appears to be feasible with a novel instrument that demonstrates favorable validity evidence. Future steps include determining whether the tool can discriminate between endoscopists of varying experience levels and predict important outcomes in clinical practice.

A Prospective Multicenter Study Evaluating Learning Curves and Competence in Endoscopic Ultrasound and Endoscopic Retrograde Cholangiopancreatography Among Advanced Endoscopy Trainees: The Rapid Assessment of Trainee Endoscopy Skills (RATES) Study (Elsevier, 2017)
Authors: Wani, Sachin; Keswani, Rajesh; Hall, Matt; Han, Samuel; Ali, Meer Akbar; Brauer, Brian; Carlin, Linda; Chak, Amitabh; Collins, Dan; Cote, Gregory A.; Diehl, David L.; DiMaio, Christopher J.; Dries, Andrew; El-Hajj, Ihab; Ellert, Swan; Fairley, Kimberley; Faulx, Ashley; Fujii-Lau, Larissa; Gaddam, Srinivas; Gan, Seng-Ian; Gaspar, Jonathan P.; Gautamy, Chitiki; Gordon, Stuart; Harris, Cynthia; Hyder, Sarah; Jones, Ross; Kim, Stephen; Komanduri, Srinadh; Law, Ryan; Lee, Linda; Mounzer, Rawad; Mullady, Daniel; Muthusamy, V. Raman; Olyaee, Mojtaba; Pfau, Patrick; Saligram, Shreyas; Piraka, Cyrus; Rastogi, Amit; Rosenkranz, Laura; Rzouq, Fadi; Saxena, Aditi; Shah, Raj J.; Simon, Violette C.; Small, Aaron; Sreenarasimhaiah, Jayaprakash; Walker, Andrew; Wang, Andrew Y.; Watson, Rabindra R.; Wilson, Robert H.; Yachimski, Patrick; Yang, Dennis; Edmundowicz, Steven; Early, Dayna S.; Department of Medicine, IU School of Medicine

Background and aims: Based on the Next Accreditation System, trainee assessment should occur on a continuous basis with individualized feedback.
We aimed to validate endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP) learning curves among advanced endoscopy trainees (AETs) using a large national sample of training programs, and to develop a centralized database that allows assessment of performance in relation to peers.

Methods: ASGE-recognized training programs were invited to participate, and AETs were graded on ERCP and EUS examinations using a validated competency assessment tool that assesses technical and cognitive competence in a continuous fashion. Grading for each skill used a 4-point scoring system, and a comprehensive data collection and reporting system was built to create learning curves using cumulative sum (CUSUM) analysis. Individual results and benchmarking to peers were shared with AETs and trainers quarterly.

Results: Of the 62 programs invited, 20 programs and 22 AETs participated in this study. At the end of training, the median number of EUS and ERCP procedures performed per AET was 300 (range, 155-650) and 350 (range, 125-500), respectively. Overall, 3786 examinations were graded (EUS: 1137; ERCP: 2280 biliary, 369 pancreatic). Learning curves for individual endpoints and for overall technical and cognitive aspects of EUS and ERCP demonstrated substantial variability and were successfully shared with all programs. The majority of trainees achieved overall technical (EUS: 82%; ERCP: 60%) and cognitive (EUS: 76%; ERCP: 100%) competence at the conclusion of training.

Conclusions: These results demonstrate the feasibility of establishing a centralized database to report individualized learning curves and confirm the substantial variability in time to achieve competence among AETs in EUS and ERCP.
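The cumulative sum analysis behind such learning curves can be sketched in its simplest form. This is an illustrative assumption, not the study's actual implementation (which grades on a 4-point scale and applies decision boundaries derived from sequential testing): it treats each attempt as a binary success/failure against a hypothetical acceptable failure rate `p0`:

```python
def cusum_curve(outcomes, p0=0.10):
    """Simple cumulative sum (CUSUM) learning curve.

    `outcomes` is a sequence of 0 (success) / 1 (failure) for consecutive
    attempts; `p0` is a hypothetical acceptable failure rate. Each failure
    moves the curve up by (1 - p0) and each success moves it down by p0,
    so a trainee performing at exactly the acceptable rate drifts around
    zero, while a sustained downward slope signals better-than-acceptable
    performance."""
    curve, s = [], 0.0
    for failed in outcomes:
        s += failed - p0  # +0.9 on failure, -0.1 on success when p0 = 0.10
        curve.append(round(s, 10))
    return curve
```

In practice, competence would be declared when the curve crosses a preset lower boundary; those boundary lines are omitted from this sketch.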