Browsing by Author "Goh, Alvin C."
Showing items 1-5 of 5
Item: An Assessment Tool to Provide Targeted Feedback to Robotic Surgical Trainees: Development and Validation of the End-to-End Assessment of Suturing Expertise (EASE) (American Urological Association, 2022-11)
Authors: Haque, Taseen F.; Hui, Alvin; You, Jonathan; Ma, Runzhuo; Nguyen, Jessica H.; Lei, Xiaomeng; Cen, Steven; Aron, Monish; Collins, Justin W.; Djaladat, Hooman; Ghazi, Ahmed; Yates, Kenneth A.; Abreu, Andre L.; Daneshmand, Siamak; Desai, Mihir M.; Goh, Alvin C.; Hu, Jim C.; Lebastchi, Amir H.; Lendvay, Thomas S.; Porter, James; Schuckman, Anne K.; Sotelo, Rene; Sundaram, Chandru P.; Gill, Inderbir S.; Hung, Andrew J.
Department: Urology, School of Medicine

Purpose: To create a suturing skills assessment tool that comprehensively defines criteria around the relevant sub-skills of suturing and to confirm its validity.

Materials and Methods: Five expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into an exhaustive list of technical skill domains and sub-skill descriptions. Using the Delphi methodology, each CTA element was systematically reviewed by a multi-institutional panel of 16 surgical educators and implemented in the final product when the content validity index (CVI) reached ≥0.80. In the subsequent validation phase, 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE; 10 VUA were also scored using the Robotic Anastomosis Competency Evaluation (RACE), a previously validated but simplified suturing assessment tool. Inter-rater reliability was measured with intra-class correlation (ICC) for normally distributed values and prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 cases) EASE scores from the non-training cases were compared using a generalized linear mixed model.

Results: After two rounds of the Delphi process, panelists agreed on 7 domains, 18 sub-skills, and 57 detailed sub-skill descriptions with CVI ≥ 0.80. Inter-rater reliability was moderately high (ICC median 0.69, range 0.51-0.97; PABAK median 0.77, range 0.62-0.97). Multiple EASE sub-skill scores were able to distinguish surgeon experience. The Spearman's rho correlation between overall EASE and RACE scores was 0.635 (p = 0.003).

Conclusions: Through a rigorous CTA and Delphi process, we have developed EASE, whose suturing sub-skills can distinguish surgeon experience while maintaining rater reliability.
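The Delphi consensus threshold (CVI ≥ 0.80) and the agreement statistics reported in the EASE abstract are standard, easily reproduced calculations. As a minimal sketch (not the authors' code), the example below computes an item-level content validity index from hypothetical panelist relevance ratings and a PABAK value from two raters' hypothetical binary sub-skill scores; all data and names are invented for illustration.

```python
# Minimal sketch (not from the paper): item-level CVI and PABAK on invented data.
import numpy as np

def content_validity_index(ratings, relevant_min=3):
    """I-CVI: share of panelists rating an item 'relevant'
    (>= relevant_min on a 1-4 relevance scale)."""
    ratings = np.asarray(ratings)
    return np.mean(ratings >= relevant_min)

def pabak(rater_a, rater_b):
    """Prevalence-adjusted bias-adjusted kappa for two raters'
    binary scores: PABAK = 2 * observed agreement - 1."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed_agreement = np.mean(a == b)
    return 2 * observed_agreement - 1

# Hypothetical example: 16 panelists rate one CTA element on a 1-4 relevance scale.
panel_ratings = [4, 4, 3, 4, 3, 4, 4, 2, 4, 3, 4, 4, 3, 4, 4, 3]
print(f"I-CVI = {content_validity_index(panel_ratings):.2f}")  # retain element if >= 0.80

# Hypothetical example: two raters score 20 video clips pass/fail.
rater_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
rater_2 = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1]
print(f"PABAK = {pabak(rater_1, rater_2):.2f}")
```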
Item: Demonstrating the effectiveness of the fundamentals of robotic surgery (FRS) curriculum on the RobotiX Mentor Virtual Reality Simulation Platform (Springer, 2021-04)
Authors: Martin, John Rhodes; Stefanidis, Dimitrios; Dorin, Ryan P.; Goh, Alvin C.; Satava, Richard M.; Levy, Jeffrey S.
Department: Surgery, School of Medicine

Fundamentals of robotic surgery (FRS) is a proficiency-based progression curriculum developed by robotic surgery experts from multiple specialty areas to address gaps in existing robotic surgery training curricula. The RobotiX Mentor is a virtual reality training platform for robotic surgery. Our aims were to determine whether robotic surgery novices would demonstrate improved technical skills after completing FRS training on the RobotiX Mentor and to compare the effectiveness of FRS across training platforms. An observational, pre-post design, multi-institutional, rater-blinded trial was conducted at two American College of Surgeons Accredited Education Institutes-certified simulation centers. Robotic surgery novices (n = 20) were enrolled and trained to expert-derived benchmarks using FRS on the RobotiX Mentor. Participants' skill was assessed on an avian tissue model before (pre-test) and after (post-test) training. Tests were video recorded and graded by blinded raters using the Global Evaluative Assessment of Robotic Skills (GEARS) and a 32-criteria psychomotor checklist. Post hoc comparisons were conducted against previously published comparator groups. On paired-samples t tests, participants demonstrated improved performance across all GEARS domains (p < 0.001 to p = 0.01) and for time (p < 0.001) and errors (p = 0.003) as measured by the psychomotor checklist. By ANOVA, the improvement in novices' skill after FRS training on the RobotiX Mentor was not inferior to the improvement reported after FRS training on previously published platforms. Completion of FRS on the RobotiX Mentor resulted in improved robotic surgery skills among novices, proving effectiveness of training. These data provide additional validity evidence for FRS and support use of the RobotiX Mentor for robotic surgery skill acquisition.

Item: Development and validation of an objective scoring tool to evaluate surgical dissection: Dissection Assessment for Robotic Technique (DART) (American Urological Association Education and Research, Inc., 2021)
Authors: Vanstrum, Erik B.; Ma, Runzhuo; Maya-Silva, Jacqueline; Sanford, Daniel; Nguyen, Jessica H.; Lei, Xiaomeng; Chevinksy, Michael; Ghoreifi, Alireza; Han, Jullet; Polotti, Charles F.; Powers, Ryan; Yip, Wesley; Zhang, Michael; Aron, Monish; Collins, Justin; Daneshmand, Siamak; Davis, John W.; Desai, Mihir M.; Gerjy, Roger; Goh, Alvin C.; Kimmig, Rainer; Lendvay, Thomas S.; Porter, James; Sotelo, Rene; Sundaram, Chandru P.; Cen, Steven; Gill, Inderbir S.; Hung, Andrew J.
Department: Urology, School of Medicine

Purpose: Evaluation of surgical competency has important implications for training new surgeons, accreditation, and improving patient outcomes. A method to specifically evaluate dissection performance does not yet exist. This project aimed to design a tool to assess surgical dissection quality.

Methods: The Delphi method was used to validate the structure and content of the dissection evaluation. A multi-institutional and multi-disciplinary panel of 14 expert surgeons systematically evaluated each element of the dissection tool. Ten blinded reviewers evaluated 46 de-identified videos of pelvic lymph node and seminal vesicle dissections performed during robot-assisted radical prostatectomy. Inter-rater variability was calculated using prevalence-adjusted and bias-adjusted kappa (PABAK). The area under the receiver operating characteristic curve (AUC) was used to assess how well overall DART scores, as well as individual domains, discriminated trainees (≤100 robotic cases) from experts (>100 cases).

Results: Four rounds of the Delphi method achieved language and content validity in 27/28 elements. Use of a 3- or 5-point scale remained contested; thus, both scales were evaluated during validation. The 3-point scale showed improved kappa for each domain. Experts demonstrated significantly greater total scores on both scales (3-point, p < 0.001; 5-point, p < 0.001). The ability to distinguish experience was equivalent for total score on both scales (3-point AUC = 0.92, CI 0.82-1.00; 5-point AUC = 0.92, CI 0.83-1.00).

Conclusions: We present the development and validation of the Dissection Assessment for Robotic Technique (DART), an objective and reproducible 3-point surgical assessment to evaluate tissue dissection. DART can effectively differentiate levels of surgeon experience and can be used in multiple surgical steps.
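The DART validation above uses the area under the ROC curve to quantify how well a total assessment score separates trainees from experts. A minimal sketch of that kind of analysis, using invented DART-style total scores and expert/trainee labels (none of these numbers come from the paper), might look like this:

```python
# Minimal sketch (invented data): does a total assessment score
# discriminate experts (>100 robotic cases) from trainees (<=100)?
from sklearn.metrics import roc_auc_score

# Hypothetical total scores for 12 scored dissection videos.
total_scores = [14, 11, 17, 9, 16, 12, 18, 10, 15, 8, 16, 13]
# 1 = expert surgeon, 0 = trainee, matched to the videos above.
is_expert = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1]

auc = roc_auc_score(is_expert, total_scores)
print(f"AUC = {auc:.2f}")  # 1.0 = perfect separation, 0.5 = chance level
```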
Item: Proving the Effectiveness of the Fundamentals of Robotic Surgery (FRS) Skills Curriculum: A Single-blinded, Multispecialty, Multi-institutional Randomized Control Trial (Lippincott, 2020-08)
Authors: Satava, Richard M.; Stefanidis, Dimitrios; Levy, Jeffrey S.; Smith, Roger; Martin, John R.; Monfared, Sara; Timsina, Lava R.; Wardkes Darzi, Ara; Moglia, Andrea; Brand, Timothy C.; Dorin, Ryan P.; Dumon, Kristoffel R.; Francone, Todd D.; Georgiou, Evangelos; Goh, Alvin C.; Marcet, Jorge E.; Martino, Martin A.; Sudan, Ranjan; Vale, Justin; Gallagher, Anthony G.
Department: Surgery, School of Medicine

Objective: To demonstrate the noninferiority of the fundamentals of robotic surgery (FRS) skills curriculum compared with current training paradigms and to identify an ideal training platform.

Summary Background Data: There is currently no validated, uniformly accepted curriculum for training in robotic surgery skills.

Methods: Single-blinded, parallel-group randomized trial at 12 international American College of Surgeons (ACS) Accredited Education Institutes (AEI). Thirty-three robotic surgery experts and 123 inexperienced surgical trainees were enrolled between April 2015 and November 2016. Benchmarks (proficiency levels) on the 7 FRS Dome tasks were established based on expert performance. Participants were then randomly assigned to 4 training groups: Dome (n = 29), dV-Trainer (n = 30), and DVSS (n = 32), which trained to the benchmarks, and control (n = 32), which trained using locally available robotic skills curricula. The primary outcome was participant performance after training, based on task errors and duration on 5 basic robotic tasks (knot tying, continuous suturing, cutting, dissection, and vessel coagulation) using an avian tissue model (transfer test). Secondary outcomes included cognitive test scores, GEARS ratings, and robot familiarity checklist scores.

Results: All groups demonstrated significant performance improvement after skills training (P < 0.01). Participating residents and fellows performed tasks faster (Dome and DVSS groups) and with fewer errors than controls (Dome group; P < 0.01). Inter-rater reliability was high for the checklist scores (0.82-0.97) but moderate for GEARS ratings (0.40-0.67).

Conclusions: We provide evidence of effectiveness for the FRS curriculum by demonstrating better performance of those trained following FRS compared with controls on a transfer test. We therefore argue for its implementation across training programs before surgeons apply these skills clinically.

Item: Response to "Proving the Effectiveness of the Fundamentals of Robotic Surgery (FRS) Skills Curriculum: A Single-blinded, Multispecialty, Multi-institutional Randomized Control Trial": "Not only surgeon's manual skills..." (Wolters Kluwer, 2020-12)
Authors: Satava, Richard M.; Stefanidis, Dimitrios; Levy, Jeffrey S.; Smith, Roger; Martin, John R.; Monfared, Sara; Timsina, Lava R.; Wardkes Darzi, Ara; Moglia, Andrea; Brand, Timothy C.; Dorin, Ryan P.; Dumon, Kristoffel R.; Francone, Todd D.; Georgiou, Evangelos; Goh, Alvin C.; Marcet, Jorge E.; Martino, Martin A.; Sudan, Ranjan; Vale, Justin; Gallagher, Anthony G.
Department: Surgery, School of Medicine
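The pre/post improvement reported in the two FRS studies above is a paired comparison within participants. A minimal sketch of that style of analysis, using invented pre- and post-training error counts rather than any trial data, could look like the following:

```python
# Minimal sketch (invented data): paired pre/post comparison of
# transfer-test error counts after simulator training.
from scipy import stats

# Hypothetical error counts for 10 trainees before and after training.
errors_pre = [12, 9, 15, 11, 10, 14, 13, 8, 16, 12]
errors_post = [7, 6, 9, 8, 7, 10, 8, 5, 11, 9]

t_stat, p_value = stats.ttest_rel(errors_pre, errors_post)
mean_reduction = sum(a - b for a, b in zip(errors_pre, errors_post)) / len(errors_pre)

print(f"Mean reduction in errors: {mean_reduction:.1f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```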