Browsing by Author "Hung, Andrew J."
Now showing 1 - 3 of 3
Item: An Assessment Tool to Provide Targeted Feedback to Robotic Surgical Trainees: Development and Validation of the End-to-End Assessment of Suturing Expertise (EASE) (American Urological Association, 2022-11)
Authors: Haque, Taseen F.; Hui, Alvin; You, Jonathan; Ma, Runzhuo; Nguyen, Jessica H.; Lei, Xiaomeng; Cen, Steven; Aron, Monish; Collins, Justin W.; Djaladat, Hooman; Ghazi, Ahmed; Yates, Kenneth A.; Abreu, Andre L.; Daneshmand, Siamak; Desai, Mihir M.; Goh, Alvin C.; Hu, Jim C.; Lebastchi, Amir H.; Lendvay, Thomas S.; Porter, James; Schuckman, Anne K.; Sotelo, Rene; Sundaram, Chandru P.; Gill, Inderbir S.; Hung, Andrew J.
Urology, School of Medicine

Purpose: To create a suturing skills assessment tool that comprehensively defines criteria for the relevant sub-skills of suturing, and to confirm its validity.

Materials and Methods: Five expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into an exhaustive list of technical skill domains and sub-skill descriptions. Using the Delphi methodology, each CTA element was systematically reviewed by a multi-institutional panel of 16 surgical educators and implemented in the final product when the content validity index (CVI) reached ≥0.80. In the subsequent validation phase, 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE; 10 VUA were also scored using the Robotic Anastomosis Competency Evaluation (RACE), a previously validated but simplified suturing assessment tool. Inter-rater reliability was measured with the intra-class correlation coefficient (ICC) for normally distributed values and prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 cases) EASE scores from the non-training cases were compared using a generalized linear mixed model.

Results: After two rounds of the Delphi process, panelists agreed on 7 domains, 18 sub-skills, and 57 detailed sub-skill descriptions with CVI ≥0.80. Inter-rater reliability was moderately high (ICC median: 0.69, range: 0.51-0.97; PABAK: 0.77, range: 0.62-0.97). Multiple EASE sub-skill scores were able to distinguish surgeon experience. The Spearman's rho correlation between overall EASE and RACE scores was 0.635 (p=0.003).

Conclusions: Through a rigorous CTA and Delphi process, we have developed EASE, whose suturing sub-skills can distinguish surgeon experience while maintaining rater reliability.

Item: Artificial Intelligence Methods and Artificial Intelligence-Enabled Metrics for Surgical Education: A Multidisciplinary Consensus (Wolters Kluwer, 2022)
Authors: Vedula, S. Swaroop; Ghazi, Ahmed; Collins, Justin W.; Pugh, Carla; Stefanidis, Dimitrios; Meireles, Ozanan; Hung, Andrew J.; Schwaitzberg, Steven; Levy, Jeffrey S.; Sachdeva, Ajit K.; Collaborative for Advanced Assessment of Robotic Surgical Skills
Surgery, School of Medicine

Background: Artificial intelligence (AI) methods and AI-enabled metrics hold tremendous potential to advance surgical education. Our objective was to generate consensus guidance on specific needs for AI methods and AI-enabled metrics for surgical education.

Study design: The study included a systematic literature search, a virtual conference, and a 3-round Delphi survey of 40 representative multidisciplinary stakeholders with domain expertise selected through purposeful sampling. The accelerated Delphi process was completed within 10 days. The survey covered overall utility, the anticipated future (10-year time horizon), and applications for surgical training, assessment, and feedback. Consensus was defined as agreement among 80% or more of respondents. We coded survey questions into 11 themes and descriptively analyzed the responses.
Results: The respondents included surgeons (40%), engineers (15%), and affiliates of industry (27.5%), professional societies (7.5%), and regulatory agencies (7.5%), as well as a lawyer (2.5%). The survey included 155 questions; consensus was achieved on 136 (87.7%). The panel listed 6 deliverables each for AI-enhanced learning curve analytics and surgical skill assessment. For feedback, the panel identified 10 priority deliverables spanning 2-year (n = 2), 5-year (n = 4), and 10-year (n = 4) timeframes. Within 2 years, the panel expects development of methods to recognize anatomy in images of the surgical field and to provide surgeons with performance feedback immediately after an operation. The panel also identified 5 essential elements that should be included in operative performance reports for surgeons.

Conclusions: The Delphi panel consensus provides a specific, bold, and forward-looking roadmap for AI methods and AI-enabled metrics for surgical education.

Item: Development and validation of an objective scoring tool to evaluate surgical dissection: Dissection Assessment for Robotic Technique (DART) (American Urological Association Education and Research, Inc., 2021)
Authors: Vanstrum, Erik B.; Ma, Runzhuo; Maya-Silva, Jacqueline; Sanford, Daniel; Nguyen, Jessica H.; Lei, Xiaomeng; Chevinksy, Michael; Ghoreifi, Alireza; Han, Jullet; Polotti, Charles F.; Powers, Ryan; Yip, Wesley; Zhang, Michael; Aron, Monish; Collins, Justin; Daneshmand, Siamak; Davis, John W.; Desai, Mihir M.; Gerjy, Roger; Goh, Alvin C.; Kimmig, Rainer; Lendvay, Thomas S.; Porter, James; Sotelo, Rene; Sundaram, Chandru P.; Cen, Steven; Gill, Inderbir S.; Hung, Andrew J.
Urology, School of Medicine

Purpose: Evaluation of surgical competency has important implications for training new surgeons, accreditation, and improving patient outcomes. However, no method yet exists to specifically evaluate dissection performance. This project aimed to design a tool to assess the quality of surgical dissection.
Methods: The Delphi method was used to validate the structure and content of the dissection evaluation. A multi-institutional, multidisciplinary panel of 14 expert surgeons systematically evaluated each element of the dissection tool. Ten blinded reviewers then evaluated 46 de-identified videos of pelvic lymph node and seminal vesicle dissections performed during robot-assisted radical prostatectomy. Inter-rater variability was calculated using prevalence-adjusted bias-adjusted kappa (PABAK). The area under the receiver operating characteristic curve (AUC) was used to assess the power of overall DART scores, as well as individual domains, to discriminate trainees (≤100 robotic cases) from experts (>100 cases).

Results: Four rounds of the Delphi process achieved language and content validity for 27/28 elements. Use of a 3- or 5-point scale remained contested; thus, both scales were evaluated during validation. The 3-point scale showed improved kappa for each domain. Experts demonstrated significantly greater total scores on both scales (3-point, p<0.001; 5-point, p<0.001). The ability to distinguish experience was equivalent for the total score on both scales (3-point AUC=0.92, CI 0.82-1.00; 5-point AUC=0.92, CI 0.83-1.00).

Conclusions: We present the development and validation of the Dissection Assessment for Robotic Technique (DART), an objective and reproducible 3-point surgical assessment of tissue dissection. DART can effectively differentiate levels of surgeon experience and can be used across multiple surgical steps.
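The reliability and discrimination statistics these abstracts rely on (PABAK and AUC) both reduce to simple closed-form computations. The sketch below is purely illustrative: the rater and score data are synthetic, not drawn from any of the studies above, and it assumes binary per-item ratings for PABAK and the rank-based (Mann-Whitney) formulation of AUC.

```python
def pabak(ratings_a, ratings_b):
    """Prevalence-adjusted bias-adjusted kappa for two binary raters.

    For two categories, PABAK = 2 * (observed agreement) - 1, which
    corrects Cohen's kappa for skewed category prevalence.
    """
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return 2 * p_o - 1


def auc(expert_scores, trainee_scores):
    """AUC via the Mann-Whitney formulation.

    AUC = P(randomly chosen expert score > randomly chosen trainee
    score), counting ties as 1/2 -- equivalent to the area under the
    ROC curve for a score that discriminates the two groups.
    """
    wins = 0.0
    for e in expert_scores:
        for t in trainee_scores:
            if e > t:
                wins += 1.0
            elif e == t:
                wins += 0.5
    return wins / (len(expert_scores) * len(trainee_scores))


# Synthetic example data (not from the studies above)
rater1 = [1, 1, 0, 1, 0, 1, 1, 0]
rater2 = [1, 1, 0, 1, 1, 1, 1, 0]
print(pabak(rater1, rater2))  # 7/8 agreement -> 0.75

experts = [27, 25, 28, 26]   # hypothetical total assessment scores
trainees = [20, 22, 25, 19]
print(auc(experts, trainees))  # 15.5 of 16 pairs won -> 0.96875
```

A perfectly discriminating score gives AUC = 1.0 and chance-level discrimination gives 0.5, which is why the reported DART AUC of 0.92 indicates strong separation of trainees from experts.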