Browsing by Author "Nguyen, Jessica H."
Now showing 1 - 2 of 2
Item
An Assessment Tool to Provide Targeted Feedback to Robotic Surgical Trainees: Development and Validation of the End-to-End Assessment of Suturing Expertise (EASE)
(American Urological Association, 2022-11)
Authors: Haque, Taseen F.; Hui, Alvin; You, Jonathan; Ma, Runzhuo; Nguyen, Jessica H.; Lei, Xiaomeng; Cen, Steven; Aron, Monish; Collins, Justin W.; Djaladat, Hooman; Ghazi, Ahmed; Yates, Kenneth A.; Abreu, Andre L.; Daneshmand, Siamak; Desai, Mihir M.; Goh, Alvin C.; Hu, Jim C.; Lebastchi, Amir H.; Lendvay, Thomas S.; Porter, James; Schuckman, Anne K.; Sotelo, Rene; Sundaram, Chandru P.; Gill, Inderbir S.; Hung, Andrew J.
Department: Urology, School of Medicine

Purpose: To create a suturing skills assessment tool that comprehensively defines criteria for the relevant sub-skills of suturing, and to confirm its validity.

Materials and Methods: Five expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into an exhaustive list of technical skill domains and sub-skill descriptions. Using the Delphi methodology, each CTA element was systematically reviewed by a multi-institutional panel of 16 surgical educators and implemented in the final product when its content validity index (CVI) reached ≥0.80. In the subsequent validation phase, 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE; 10 VUA were also scored using the Robotic Anastomosis Competency Evaluation (RACE), a previously validated but simpler suturing assessment tool. Inter-rater reliability was measured with the intra-class correlation coefficient (ICC) for normally distributed values and prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 cases) EASE scores from the non-training cases were compared using a generalized linear mixed model.

Results: After two rounds of the Delphi process, panelists agreed on 7 domains, 18 sub-skills, and 57 detailed sub-skill descriptions with CVI ≥0.80. Inter-rater reliability was moderately high (ICC median: 0.69, range: 0.51-0.97; PABAK: 0.77, range: 0.62-0.97). Multiple EASE sub-skill scores distinguished surgeon experience. The Spearman's rho correlation between overall EASE and RACE scores was 0.635 (p=0.003).

Conclusions: Through a rigorous CTA and Delphi process, we have developed EASE, whose suturing sub-skills can distinguish surgeon experience while maintaining rater reliability.
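Both records above rest on the same two agreement statistics: the content validity index used as the Delphi retention threshold, and PABAK for inter-rater reliability. The following is a minimal Python sketch of both; the function names and the example ratings are illustrative assumptions, not the studies' code or data.

```python
# Minimal sketch of the two agreement statistics used in these abstracts.
# All names and numbers below are hypothetical illustrations.

def content_validity_index(ratings, relevant=(3, 4)):
    """Item-level CVI: the fraction of panelists who rate an element as
    relevant (e.g. 3 or 4 on a 4-point relevance scale). The Delphi
    panels retained an element once its CVI reached >= 0.80."""
    return sum(r in relevant for r in ratings) / len(ratings)

def pabak(rater_a, rater_b, k):
    """Prevalence-adjusted bias-adjusted kappa for k rating categories:
    PABAK = (k * Po - 1) / (k - 1), where Po is the proportion of
    exact agreement between the two raters."""
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return (k * po - 1) / (k - 1)

# Hypothetical 16-member panel rating one CTA element on a 4-point scale:
print(content_validity_index([4, 4, 3, 4, 2, 4, 3, 3, 4, 4, 3, 4, 4, 3, 4, 4]))  # 0.9375

# Hypothetical pair of raters scoring 8 videos on a 3-point sub-skill scale:
print(pabak([1, 2, 2, 3, 1, 2, 3, 3], [1, 2, 2, 3, 2, 2, 3, 3], k=3))  # 0.8125
```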
Item
Development and validation of an objective scoring tool to evaluate surgical dissection: Dissection Assessment for Robotic Technique (DART)
(American Urological Association Education and Research, Inc., 2021)
Authors: Vanstrum, Erik B.; Ma, Runzhuo; Maya-Silva, Jacqueline; Sanford, Daniel; Nguyen, Jessica H.; Lei, Xiaomeng; Chevinksy, Michael; Ghoreifi, Alireza; Han, Jullet; Polotti, Charles F.; Powers, Ryan; Yip, Wesley; Zhang, Michael; Aron, Monish; Collins, Justin; Daneshmand, Siamak; Davis, John W.; Desai, Mihir M.; Gerjy, Roger; Goh, Alvin C.; Kimmig, Rainer; Lendvay, Thomas S.; Porter, James; Sotelo, Rene; Sundaram, Chandru P.; Cen, Steven; Gill, Inderbir S.; Hung, Andrew J.
Department: Urology, School of Medicine

Purpose: Evaluation of surgical competency has important implications for training new surgeons, accreditation, and improving patient outcomes. A method to specifically evaluate dissection performance does not yet exist. This project aimed to design a tool to assess the quality of surgical dissection.

Methods: The Delphi method was used to validate the structure and content of the dissection evaluation. A multi-institutional, multidisciplinary panel of 14 expert surgeons systematically evaluated each element of the dissection tool. Ten blinded reviewers then evaluated 46 de-identified videos of pelvic lymph node and seminal vesicle dissections performed during robot-assisted radical prostatectomy. Inter-rater variability was calculated using prevalence-adjusted bias-adjusted kappa (PABAK). The area under the receiver operating characteristic curve (AUC) was used to assess the power of overall DART scores, as well as individual domains, to discriminate trainees (≤100 robotic cases) from experts (>100).

Results: Four rounds of the Delphi method achieved language and content validity for 27/28 elements. Use of a 3- or 5-point scale remained contested; thus, both scales were evaluated during validation. The 3-point scale showed improved kappa for each domain. Experts demonstrated significantly greater total scores on both scales (3-point, p<0.001; 5-point, p<0.001). The ability to distinguish experience was equivalent for the total score on both scales (3-point AUC = 0.92, CI 0.82-1.00; 5-point AUC = 0.92, CI 0.83-1.00).

Conclusions: We present the development and validation of the Dissection Assessment for Robotic Technique (DART), an objective and reproducible 3-point surgical assessment for evaluating tissue dissection. DART can effectively differentiate levels of surgeon experience and can be used across multiple surgical steps.
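As a companion illustration of the DART record's discrimination analysis, here is a short sketch of using each video's total score to separate trainees from experts via AUC. The labels and scores are invented stand-ins, and scikit-learn's roc_auc_score is assumed as the ROC/AUC implementation; this is not the study's analysis code.

```python
# Hypothetical sketch of the discrimination analysis: does the total DART
# score separate trainees (<=100 robotic cases) from experts (>100)?
# Assumes scikit-learn; the labels and scores below are invented examples.
from sklearn.metrics import roc_auc_score

experience = [1, 1, 1, 1, 0, 0, 0, 0]          # 1 = expert, 0 = trainee
total_dart = [26, 24, 27, 23, 18, 24, 17, 19]  # total DART score per video

# AUC = 0.5 means chance-level discrimination and 1.0 perfect separation;
# the abstract reports AUC = 0.92 for the total score on both scales.
print(roc_auc_score(experience, total_dart))
```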