Browsing by Author "Collins, Justin W."
Now showing 1 - 4 of 4
Item: An Assessment Tool to Provide Targeted Feedback to Robotic Surgical Trainees: Development and Validation of the End-to-End Assessment of Suturing Expertise (EASE) (American Urological Association, 2022-11)
Authors: Haque, Taseen F.; Hui, Alvin; You, Jonathan; Ma, Runzhuo; Nguyen, Jessica H.; Lei, Xiaomeng; Cen, Steven; Aron, Monish; Collins, Justin W.; Djaladat, Hooman; Ghazi, Ahmed; Yates, Kenneth A.; Abreu, Andre L.; Daneshmand, Siamak; Desai, Mihir M.; Goh, Alvin C.; Hu, Jim C.; Lebastchi, Amir H.; Lendvay, Thomas S.; Porter, James; Schuckman, Anne K.; Sotelo, Rene; Sundaram, Chandru P.; Gill, Inderbir S.; Hung, Andrew J.
Department: Urology, School of Medicine

Purpose: To create a suturing skills assessment tool that comprehensively defines criteria around relevant sub-skills of suturing and to confirm its validity.

Materials and Methods: Five expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into an exhaustive list of technical skill domains and sub-skill descriptions. Using the Delphi methodology, each CTA element was systematically reviewed by a multi-institutional panel of 16 surgical educators and implemented in the final product when the content validity index (CVI) reached ≥0.80. In the subsequent validation phase, 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE; 10 VUA were also scored using the Robotic Anastomosis Competency Evaluation (RACE), a previously validated but simplified suturing assessment tool. Inter-rater reliability was measured with the intra-class correlation coefficient (ICC) for normally distributed values and prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 cases) EASE scores from the non-training cases were compared using a generalized linear mixed model.

Results: After two rounds of the Delphi process, panelists agreed on 7 domains, 18 sub-skills, and 57 detailed sub-skill descriptions with CVI ≥0.80. Inter-rater reliability was moderately high (ICC median: 0.69, range: 0.51-0.97; PABAK median: 0.77, range: 0.62-0.97). Multiple EASE sub-skill scores were able to distinguish surgeon experience. The Spearman's rho correlation between overall EASE and RACE scores was 0.635 (p = 0.003).

Conclusions: Through a rigorous CTA and Delphi process, we have developed EASE, whose suturing sub-skills can distinguish surgeon experience while maintaining rater reliability.
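The EASE abstract leans on two panel and rater statistics: the item-level content validity index (CVI), the proportion of panelists rating an element as relevant, and PABAK, which for two raters on a binary scale reduces to 2*Po - 1, with Po the observed agreement. A minimal sketch of both, where all panel ratings and case scores are hypothetical, not the study's data:

```python
# Illustrative implementations of two statistics named in the EASE
# abstract; all ratings below are made up, not the study's data.

def item_cvi(ratings):
    """Content validity index: proportion of panelists rating an
    element as relevant (1 = relevant, 0 = not relevant)."""
    return sum(ratings) / len(ratings)

def pabak(pairs):
    """Prevalence-adjusted bias-adjusted kappa for two raters on a
    binary scale: PABAK = 2 * Po - 1, with Po the observed agreement."""
    po = sum(1 for a, b in pairs if a == b) / len(pairs)
    return 2 * po - 1

# 16 hypothetical panelists reviewing one CTA element
panel = [1] * 15 + [0]
print(f"CVI = {item_cvi(panel):.2f}")  # 0.94, clears the >= 0.80 bar

# two hypothetical raters scoring 10 cases on one binary sub-skill
scores = [(1, 1), (1, 1), (0, 0), (1, 0), (1, 1),
          (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
print(f"PABAK = {pabak(scores):.2f}")  # 9/10 agreement -> 0.80
```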
Item: Artificial Intelligence Methods and Artificial Intelligence-Enabled Metrics for Surgical Education: A Multidisciplinary Consensus (Wolters Kluwer, 2022)
Authors: Vedula, S. Swaroop; Ghazi, Ahmed; Collins, Justin W.; Pugh, Carla; Stefanidis, Dimitrios; Meireles, Ozanan; Hung, Andrew J.; Schwaitzberg, Steven; Levy, Jeffrey S.; Sachdeva, Ajit K.; Collaborative for Advanced Assessment of Robotic Surgical Skills
Department: Surgery, School of Medicine

Background: Artificial intelligence (AI) methods and AI-enabled metrics hold tremendous potential to advance surgical education. Our objective was to generate consensus guidance on specific needs for AI methods and AI-enabled metrics for surgical education.

Study design: The study included a systematic literature search, a virtual conference, and a 3-round Delphi survey of 40 representative multidisciplinary stakeholders with domain expertise selected through purposeful sampling. The accelerated Delphi process was completed within 10 days. The survey covered overall utility, the anticipated future (10-year time horizon), and applications for surgical training, assessment, and feedback. Consensus was defined as agreement among 80% or more of respondents. We coded survey questions into 11 themes and descriptively analyzed the responses.

Results: The respondents included surgeons (40%), engineers (15%), affiliates of industry (27.5%), professional societies (7.5%), regulatory agencies (7.5%), and a lawyer (2.5%). The survey included 155 questions; consensus was achieved on 136 (87.7%). The panel listed 6 deliverables each for AI-enhanced learning curve analytics and surgical skill assessment. For feedback, the panel identified 10 priority deliverables spanning 2-year (n = 2), 5-year (n = 4), and 10-year (n = 4) timeframes. Within 2 years, the panel expects development of methods to recognize anatomy in images of the surgical field and to provide surgeons with performance feedback immediately after an operation. The panel also identified 5 essential metrics that should be included in operative performance reports for surgeons.

Conclusions: The Delphi panel consensus provides a specific, bold, and forward-looking roadmap for AI methods and AI-enabled metrics for surgical education.
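The consensus rule in the study design above (agreement among 80% or more of respondents) is straightforward to operationalize when tallying Delphi rounds. A minimal sketch, using hypothetical question IDs and vote counts rather than the survey's actual responses:

```python
# Hypothetical tally of Delphi consensus under the rule stated in the
# study design: consensus = agreement among >= 80% of respondents.

CONSENSUS = 0.80

def reaches_consensus(agree: int, total: int) -> bool:
    return agree / total >= CONSENSUS

# made-up question IDs and vote counts for a 40-person panel
votes = {"Q001": (36, 40), "Q002": (31, 40), "Q003": (40, 40)}

for qid, (agree, total) in votes.items():
    verdict = "consensus" if reaches_consensus(agree, total) else "no consensus"
    print(f"{qid}: {agree}/{total} agree -> {verdict}")
# Q001 (90%) and Q003 (100%) reach consensus; Q002 (77.5%) does not
```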
Item: Expert Consensus Recommendations for Robotic Surgery Credentialing (Wolters Kluwer, 2020-11)
Authors: Stefanidis, Dimitrios; Huffman, Elizabeth M.; Collins, Justin W.; Martino, Martin A.; Satava, Richard M.; Levy, Jeffrey S.
Department: Surgery, School of Medicine

Objective: To define criteria for robotic credentialing using expert consensus.

Background: A recent review of institutional robotic credentialing policies identified significant variability and determined that current policies are largely inadequate to ensure surgeon proficiency and may threaten patient safety.

Methods: 28 national robotic surgery experts were invited to participate in a consensus conference. After review of available institutional policies and discussion, the group developed a list of 91 proposed criteria. Using a modified Delphi process, the experts were asked to indicate their agreement with the proposed criteria in three electronic survey rounds after the conference. Criteria that achieved 80% or more agreement (consensus) in all rounds were included in the final list.

Results: All experts agreed that there is a need for standardized robotic surgery credentialing criteria across institutions that promote surgeon proficiency. 49 items reached consensus in the first round, 19 in the second, and 8 in the third, for a total of 76 final items. Experts agreed that privileges should be granted based on video review of surgical performance and attainment of clearly defined objective proficiency benchmarks. Parameters for ongoing outcome monitoring were determined, and recommendations for technical skills training, proctoring, and performance assessment were defined.

Conclusions: Using a systematic approach, detailed credentialing criteria for robotic surgery were defined. Implementation of these criteria uniformly across institutions will promote proficiency of robotic surgeons and has the potential to positively impact patient outcomes.

Item: Utilising an Accelerated Delphi Process to Develop Guidance and Protocols for Telepresence Applications in Remote Robotic Surgery Training (Elsevier, 2020-12)
Authors: Collins, Justin W.; Ghazi, Ahmed; Stoyanov, Danail; Hung, Andrew; Coleman, Mark; Cecil, Tom; Ericsson, Anders; Anvari, Mehran; Wang, Yulun; Beaulieu, Yanick; Haram, Nadine; Sridhar, Ashwin; Marescaux, Jacques; Diana, Michele; Marcus, Hani J.; Levy, Jeffrey; Dasgupta, Prokar; Stefanidis, Dimitrios; Martino, Martin; Feins, Richard; Patel, Vipul; Slack, Mark; Satava, Richard M.; Kelly, John D.
Department: Surgery, School of Medicine

Context: The role of robot-assisted surgery continues to expand at a time when trainers and proctors face travel restrictions during the coronavirus disease 2019 (COVID-19) pandemic.

Objective: To provide guidance on setting up and running an optimised telementoring service that can be integrated into current validated curricula. We define a standardised approach to training candidates in skill acquisition via telepresence technologies. We aim to describe an approach based on the current evidence and available technologies, and to define the key elements of optimised telepresence services, by seeking consensus from an expert committee comprising key opinion leaders in training.

Evidence acquisition: This project was carried out in phases: a systematic review of the current literature, a teleconference meeting, and an initial survey that was formulated from the current evidence and expert opinion and sent to the committee. Twenty-four experts in training, including clinicians, academics, and industry representatives, contributed to the Delphi process. The accelerated Delphi process ran over three rounds and was completed within 72 h. Additions to the second- and third-round surveys were formulated based on the answers and comments from the previous rounds. Consensus opinion was defined as ≥80% agreement.

Evidence synthesis: There was 100% consensus regarding an urgent need for international agreement on guidance for optimised telepresence. Consensus was reached in multiple areas, including (1) infrastructure and functionality; (2) definitions and terminology; (3) protocols for training, communication, and safety issues; and (4) accountability, including ethical and legal issues. The resulting formulated guidance showed good internal consistency among experts, with a Cronbach alpha of 0.90.

Conclusions: Using the Delphi methodology, we achieved international consensus among experts for the development and content validation of optimised telepresence services for robotic surgery training. This guidance lays the foundation for launching telepresence services in robotic surgery and will require further validation.
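The telepresence abstract reports internal consistency as a Cronbach alpha of 0.90. Cronbach's alpha for k survey items is k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up ratings; the toy data below yields alpha of roughly 0.77, not the study's reported 0.90:

```python
# Illustrative Cronbach's alpha computation; the 4 items and 6
# respondents below are invented, not the study's survey data.
import statistics

def cronbach_alpha(items):
    """items: one list of responses per survey item, all the same
    length (one response per respondent)."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    total_scores = [sum(per_respondent) for per_respondent in zip(*items)]
    total_var = statistics.variance(total_scores)
    return (k / (k - 1)) * (1 - item_vars / total_var)

items = [
    [5, 4, 5, 4, 5, 4],  # item 1 ratings from 6 respondents (1-5 scale)
    [5, 4, 4, 4, 5, 5],  # item 2
    [4, 4, 5, 3, 5, 4],  # item 3
    [5, 5, 5, 4, 5, 4],  # item 4
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # 0.77 for this toy data
```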