Browsing by Author "Walker, Andrew"
Now showing 1 - 2 of 2
Item: Evaluation of federated learning variations for COVID-19 diagnosis using chest radiographs from 42 US and European hospitals (Oxford University Press, 2022)
Authors: Peng, Le; Luo, Gaoxiang; Walker, Andrew; Zaiman, Zachary; Jones, Emma K.; Gupta, Hemant; Kersten, Kristopher; Burns, John L.; Harle, Christopher A.; Magoc, Tanja; Shickel, Benjamin; Steenburg, Scott D.; Loftus, Tyler; Melton, Genevieve B.; Wawira Gichoya, Judy; Sun, Ju; Tignanelli, Christopher J.
Affiliation: Radiology and Imaging Sciences, School of Medicine

Objective: Federated learning (FL) allows multiple distributed data holders to collaboratively learn a shared model without sharing data. However, individual health system data are heterogeneous. "Personalized" FL variations have been developed to counter data heterogeneity, but few have been evaluated using real-world healthcare data. The purpose of this study is to investigate the performance of a single-site versus a 3-client federated model using a previously described Coronavirus Disease 2019 (COVID-19) diagnostic model. Additionally, to investigate the effect of system heterogeneity, we evaluate the performance of 4 FL variations.

Materials and methods: We leverage an FL healthcare collaborative including data from 5 international healthcare systems (US and Europe) encompassing 42 hospitals. We implemented a COVID-19 computer vision diagnosis system using the Federated Averaging (FedAvg) algorithm implemented on Clara Train SDK 4.0. To study the effect of data heterogeneity, training data were pooled from 3 systems locally and federation was simulated. We compared a centralized/pooled model against FedAvg and 3 personalized FL variations (FedProx, FedBN, and FedAMP).

Results: We observed comparable model performance with respect to internal validation (local model: AUROC 0.94 vs FedAvg: 0.95, P = .5) and improved model generalizability with the FedAvg model (P < .05).
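The FedAvg algorithm named in the abstract is, at its core, a sample-size-weighted average of client model parameters computed by the server each round. As an illustration only (this is our minimal NumPy sketch, not the Clara Train SDK implementation used in the study, and all names in it are ours):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: the server averages each client's
    parameters, weighted by that client's number of training samples.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   number of training samples held by each client
    """
    total = sum(client_sizes)
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Three simulated clients, each holding a single parameter tensor "w"
clients = [{"w": np.array([1.0])}, {"w": np.array([2.0])}, {"w": np.array([4.0])}]
sizes = [10, 10, 20]  # samples per client
global_model = fedavg(clients, sizes)
print(global_model["w"])  # [2.75] = (10*1 + 10*2 + 20*4) / 40
```

The personalized variants compared in the study change what happens around this step rather than the averaging itself: FedProx adds a proximal term to each client's local objective, and FedBN keeps batch-normalization parameters local (excluding them from the average) so each client retains its own normalization statistics.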
When investigating the effects of model heterogeneity, we observed poor performance with FedAvg on internal validation compared to the personalized FL algorithms, although FedAvg had improved generalizability compared to the personalized FL algorithms. On average, FedBN had the best rank performance on internal and external validation.

Conclusion: FedAvg can significantly improve model generalization compared to personalized FL algorithms, but at the cost of poor internal validity. Personalized FL may offer an opportunity to develop algorithms that are both internally and externally valid.

Item: A Prospective Multicenter Study Evaluating Learning Curves and Competence in Endoscopic Ultrasound and Endoscopic Retrograde Cholangiopancreatography Among Advanced Endoscopy Trainees: The Rapid Assessment of Trainee Endoscopy Skills (RATES) Study (Elsevier, 2017)
Authors: Wani, Sachin; Keswani, Rajesh; Hall, Matt; Han, Samuel; Ali, Meer Akbar; Brauer, Brian; Carlin, Linda; Chak, Amitabh; Collins, Dan; Cote, Gregory A.; Diehl, David L.; DiMaio, Christopher J.; Dries, Andrew; El-Hajj, Ihab; Ellert, Swan; Fairley, Kimberley; Faulx, Ashley; Fujii-Lau, Larissa; Gaddam, Srinivas; Gan, Seng-Ian; Gaspar, Jonathan P.; Gautamy, Chitiki; Gordon, Stuart; Harris, Cynthia; Hyder, Sarah; Jones, Ross; Kim, Stephen; Komanduri, Srinadh; Law, Ryan; Lee, Linda; Mounzer, Rawad; Mullady, Daniel; Muthusamy, V. Raman; Olyaee, Mojtaba; Pfau, Patrick; Saligram, Shreyas; Piraka, Cyrus; Rastogi, Amit; Rosenkranz, Laura; Rzouq, Fadi; Saxena, Aditi; Shah, Raj J.; Simon, Violette C.; Small, Aaron; Sreenarasimhaiah, Jayaprakash; Walker, Andrew; Wang, Andrew Y.; Watson, Rabindra R.; Wilson, Robert H.; Yachimski, Patrick; Yang, Dennis; Edmundowicz, Steven; Early, Dayna S.
Affiliation: Department of Medicine, IU School of Medicine

Background and aims: Based on the Next Accreditation System, trainee assessment should occur on a continuous basis with individualized feedback.
We aimed to validate endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP) learning curves among advanced endoscopy trainees (AETs) using a large national sample of training programs, and to develop a centralized database that allows assessment of performance in relation to peers.

Methods: ASGE-recognized training programs were invited to participate, and AETs were graded on EUS and ERCP exams using a validated competency assessment tool that assesses technical and cognitive competence in a continuous fashion. Grading for each skill was done using a 4-point scoring system, and a comprehensive data collection and reporting system was built to create learning curves using cumulative sum (CUSUM) analysis. Individual results and benchmarking to peers were shared with AETs and trainers quarterly.

Results: Of the 62 programs invited, 20 programs and 22 AETs participated in this study. At the end of training, the median number of EUS and ERCP exams performed per AET was 300 (range, 155-650) and 350 (range, 125-500), respectively. Overall, 3786 exams were graded (EUS: 1137; ERCP: biliary 2280, pancreatic 369). Learning curves for individual endpoints and overall technical/cognitive aspects of EUS and ERCP demonstrated substantial variability and were successfully shared with all programs. The majority of trainees achieved overall technical (EUS: 82%; ERCP: 60%) and cognitive (EUS: 76%; ERCP: 100%) competence at the conclusion of training.

Conclusions: These results demonstrate the feasibility of establishing a centralized database to report individualized learning curves, and confirm the substantial variability in time to achieve competence among AETs in EUS and ERCP.
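The CUSUM learning curves described above track a running score over successive graded procedures. As a hedged sketch of one standard CUSUM formulation (the threshold value and scoring here are illustrative placeholders, not the RATES study's actual parameters): each failure raises the score by (1 − p0) and each success lowers it by p0, where p0 is the acceptable failure rate, so the curve declines once a trainee's observed failure rate falls below p0.

```python
def cusum_curve(outcomes, acceptable_failure_rate=0.1):
    """CUSUM learning curve for a sequence of graded procedures.

    outcomes: iterable of bools, True = procedure met the competence
    criterion. Failures add (1 - p0); successes subtract p0. The curve
    trends downward once the trainee fails less often than the
    acceptable failure rate p0.
    """
    p0 = acceptable_failure_rate
    score, curve = 0.0, []
    for success in outcomes:
        score += -p0 if success else (1.0 - p0)
        curve.append(score)
    return curve

# A trainee who fails two early attempts, then succeeds consistently:
# the curve peaks at the second failure and declines thereafter.
print(cusum_curve([False, False, True, True, True, True], 0.2))
```

Plotting this score against procedure number yields the per-trainee curves that the study's reporting system shared with programs quarterly.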