Browsing by Author "Tignanelli, Christopher J."
Now showing 1 - 4 of 4
Item: A pragmatic, stepped-wedge, hybrid type II trial of interoperable clinical decision support to improve venous thromboembolism prophylaxis for patients with traumatic brain injury (Springer Nature, 2024-08-05)
Authors: Tignanelli, Christopher J.; Shah, Surbhi; Vock, David; Siegel, Lianne; Serrano, Carlos; Haut, Elliott; Switzer, Sean; Martin, Christie L.; Rizvi, Rubina; Peta, Vincent; Jenkins, Peter C.; Lemke, Nicholas; Thyvalikakath, Thankam; Osheroff, Jerome A.; Torres, Denise; Vawdrey, David; Callcut, Rachael A.; Butler, Mary; Melton, Genevieve B.
Department: Surgery, School of Medicine

Background: Venous thromboembolism (VTE) is a preventable medical condition with a substantial impact on patient morbidity, mortality, and disability. Unfortunately, adherence to published best practices for VTE prevention, based on patient-centered outcomes research (PCOR), is highly variable across U.S. hospitals, a gap between current evidence and clinical practice that leads to adverse patient outcomes. This gap is especially large in traumatic brain injury (TBI), where reluctance to initiate VTE prevention, driven by concern about increasing rates of intracranial bleeding, keeps prophylaxis rates low, despite research showing that early initiation of VTE prophylaxis is safe in TBI, with no increased risk of delayed neurosurgical intervention or death. Clinical decision support (CDS) is an indispensable tool for closing this practice gap; however, design and implementation barriers hinder CDS adoption and scaling across health systems. Clinical practice guidelines (CPGs) informed by PCOR evidence can be deployed through CDS systems to narrow the evidence-to-practice gap. In the Scaling AcceptabLE cDs (SCALED) study, we will implement a VTE prevention CPG within an interoperable CDS system and evaluate both CPG effectiveness (improved clinical outcomes) and CDS implementation.

Methods: The SCALED trial is a hybrid type 2 randomized stepped-wedge effectiveness-implementation trial to scale the CDS across 4 heterogeneous healthcare systems. Trial outcomes will be assessed using the RE2-AIM planning and evaluation framework. Efforts will be made to ensure implementation consistency; nonetheless, CDS adoption is expected to vary across sites. To assess these differences, we will evaluate implementation processes across trial sites with mixed methods, guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) determinant framework. Finally, PCOR CPGs must be maintained as evidence evolves, yet no accepted process for evidence maintenance exists to date. We will pilot a "Living Guideline" process model for the VTE prevention CDS system.

Discussion: The stepped-wedge hybrid type 2 trial will provide evidence on the effectiveness of CDS based on the Berne-Norwood criteria for VTE prevention in patients with TBI. It will also provide evidence on a strategy to scale interoperable CDS systems across U.S. healthcare systems, advancing both implementation science and health informatics.
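In a stepped-wedge design, every site begins in the control condition and crosses over to the intervention at a randomized, staggered step, so each site contributes both control and intervention periods. A minimal Python sketch of generating such a schedule, assuming one site crosses over per step; the 4-site, 4-step layout is a hypothetical illustration, not the SCALED protocol's actual randomization:

import random

def stepped_wedge_schedule(sites, n_steps, seed=2024):
    """Assign each site a randomized crossover step.
    0 = control period, 1 = intervention period."""
    rng = random.Random(seed)
    order = list(sites)
    rng.shuffle(order)
    schedule = {}
    for step, site in enumerate(order, start=1):
        # A site stays in control for `step` periods, then switches permanently.
        schedule[site] = [0] * step + [1] * (n_steps - step + 1)
    return schedule

# Hypothetical 4-site trial: 4 crossover steps -> 5 measurement periods per site.
for site, periods in stepped_wedge_schedule(["Site1", "Site2", "Site3", "Site4"], 4).items():
    print(site, periods)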
Item: Comparison of a Trauma Comorbidity Index with Other Measures of Comorbidities to Estimate Risk of Trauma Mortality (Wiley Online Library, 2021-04-29)
Authors: Jenkins, Peter C.; Dixon, Brian E.; Savage, Stephanie A.; Carroll, Aaron E.; Newgard, Craig D.; Tignanelli, Christopher J.; Hemmila, Mark R.; Timsina, Lava
Department: Surgery, School of Medicine

Background: Comorbidities influence the outcomes of injured patients, yet no consensus exists on how to quantify that association. This study details the development and internal validation of a trauma comorbidity index (TCI) designed for use with trauma registry data and compares its performance with other existing measures for estimating the association between comorbidities and mortality.

Methods: Indiana state trauma registry data (2013-2015) were used to compare the TCI with the Charlson and Elixhauser comorbidity indices, a count of comorbidities, and comorbidities entered as separate variables. The TCI was developed in a randomly selected training cohort and internally validated in a distinct testing cohort. To assess model discrimination, the C-statistic of the adjusted models was computed for each comorbidity measure in the testing cohort. C-statistics were compared using a Wald test, and stratified analyses were performed based on predicted risk of mortality. Multiple imputation was used to address missing data.

Results: The study included 84,903 patients (50% each in the training and testing cohorts). The Indiana TCI model demonstrated no significant difference between the testing and training cohorts (p = 0.33). It produced a C-statistic of 0.924 in the testing cohort, significantly greater than that of models using the other indices (p < 0.05). The C-statistics of models using the Indiana TCI and comorbidities as separate variables (the method used by the American College of Surgeons Trauma Quality Improvement Program) were comparable (p = 0.11), but the TCI approach reduced the number of comorbidity-related variables in the mortality model from 19 to one.

Conclusions: When examining trauma mortality, the TCI approach using Indiana state trauma registry data demonstrated superior model discrimination and/or parsimony compared with other measures of comorbidities.
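The trade-off the study quantifies is between a mortality model carrying each comorbidity as its own covariate and one collapsing them into a single index. A minimal Python sketch of that comparison on synthetic data; the indicator count, effect sizes, and weighting scheme below are illustrative assumptions, not the published TCI derivation:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, k = 10_000, 19                      # patients, comorbidity flags
X = rng.binomial(1, 0.1, size=(n, k))  # synthetic comorbidity indicators
beta = rng.normal(0.5, 0.3, size=k)    # illustrative comorbidity effects
p = 1 / (1 + np.exp(-(X @ beta - 3)))
y = rng.binomial(1, p)                 # synthetic mortality outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Model 1: all 19 comorbidities as separate covariates (TQIP-style).
m_full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Model 2: collapse the comorbidities into one index using weights learned
# in the training cohort, then refit a single-variable model (index-style).
w = m_full.coef_.ravel()
tci_tr, tci_te = X_tr @ w, X_te @ w
m_index = LogisticRegression(max_iter=1000).fit(tci_tr.reshape(-1, 1), y_tr)

print("C-statistic, 19 covariates:",
      roc_auc_score(y_te, m_full.predict_proba(X_te)[:, 1]))
print("C-statistic, single index:",
      roc_auc_score(y_te, m_index.predict_proba(tci_te.reshape(-1, 1))[:, 1]))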
Item: Evaluation of federated learning variations for COVID-19 diagnosis using chest radiographs from 42 US and European hospitals (Oxford University Press, 2022)
Authors: Peng, Le; Luo, Gaoxiang; Walker, Andrew; Zaiman, Zachary; Jones, Emma K.; Gupta, Hemant; Kersten, Kristopher; Burns, John L.; Harle, Christopher A.; Magoc, Tanja; Shickel, Benjamin; Steenburg, Scott D.; Loftus, Tyler; Melton, Genevieve B.; Wawira Gichoya, Judy; Sun, Ju; Tignanelli, Christopher J.
Department: Radiology and Imaging Sciences, School of Medicine

Objective: Federated learning (FL) allows multiple distributed data holders to collaboratively learn a shared model without sharing data. However, individual health system data are heterogeneous. "Personalized" FL variations have been developed to counter data heterogeneity, but few have been evaluated using real-world healthcare data. The purpose of this study is to investigate the performance of a single-site versus a 3-client federated model using a previously described coronavirus disease 2019 (COVID-19) diagnostic model. Additionally, to investigate the effect of system heterogeneity, we evaluate the performance of 4 FL variations.

Materials and methods: We leverage an FL healthcare collaborative that includes data from 5 international healthcare systems (US and Europe) encompassing 42 hospitals. We implemented a COVID-19 computer vision diagnosis system using the Federated Averaging (FedAvg) algorithm, implemented on the Clara Train SDK 4.0. To study the effect of data heterogeneity, training data were pooled from 3 systems locally and federation was simulated. We compared a centralized (pooled) model, FedAvg, and 3 personalized FL variations (FedProx, FedBN, and FedAMP).

Results: We observed comparable model performance on internal validation (local model: AUROC 0.94 vs FedAvg: 0.95, P = .5) and improved model generalizability with the FedAvg model (P < .05). When investigating the effects of model heterogeneity, we observed poor performance with FedAvg on internal validation compared with the personalized FL algorithms, although FedAvg had improved generalizability compared with the personalized FL algorithms. On average, FedBN had the best rank performance on internal and external validation.

Conclusion: FedAvg can significantly improve model generalization compared with personalized FL algorithms, but at the cost of poor internal validity. Personalized FL may offer an opportunity to develop algorithms that are both internally and externally valid.
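FedAvg builds each round's global model as a data-size-weighted average of the clients' locally trained weights; the personalized variants modify this step (FedProx adds a proximal term to the local objective, FedBN keeps batch-normalization layers client-local, FedAMP maintains per-client models with attention-based message passing). A minimal Python sketch of the aggregation step alone, on plain NumPy arrays; this illustrates the algorithm, not the Clara Train SDK implementation the study used:

import numpy as np

def fedavg(client_weights, client_sizes):
    """One round of FedAvg aggregation.

    client_weights: list of per-client parameter lists (one np.ndarray per layer)
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    # Weight each client's parameters by its share of the training data.
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Hypothetical 3-client round with a two-layer model.
clients = [[np.ones((2, 2)) * c, np.ones(2) * c] for c in (1.0, 2.0, 3.0)]
global_model = fedavg(clients, client_sizes=[100, 200, 700])
print(global_model[0])  # pulled toward client 3, which holds most of the data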
Item: Performance of a Chest Radiograph AI Diagnostic Tool for COVID-19: A Prospective Observational Study (Radiological Society of North America, 2022-06-01)
Authors: Sun, Ju; Peng, Le; Li, Taihui; Adila, Dyah; Zaiman, Zach; Melton-Meaux, Genevieve B.; Ingraham, Nicholas E.; Murray, Eric; Boley, Daniel; Switzer, Sean; Burns, John L.; Huang, Kun; Allen, Tadashi; Steenburg, Scott D.; Wawira Gichoya, Judy; Kummerfeld, Erich; Tignanelli, Christopher J.
Department: Radiology and Imaging Sciences, School of Medicine

Purpose: To conduct a prospective observational study across 12 U.S. hospitals to evaluate the real-time performance of an interpretable artificial intelligence (AI) model for detecting COVID-19 on chest radiographs.

Materials and methods: A total of 95,363 chest radiographs were included in model training, external validation, and real-time validation. The model was deployed as a clinical decision support system, and performance was prospectively evaluated. There were 5,335 real-time predictions, with a COVID-19 prevalence of 4.8% (258 of 5,335). Model performance was assessed using receiver operating characteristic analysis, precision-recall curves, and the F1 score. Logistic regression was used to evaluate the association of race and sex with AI model diagnostic accuracy. To compare model accuracy with the performance of board-certified radiologists, a third dataset of 1,638 images was read independently by two radiologists.

Results: Participants positive for COVID-19 had higher COVID-19 diagnostic scores than participants negative for COVID-19 (median, 0.1 [IQR, 0.0-0.8] vs 0.0 [IQR, 0.0-0.1], respectively; P < .001). Real-time model performance was unchanged over 19 weeks of implementation (area under the receiver operating characteristic curve, 0.70; 95% CI: 0.66, 0.73). Model sensitivity was higher in men than in women (P = .01), whereas model specificity was higher in women (P = .001). Sensitivity was higher for Asian (P = .002) and Black (P = .046) participants than for White participants. The COVID-19 AI diagnostic system had worse accuracy (63.5% correct) than radiologist predictions (radiologist 1: 67.8% correct; radiologist 2: 68.6% correct; McNemar P < .001 for both).

Conclusion: AI-based tools have not yet reached their full diagnostic potential for COVID-19 and underperform compared with radiologist predictions.
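McNemar's test compares two paired classifiers (here, the AI and a radiologist reading the same images) using only the discordant pairs, i.e., the images exactly one reader got right. A minimal Python sketch; the counts below are hypothetical, chosen to be roughly consistent with the reported accuracies on 1,638 images but not taken from the study's data:

from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired-outcome table on the same 1,638 images:
# rows = AI correct / AI incorrect; cols = radiologist correct / incorrect.
table = [[960, 80],    # both correct | only the AI correct
         [150, 448]]   # only the radiologist correct | both incorrect

# Chi-square form with continuity correction; only the off-diagonal
# (discordant) cells, 80 vs 150, drive the statistic.
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2 = {result.statistic:.2f}, p = {result.pvalue:.4g}")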