
Browsing by Author "Barboi, Cristina"

Now showing 1 - 4 of 4
  • Comparison of Severity of Illness Scores and Artificial Intelligence Models That Are Predictive of Intensive Care Unit Mortality: Meta-analysis and Review of the Literature
    (JMIR, 2022-05-31) Barboi, Cristina; Tzavelis, Andreas; Muhammad, Lutfiyya NaQiyba; Anesthesia, School of Medicine
    Background: Severity of illness scores (Acute Physiology and Chronic Health Evaluation, Simplified Acute Physiology Score, and Sequential Organ Failure Assessment) are the current risk stratification and mortality prediction tools used in intensive care units (ICUs) worldwide. Developers of artificial intelligence or machine learning (ML) models that predict ICU mortality use severity of illness scores as a reference point when reporting the performance of these computational constructs.
    Objective: This study aimed to perform a literature review and meta-analysis of articles that compared binary classification ML models with the severity of illness scores used to predict ICU mortality, and to determine which models have superior performance. The review intends to provide actionable guidance to clinicians on the performance and validity of ML models, relative to severity of illness score models, in supporting clinical decision-making.
    Methods: Between December 15 and 18, 2020, we conducted a systematic search of the PubMed, Scopus, Embase, and IEEE databases and reviewed studies published between 2000 and 2020 that compared the performance of binary ML models predicting ICU mortality with that of severity of illness score models on the same data sets. We assessed the studies' characteristics, synthesized the results, meta-analyzed the discriminative performance of the ML and severity of illness score models, and performed tests of heterogeneity within and among studies.
    Results: We screened 461 abstracts and assessed the full text of 66 (14.3%) articles. The review included 20 (4.3%) studies that developed 47 ML models based on 7 types of algorithms and compared them with 3 types of severity of illness score models. Of the 20 studies, 4 (20%) had a low risk of bias and applicability in model development, 7 (35%) performed external validation, 9 (45%) reported on calibration, 12 (60%) reported on classification measures, and 4 (20%) addressed explainability. Discriminative performance, reported as the area under the receiver operating characteristic curve (AUROC), ranged between 0.728 and 0.99 for the ML-based models and between 0.58 and 0.86 for the severity of illness score-based models. We noted substantial heterogeneity among the reported models and considerable variation among the AUROC estimates for both model types.
    Conclusions: ML-based models can accurately predict ICU mortality and are an alternative to traditional scoring models. Although the reported performance range of the ML models exceeds that of the severity of illness score models, the results cannot be generalized because of the high degree of heterogeneity. When choosing between severity of illness score and ML models for decision support, clinicians should select models that have been externally validated, tested in the practice environment, and updated to the local patient population and practice environment.
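To make the pooling step described in this abstract concrete, here is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of AUROC estimates, a standard way to combine discrimination estimates when between-study heterogeneity is substantial. The study AUROCs and standard errors below are invented placeholders, not values from the review.

```python
# Random-effects pooling of AUROC estimates (DerSimonian-Laird).
# All numbers are hypothetical; they are not data from the reviewed studies.
import numpy as np

auroc = np.array([0.86, 0.79, 0.92, 0.75, 0.88])  # hypothetical per-study AUROCs
se = np.array([0.03, 0.04, 0.02, 0.05, 0.03])     # hypothetical standard errors

v = se ** 2                                       # within-study variances
w = 1.0 / v                                       # fixed-effect weights
pooled_fe = np.sum(w * auroc) / np.sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance estimate
k = len(auroc)
Q = np.sum(w * (auroc - pooled_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights fold tau^2 into each study's variance
w_re = 1.0 / (v + tau2)
pooled_re = np.sum(w_re * auroc) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100            # I^2 heterogeneity statistic

print(f"pooled AUROC = {pooled_re:.3f} "
      f"(95% CI {pooled_re - 1.96 * se_re:.3f} to {pooled_re + 1.96 * se_re:.3f}); "
      f"tau^2 = {tau2:.4f}; I^2 = {i2:.1f}%")
```

A random-effects model is the natural choice here because the abstract reports substantial heterogeneity; a fixed-effect pooling would understate the uncertainty around the summary AUROC.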
  • Improving the User Interface and Guiding the Development of Effective Training Material for a Clinical Research Recruitment and Retention Dashboard: Usability Testing Study
    (JMIR, 2025-02-24) Gardner, Leah Leslie; Parvari, Pezhman Raeisian; Seidman, Mark; Holden, Richard J.; Fowler, Nicole R.; Zarzaur, Ben L.; Summanwar, Diana; Barboi, Cristina; Boustani, Malaz; Medicine, School of Medicine
    Background: Participant recruitment and retention are critical to the success of clinical trials, yet challenges such as low enrollment rates and high attrition remain ongoing obstacles. RecruitGPS is a scalable dashboard with integrated control charts that addresses these issues by providing real-time data monitoring and analysis, enabling researchers to better track and improve recruitment and retention.
    Objective: This study aims to identify the challenges and inefficiencies users encounter when interacting with the RecruitGPS dashboard, and to use those findings to improve the dashboard's user interface and to create targeted, effective instructional materials that address user needs.
    Methods: Twelve clinical researchers from the Midwest region of the United States provided feedback through a 10-minute, video-recorded usability test session, during which they were instructed to explore the dashboard's tabs, identify challenges, and note features that worked well while thinking aloud. Following the session, participants completed a survey comprising System Usability Scale (SUS) questions, ease-of-navigation questions, and a Net Promoter Score (NPS) question.
    Results: Quantitative analysis of the survey responses revealed an average SUS score of 61.46 (SD 23.80; median 66.25) points, indicating a need for improvement in the user interface. The NPS was 8, with 4 of 12 (33%) respondents classified as promoters and 3 of 12 (25%) as detractors, indicating slightly positive satisfaction. When participants compared RecruitGPS with other recruitment and study management tools they had used, 8 of 12 (67%) rated it as better or much better; only 1 of 12 (8%) rated it as worse, but not much worse. Qualitative analysis of participants' interactions with the dashboard identified a confusing part of the dashboard that could be eliminated or made optional and provided valuable insight for the development of instructional videos and documentation. Participants liked the dashboard's data visualization capabilities, including intuitive graphs and trend tracking; its progress indicators, such as color-coded status indicators and comparison metrics; and its overall layout and design, which consolidated relevant data on a single page. Users also valued the accuracy and real-time updates of the data, especially the integration with external sources such as Research Electronic Data Capture (REDCap).
    Conclusions: RecruitGPS demonstrates significant potential to improve the efficiency of clinical trials by giving researchers real-time insight into participant recruitment and retention. This study offers recommendations for targeted refinements to enhance the user experience and maximize the dashboard's effectiveness, and it highlights navigation challenges that can be addressed through clear, focused instructional videos.
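For readers unfamiliar with the two survey metrics reported above, the sketch below shows their conventional scoring rules. The responses are invented; only the promoter/detractor split is chosen so the NPS arithmetic mirrors the reported 4 promoters and 3 detractors out of 12 (33% minus 25%, which rounds to 8).

```python
# Conventional scoring of the System Usability Scale (SUS) and the
# Net Promoter Score (NPS); all responses below are hypothetical.
import numpy as np

def sus_score(items):
    """items: one respondent's 10 answers on a 1-5 scale (odd items positively
    worded, even items negatively worded, per the standard SUS layout)."""
    items = np.asarray(items)
    raw = np.sum(items[0::2] - 1) + np.sum(5 - items[1::2])  # raw total, 0-40
    return raw * 2.5                                         # rescale to 0-100

def nps(ratings):
    """ratings: 0-10 'would you recommend' scores; promoters score 9-10,
    detractors 0-6, and NPS = %promoters - %detractors."""
    ratings = np.asarray(ratings)
    return round((np.mean(ratings >= 9) - np.mean(ratings <= 6)) * 100)

print(sus_score([4, 2, 4, 2, 3, 3, 4, 2, 4, 3]))    # one respondent's SUS: 67.5
print(nps([9, 9, 10, 9, 8, 8, 7, 7, 8, 6, 5, 4]))   # 4 promoters, 3 detractors -> 8
```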
  • Longitudinal Evaluation of the HABC Monitor Among Trauma Survivors
    (Dove Press, 2025-03-04) Alhader, Abdelfattah; Perkins, Anthony; Monahan, Patrick O.; Zarzaur, Ben L.; Barboi, Cristina; Boustani, Malaz A.; Medicine, School of Medicine
    Purpose: To examine the sensitivity to change of the Healthy Aging Brain Care Monitor (HABC-M) through a longitudinal analytical comparison with reference standards.
    Patients and methods: We used longitudinal data from 120 participants in a multicenter randomized controlled trial evaluating the effectiveness of the Trauma Medical Home (TMH). The reference standards were the depression and anxiety subdomains of the Hospital Anxiety and Depression Scale (HADS), the Patient-Reported Outcomes Measurement Information System Sleep Disturbance Short Form 4a (PROMIS-SF), and the Pain, Enjoyment of Life, and General Activity Scale (PEG). We assessed sensitivity to change using three longitudinal comparative analytical methods: the correlation of HABC-M scores with the reference standards' scores over time, the correlation of changes in HABC-M scores with changes in the reference standards' scores, and a longitudinal analysis comparing changes in the HABC-M across the reference standards' known change categories.
    Results: Throughout the six-month period, the HABC-M exhibited moderate to high correlations with the HADS (r = 0.66, p < 0.001 for the depression subdomain and r = 0.42, p < 0.001 for the anxiety subdomain), the PROMIS-SF (r = 0.57, p < 0.001), and the PEG (r = 0.47, p < 0.001). Changes in the HABC-M correlated significantly with changes in the reference standards at various time points. HABC-M scores differed significantly across the known change categories established by the four reference standards, with standardized response mean (SRM) values ranging from 1.08 to 1.44.
    Conclusion: The HABC-M is capable of monitoring the recovery of older trauma survivors.
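The standardized response mean (SRM) used above is simply the mean change score divided by the standard deviation of the change scores. Here is a minimal sketch on simulated numbers (not TMH trial data), alongside the kind of Pearson correlation the study reports:

```python
# SRM and Pearson correlation on simulated scores; no real patient data involved.
import numpy as np

rng = np.random.default_rng(0)
n = 120
habcm_t0 = rng.normal(20, 6, size=n)              # hypothetical baseline HABC-M
habcm_t6 = habcm_t0 - rng.normal(4, 3, size=n)    # hypothetical 6-month scores
hads = 0.7 * habcm_t0 + rng.normal(0, 4, size=n)  # noisy hypothetical reference standard

r = np.corrcoef(habcm_t0, hads)[0, 1]             # correlation with the reference
change = habcm_t6 - habcm_t0
srm = abs(change.mean()) / change.std(ddof=1)     # magnitude of the SRM
print(f"r = {r:.2f}, SRM = {srm:.2f}")
```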
  • Prognostic models for predicting insomnia treatment outcomes: A systematic review
    (Elsevier, 2024) Holler, Emma; Du, Yu; Barboi, Cristina; Owora, Arthur; Anesthesia, School of Medicine
    Objective: To identify and critically evaluate models predicting insomnia treatment response in adult populations.
    Methods: The PubMed, EMBASE, and PsycINFO databases were searched from January 2000 to January 2023 to identify studies reporting the development or validation of multivariable models predicting insomnia treatment outcomes in adults. Data were extracted according to the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) guidelines, and study quality was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
    Results: Eleven studies describing 53 prediction models were included and appraised. Treatment response was most frequently assessed using wake after sleep onset (n = 10; 18.9%), the insomnia severity index (n = 10; 18.9%), and sleep onset latency (n = 9; 17%). The Dysfunctional Beliefs About Sleep (DBAS) score was the most common predictor in final models (n = 33). R² values ranged from 0.06 to 0.80 for models predicting a continuous response, and the area under the curve (AUC) ranged from 0.73 to 0.87 for classification models. Only two models were internally validated, and none were externally validated. All models were rated as having a high risk of bias according to PROBAST, largely driven by the analysis domain.
    Conclusion: Prediction models may be a useful tool to assist clinicians in selecting the optimal treatment strategy for patients with insomnia, but no externally validated models currently exist. These results highlight an important gap in the literature and underscore the need for the development and validation of modern, methodologically rigorous models.
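Because the review identifies missing internal and external validation as the central gap, here is a minimal sketch of the simplest internal validation step: estimating a model's discrimination (AUC) by cross-validation rather than on the training data. The data are synthetic and the logistic model is a generic stand-in, not any of the 53 reviewed models.

```python
# Cross-validated AUC as a basic internal validation check; synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for baseline predictors and a binary treatment-response outcome
X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           random_state=42)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f} (SD {auc.std():.2f})")
```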