IU Indianapolis ScholarWorks

Browsing by Subject "Disaster medicine"

Now showing 1-2 of 2
Accuracy of a Commercial Large Language Model (ChatGPT) to Perform Disaster Triage of Simulated Patients Using the Simple Triage and Rapid Treatment (START) Protocol: Gage Repeatability and Reproducibility Study
(JMIR, 2024-09-30) Franc, Jeffrey Micheal; Hertelendy, Attila Julius; Cheng, Lenard; Hata, Ryan; Verde, Manuela; Emergency Medicine, School of Medicine

Background: The release of ChatGPT (OpenAI) in November 2022 drastically reduced the barrier to using artificial intelligence by allowing a simple web-based text interface to a large language model (LLM). One potential use case is triaging patients at the site of a disaster using the Simple Triage and Rapid Treatment (START) protocol. However, LLMs are prone to several common errors, including hallucinations (also called confabulations) and prompt dependency.

Objective: This study addresses the research question "Can ChatGPT adequately triage simulated disaster patients using the START protocol?" by measuring three outcomes: repeatability, reproducibility, and accuracy.

Methods: Nine prompts were developed by 5 disaster medicine physicians. A Python script queried ChatGPT version 4 with each prompt combined with each of 391 validated simulated patient vignettes. Ten repetitions of each combination were performed, for a total of 35,190 simulated triages. A reference standard START triage code for each simulated case was assigned by 2 disaster medicine specialists (JMF and MV), with a third specialist (LC) adjudicating when the first 2 disagreed. Results were evaluated using a gage repeatability and reproducibility (gage R&R) study. Repeatability was defined as variation due to repeated use of the same prompt; reproducibility as variation due to the use of different prompts on the same patient vignette; and accuracy as agreement with the reference standard.

Results: Although 35,102 (99.7%) queries returned a valid START score, there was considerable variability. Repeatability (use of the same prompt repeatedly) accounted for 14% of the overall variation, and reproducibility (use of different prompts) for 4.1%. The accuracy of ChatGPT for START triage was 63.9%, with a 32.9% overtriage rate and a 3.1% undertriage rate. Accuracy varied by prompt, from a maximum of 71.8% to a minimum of 46.7%.

Conclusions: This study indicates that ChatGPT version 4 is insufficient to triage simulated disaster patients via the START protocol: it demonstrated suboptimal repeatability and reproducibility, and its overall triage accuracy was only 63.9%. Health care professionals are advised to exercise caution when using commercial LLMs for vital medical determinations, given that these tools may produce inaccurate results, colloquially referred to as hallucinations or confabulations. Artificial intelligence-guided tools should undergo rigorous statistical evaluation, using methods such as gage R&R, before implementation in clinical settings.
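
The Methods section fully determines the shape of the query loop: 9 prompts × 391 vignettes × 10 repetitions = 35,190 calls. A minimal sketch of such a loop is shown below, using the current OpenAI Python client. The prompt and vignette texts, the CSV layout, and the exact model identifier are illustrative assumptions; the study's actual script is not reproduced in the abstract.

```python
# Sketch of the prompt x vignette x repetition query loop described in the
# Methods section. Prompt/vignette texts and the CSV layout are invented
# placeholders; only the OpenAI client calls reflect a real API.
import csv
import itertools
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    # the study used 9 physician-written prompts (not given in the abstract)
    "Assign a START triage category (Green, Yellow, Red, or Black) to the "
    "following patient. Reply with the category only.",
]
vignettes = [
    # the study used 391 validated simulated patient vignettes
    "Adult, ambulatory, respirations 18, radial pulse present, obeys commands.",
]
REPETITIONS = 10

with open("triage_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt_id", "vignette_id", "rep", "response"])
    for (pi, prompt), (vi, vignette) in itertools.product(
        enumerate(prompts), enumerate(vignettes)
    ):
        for rep in range(REPETITIONS):
            resp = client.chat.completions.create(
                model="gpt-4",  # "ChatGPT version 4"; exact model ID is an assumption
                messages=[
                    {"role": "system", "content": prompt},
                    {"role": "user", "content": vignette},
                ],
            )
            writer.writerow([pi, vi, rep, resp.choices[0].message.content])
```

Each row of the resulting table is one simulated triage; the 99.7% valid-score figure suggests a downstream parsing step that maps free-text responses onto the four START codes and discards unparseable replies.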
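
The gage R&R analysis decomposes total variation into repeatability (within prompt-vignette cells), reproducibility (between prompts), and part-to-part (between vignettes) components. The sketch below implements the standard crossed ANOVA decomposition, assuming a balanced design and a numeric encoding of the START codes; the study's handling of the categorical outcome may well differ.

```python
# Crossed gage R&R via the ANOVA method: parts = vignettes, operators =
# prompts, replicates = repeated queries. Assumes a balanced design and a
# numeric "score" column (e.g., START codes mapped to 0-3).
import pandas as pd

def gage_rr(df: pd.DataFrame) -> dict:
    """df columns: 'part' (vignette ID), 'operator' (prompt ID), 'score'."""
    p = df["part"].nunique()
    o = df["operator"].nunique()
    r = len(df) // (p * o)  # replicates per cell (balanced design assumed)

    grand = df["score"].mean()
    part_means = df.groupby("part")["score"].mean()
    op_means = df.groupby("operator")["score"].mean()
    cell_means = df.groupby(["part", "operator"])["score"].mean()

    ss_total = ((df["score"] - grand) ** 2).sum()
    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_op = p * r * ((op_means - grand) ** 2).sum()
    ss_cells = r * ((cell_means - grand) ** 2).sum()
    ss_int = ss_cells - ss_part - ss_op   # prompt-by-vignette interaction
    ss_err = ss_total - ss_cells          # within-cell scatter

    ms_part = ss_part / (p - 1)
    ms_op = ss_op / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_err = ss_err / (p * o * (r - 1))

    # Variance components; negative estimates are truncated to zero.
    var_repeat = ms_err                          # repeatability
    var_int = max((ms_int - ms_err) / r, 0.0)
    var_op = max((ms_op - ms_int) / (p * r), 0.0)
    var_part = max((ms_part - ms_int) / (o * r), 0.0)
    total = var_repeat + var_int + var_op + var_part
    return {
        "repeatability_pct": 100 * var_repeat / total,
        "reproducibility_pct": 100 * (var_op + var_int) / total,
        "part_to_part_pct": 100 * var_part / total,
    }
```

In these terms, the reported 14% and 4.1% shares of overall variation correspond to the repeatability and reproducibility components, with the remainder attributable to differences between the patient vignettes themselves.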
The 2023 Model Core Content of Disaster Medicine
(Cambridge University Press, 2023) Wexler, Bryan J.; Schultz, Carl; Biddinger, Paul D.; Ciottone, Gregory; Cornelius, Angela; Fuller, Robert; Lefort, Roxanna; Milsten, Andrew; Phillips, James; Nemeth, Ira; Emergency Medicine, School of Medicine

Introduction: Disaster Medicine (DM) is the clinical specialty whose expertise includes the care and management of patients and populations outside conventional care protocols. While traditional standards of care assume the availability of adequate resources, DM practitioners operate in situations where resources are inadequate, necessitating a modification in practice. While prior academic efforts have succeeded in developing lists of core disaster competencies for emergency medicine residency programs, international fellowships, and affiliated health care providers, no official standardized curriculum or consensus has been published to date for DM fellowship programs based in the United States.

Study objective: The objective of this work is to define the core curriculum for DM physician fellowships in the United States by drawing consensus among existing DM fellowship directors.

Methods: A panel of DM experts was created from the members of the Council of Disaster Medicine Fellowship Directors, an independent group of DM fellowship directors in the United States that has met annually at the Scientific Assembly of the American College of Emergency Physicians (ACEP) for the last eight years, with meeting support from the Disaster Preparedness and Response Committee. Using a modified Delphi technique, the panel members revised and expanded the existing Society for Academic Emergency Medicine (SAEM) DM fellowship curriculum, and the final draft was ratified by an anonymous vote. Multiple publications were reviewed during the process to ensure that all potential topics were identified.

Results: This effort produced the foundational curriculum, the 2023 Model Core Content of Disaster Medicine.

Conclusion: Members of the Council of Disaster Medicine Fellowship Directors have developed the 2023 Model Core Content of Disaster Medicine in the United States. This living document defines the foundational curriculum for DM fellowships, providing the basis of a standardized experience, contributing to the development of a board-certified subspecialty, and informing fellowship directors and DM practitioners of content and topics that may appear on future certification examinations.