Browsing by Author "Wilkins, Consuelo H."
Now showing 1 - 2 of 2
Item
EHR-based cohort assessment for multicenter RCTs: a fast and flexible model for identifying potential study sites (Oxford University Press, 2022)
Nelson, Sarah J.; Drury, Bethany; Hood, Daniel; Harper, Jeremy; Bernard, Tiffany; Weng, Chunhua; Kennedy, Nan; LaSalle, Bernie; Gouripeddi, Ramkiran; Wilkins, Consuelo H.
Biomedical Engineering and Informatics, Luddy School of Informatics, Computing, and Engineering

Objective: The Recruitment Innovation Center (RIC), partnering with the Trial Innovation Network and institutions in the National Institutes of Health-sponsored Clinical and Translational Science Awards (CTSA) Program, aimed to develop a service line to retrieve study population estimates from electronic health record (EHR) systems for use in selecting enrollment sites for multicenter clinical trials. Our goal was to create and field-test a low-burden, low-tech, high-yield method.
Materials and methods: In building this service line, the RIC strove to complement, rather than replace, CTSA hubs' existing cohort assessment tools. For each new EHR cohort request, we work with the investigator to develop a computable phenotype algorithm that targets the desired population. CTSA hubs run the phenotype query and return results using a standardized survey. We provide a comprehensive report to the investigator to assist in study site selection.
Results: From 2017 to 2020, the RIC developed and distributed 36 phenotype-dependent cohort requests on behalf of investigators. The average response rate to these requests was 73%.
Discussion: Achieving enrollment goals in a multicenter clinical trial requires that researchers identify study sites that will provide sufficient enrollment. The fast and flexible method the RIC has developed, with CTSA feedback, allows hubs to query their EHRs using a generalizable, vetted phenotype algorithm to produce reliable counts of potentially eligible study participants.
Conclusion: The RIC's EHR cohort assessment process for evaluating sites for multicenter trials has proven efficient and helpful. The model may be replicated for use by other programs.

Item
Leveraging artificial intelligence to summarize abstracts in lay language for increasing research accessibility and transparency (Oxford University Press, 2024)
Shyr, Cathy; Grout, Randall W.; Kennedy, Nan; Akdas, Yasemin; Tischbein, Maeve; Milford, Joshua; Tan, Jason; Quarles, Kaysi; Edwards, Terri L.; Novak, Laurie L.; White, Jules; Wilkins, Consuelo H.; Harris, Paul A.
Pediatrics, School of Medicine

Objective: Returning aggregate study results is an important ethical responsibility to promote trust and inform decision making, but the practice of providing results to a lay audience is not widely adopted. Barriers include the significant cost and time required to develop lay summaries and the scarce infrastructure necessary for returning them to the public. Our study aims to generate, evaluate, and implement ChatGPT 4 lay summaries of scientific abstracts on a national clinical study recruitment platform, ResearchMatch, to facilitate timely and cost-effective return of study results at scale.
Materials and methods: We engineered prompts to summarize abstracts at a literacy level accessible to the public, prioritizing succinctness, clarity, and practical relevance. Researchers and volunteers assessed ChatGPT-generated lay summaries across five dimensions: accuracy, relevance, accessibility, transparency, and harmfulness. We used precision analysis and adaptive random sampling to determine the optimal number of summaries for evaluation, ensuring high statistical precision.
Results: Based on researcher review, ChatGPT achieved 95.9% (95% CI, 92.1-97.9) accuracy and 96.2% (92.4-98.1) relevance across 192 summary sentences from 33 abstracts.
Of 34 volunteers, 85.3% (69.9-93.6) perceived ChatGPT-generated summaries as more accessible than the original abstract, and 73.5% (56.9-85.4) perceived them as more transparent. None of the summaries were deemed harmful. We expanded ResearchMatch's technical infrastructure to automatically generate and display lay summaries for over 750 published studies that resulted from the platform's recruitment mechanism.
Discussion and conclusion: Implementing AI-generated lay summaries on ResearchMatch demonstrates the potential of a scalable framework, generalizable to broader platforms, for enhancing research accessibility and transparency.
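The proportions above are reported with 95% confidence intervals. The abstract does not name the interval method, but the reported bounds are consistent with a Wilson score interval for a binomial proportion; the following minimal Python sketch (an assumption, not the authors' published code) shows how such an interval is computed and approximately reproduces the reported accuracy interval of 92.1-97.9% for 95.9% over 192 sentences.

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    p_hat: observed proportion of successes
    n: number of trials
    z: normal quantile (1.96 for a 95% interval)
    """
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Reported accuracy: 95.9% over 192 summary sentences.
lo, hi = wilson_ci(0.959, 192)
```

The Wilson interval is a common choice for proportions near 0% or 100% (as here), where the simpler normal-approximation interval can produce bounds outside [0, 1].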