Browsing by Subject "Generative AI"
Now showing 1 - 3 of 3
Item: RAPID: DRL-AI: Investigating A Community-Inclusive AI Chatbot to Support Teachers in Developing Culturally Focused and Universally Designed STEM Activities (2024-09-14)
Authors: Price, Jeremy; Chakraborty, Sunandan
Abstract: Research to uncover and build out the initial feature set for a generative AI chatbot to support teachers in developing more culturally responsive and sustaining STEM lesson plans and activities.

Item: Supplemental Files for "Generative A.I. & Writing Anxiety: A Collective Case Study of ChatGPT Use by Graduate Students" (2025-01-09)
Authors: Piper, Gemmicka; Ameen, Mahasin; Lowe, M. Sara

Item: Zero-shot Learning with Minimum Instruction to Extract Social Determinants and Family History from Clinical Notes using GPT Model (IEEE, 2023)
Authors: Bhate, Neel Jitesh; Mittal, Ansh; He, Zhe; Luo, Xiao (Computer Science, Luddy School of Informatics, Computing, and Engineering)
Abstract: Demographics, social determinants of health, and family history documented in the unstructured text of electronic health records are increasingly being studied to understand how this information can be used alongside structured data to improve healthcare outcomes. Since the release of the GPT models, many studies have applied them to extract this information from narrative clinical notes. Unlike existing work, our research investigates zero-shot learning for extracting this information together, providing only minimal information to the GPT model. We use de-identified real-world clinical notes annotated for demographics, various social determinants, and family history information. Because the GPT model may return text that differs from the text in the original data, we explore two sets of evaluation metrics, traditional NER evaluation metrics and semantic similarity evaluation metrics, to fully characterize performance.
Our results show that the GPT-3.5 method achieved an average of 0.975 F1 on demographics extraction, 0.615 F1 on social determinants extraction, and 0.722 F1 on family history extraction. We believe these results can be further improved through model fine-tuning or few-shot learning. Through the case studies, we also identified limitations of the GPT models that need to be addressed in future research.
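To illustrate the "minimum instruction" idea the abstract describes, the sketch below assembles a zero-shot prompt that gives a GPT model only the target categories and the note text, with no labeled examples. This is a hypothetical illustration, not the paper's actual prompt; the category names and output format are assumptions.

```python
# Illustrative sketch only -- NOT the paper's actual instruction. It shows a
# minimum-instruction zero-shot prompt: just the target categories and the
# clinical note, with no few-shot examples.
CATEGORIES = ["demographics", "social determinants of health", "family history"]

def build_zero_shot_prompt(note_text: str) -> str:
    """Assemble a minimum-instruction prompt for a GPT chat model."""
    return (
        "Extract the following information from the clinical note below: "
        + ", ".join(CATEGORIES) + ".\n"
        "Return each extracted phrase on its own line as 'category: text'.\n\n"
        "Clinical note:\n" + note_text
    )

prompt = build_zero_shot_prompt(
    "58-year-old male, former smoker, lives alone. Father had diabetes."
)
print(prompt)
```

The resulting string would then be sent as a user message to a GPT model (e.g., via the OpenAI chat completions API), and the model's line-by-line output parsed back into category/span pairs.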
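The abstract's motivation for a second metric set is that a generative model may return a span that is correct in meaning but not character-identical to the annotated text. A minimal sketch of that contrast, assuming a simple string-similarity threshold stands in for the paper's semantic similarity metrics (the threshold and matching rule here are assumptions, not the paper's method):

```python
# Minimal sketch (not the paper's evaluation code): strict exact-match NER
# scoring vs. a relaxed similarity match, for predictions that paraphrase
# the gold span. difflib.SequenceMatcher stands in for a semantic metric.
from difflib import SequenceMatcher

def f1(p: float, r: float) -> float:
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def exact_match_f1(gold: list[str], pred: list[str]) -> float:
    """Traditional NER-style scoring: spans must match exactly."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    return f1(p, r)

def similar(a: str, b: str, threshold: float = 0.7) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def relaxed_match_f1(gold: list[str], pred: list[str]) -> float:
    """Relaxed scoring: a span counts if it is sufficiently similar."""
    tp = sum(any(similar(p_, g) for g in gold) for p_ in pred)
    covered = sum(any(similar(g, p_) for p_ in pred) for g in gold)
    p = tp / len(pred) if pred else 0.0
    r = covered / len(gold) if gold else 0.0
    return f1(p, r)

gold = ["lives alone", "former smoker"]
pred = ["patient lives alone", "former smoker"]  # GPT paraphrased one span
print(exact_match_f1(gold, pred))    # strict: only "former smoker" counts
print(relaxed_match_f1(gold, pred))  # relaxed: both spans count
```

Under strict matching the paraphrased span is scored as wrong, while the relaxed metric credits it; this is the gap the abstract's two evaluation tracks are designed to expose.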