Browsing by Subject "Large language models"
Item: Asking Data Analysis Questions with PandasAI (2023-11-08)
Dolan, Levi

As easily accessible AI models have increased in visibility, one area of interest for those working with datasets programmatically is how AI might streamline common data analysis tasks. The recently released PandasAI library is a Python library that connects to an OpenAI model (known for ChatGPT) and allows users to ask natural-language questions about dataframes created in Pandas syntax. This lightning talk demonstrates how to start exploring this data analysis method using sample World Bank and World Happiness Report data. Potential limitations are also discussed.

Item: Zero-shot learning to extract assessment criteria and medical services from the preventive healthcare guidelines using large language models (Oxford University Press, 2024)
Luo, Xiao; Tahabi, Fattah Muhammad; Marc, Tressica; Haunert, Laura Ann; Storey, Susan; Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health

Objectives: The integration of preventive care guidelines with Electronic Health Record (EHR) systems, coupled with the generation of personalized preventive care recommendations, holds significant potential for improving healthcare outcomes. Our study investigates the feasibility of using Large Language Models (LLMs) to automate the extraction of assessment criteria and risk factors from the guidelines for future analysis against medical records in the EHR.

Materials and methods: We annotated the criteria, risk factors, and preventive medical services described in the adult guidelines published by the United States Preventive Services Task Force and evaluated 3 state-of-the-art LLMs on automatically extracting information in these categories from the guidelines.

Results: We included 24 guidelines in this study. The LLMs can automate the extraction of all criteria, risk factors, and medical services from 9 guidelines. All 3 LLMs perform well on extracting demographic criteria and risk factors. Some LLMs perform better than others on extracting social determinants of health, family history, and preventive counseling services.

Discussion: While LLMs demonstrate the capability to handle lengthy preventive care guidelines, several challenges persist, including constraints on the maximum length of input tokens and a tendency to generate content rather than adhere strictly to the original input. Moreover, the use of LLMs in real-world clinical settings requires careful ethical consideration: healthcare professionals must validate the extracted information to mitigate biases, ensure completeness, and maintain accuracy.

Conclusion: We developed a data structure to store the annotated preventive guidelines and made it publicly available. Employing state-of-the-art LLMs to extract preventive care criteria, risk factors, and preventive care services paves the way for the future integration of these guidelines into the EHR.
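The workflow described in the PandasAI talk above can be sketched as follows. This is a minimal sketch, not the talk's actual code: the commented `PandasAI`/`OpenAI` interface reflects the 2023-era library (later releases moved to a `SmartDataframe` API, so check your installed version), the sample data is hypothetical and merely in the spirit of the World Happiness Report, and the final line shows the kind of plain pandas query the model would typically generate for such a question.

```python
import pandas as pd

# Hypothetical sample data in the spirit of the World Happiness Report
df = pd.DataFrame({
    "country": ["Finland", "Denmark", "Iceland"],
    "happiness_score": [7.8, 7.6, 7.5],
})

# With PandasAI (requires an OpenAI API key; 2023-era interface):
#   from pandasai import PandasAI
#   from pandasai.llm.openai import OpenAI
#   pandas_ai = PandasAI(OpenAI(api_token="YOUR-KEY"))
#   pandas_ai.run(df, prompt="Which country has the highest happiness score?")

# The hand-written pandas equivalent such a question resolves to:
top = df.loc[df["happiness_score"].idxmax(), "country"]
print(top)  # Finland
```

The appeal of the library is that the natural-language prompt replaces the `idxmax`/`loc` incantation; the limitation the talk flags is that you should verify the generated answer against a query you trust.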
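"Zero-shot" in the second item means the model is given only an instruction and the guideline text, with no worked examples to imitate. A minimal, hypothetical sketch of such an extraction prompt is below; the paper's actual prompts, model choices, and annotation schema are not reproduced here, the three category names are taken from the abstract, and the guideline snippet is invented for illustration.

```python
# Categories named in the paper's abstract; the prompt wording is hypothetical
CATEGORIES = ["assessment criteria", "risk factors", "preventive medical services"]

def build_zero_shot_prompt(guideline_text: str) -> str:
    """Assemble an instruction-only (zero-shot) extraction prompt."""
    return (
        "Extract the " + ", ".join(CATEGORIES) + " from the preventive care "
        "guideline below. Respond as JSON with one list per category, and do "
        "not add information that is not stated in the guideline.\n\n"
        "Guideline:\n" + guideline_text
    )

# Invented guideline snippet for illustration only
snippet = "Screen adults aged 35 to 70 years who are overweight for prediabetes."
prompt = build_zero_shot_prompt(snippet)

# A chat-completion call with `prompt` as the user message would go here;
# per the paper's discussion, a clinician should validate whatever comes back.
print(prompt)
```

The instruction explicitly forbids adding unstated information because, as the abstract notes, a known failure mode is the model generating content rather than adhering strictly to the input.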