Natural language processing-driven state machines to extract social factors from unstructured clinical documentation
dc.contributor.author | Allen, Katie S. | |
dc.contributor.author | Hood, Dan R. | |
dc.contributor.author | Cummins, Jonathan | |
dc.contributor.author | Kasturi, Suranga | |
dc.contributor.author | Mendonca, Eneida A. | |
dc.contributor.author | Vest, Joshua R. | |
dc.contributor.department | Health Policy and Management, School of Public Health | |
dc.date.accessioned | 2023-11-29T11:56:11Z | |
dc.date.available | 2023-11-29T11:56:11Z | |
dc.date.issued | 2023-04-18 | |
dc.description.abstract | Objective: This study sought to create natural language processing algorithms to extract the presence of social factors from clinical text in 3 areas: (1) housing, (2) financial, and (3) unemployment. For generalizability, finalized models were validated on data from a separate health system. Materials and methods: Notes from 2 healthcare systems, representing a variety of note types, were utilized. To train models, the study utilized n-grams to identify keywords and implemented natural language processing (NLP) state machines across all note types. Manual review was conducted to determine performance. Sampling was based on a set percentage of notes, determined by the prevalence of social need. Models were optimized over multiple training and evaluation cycles. Performance metrics included positive predictive value (PPV), negative predictive value, sensitivity, and specificity. Results: PPV for housing rose from 0.71 to 0.95 over 3 training runs. PPV for financial rose from 0.83 to 0.89 over 2 training iterations, while PPV for unemployment rose from 0.78 to 0.88 over 3 iterations. The test data resulted in PPVs of 0.94, 0.97, and 0.95 for housing, financial, and unemployment, respectively. Final specificity scores were 0.95, 0.97, and 0.95 for housing, financial, and unemployment, respectively. Discussion: We developed 3 rule-based NLP algorithms, trained across health systems. While this is a less sophisticated approach, the algorithms demonstrated a high degree of generalizability, maintaining >0.85 across all predictive performance metrics. Conclusion: The rule-based NLP algorithms demonstrated consistent performance in identifying 3 social factors within clinical text. These methods may be part of a strategy to measure social factors within an institution. | |
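dc.description.note | The abstract describes keyword-driven, rule-based extraction evaluated with confusion-matrix metrics. The sketch below is illustrative only and is not the authors' published state-machine implementation; the keyword lists, function names, and example counts are hypothetical assumptions introduced here to show how keyword flagging and the reported metrics (PPV, negative predictive value, sensitivity, specificity) might be computed.

    import re

    # Hypothetical keyword lists; the published algorithms use curated n-grams per domain.
    KEYWORDS = {
        "housing": ["homeless", "eviction", "housing instability", "shelter"],
        "financial": ["financial strain", "cannot afford", "unable to pay"],
        "unemployment": ["unemployed", "lost job", "laid off"],
    }

    def flag_note(text, domain):
        """Return True if any keyword for the given social-factor domain appears in the note."""
        text = text.lower()
        return any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in KEYWORDS[domain])

    def predictive_metrics(tp, fp, tn, fn):
        """Confusion-matrix metrics of the kind reported in the abstract."""
        return {
            "ppv": tp / (tp + fp) if (tp + fp) else None,          # positive predictive value
            "npv": tn / (tn + fn) if (tn + fn) else None,          # negative predictive value
            "sensitivity": tp / (tp + fn) if (tp + fn) else None,
            "specificity": tn / (tn + fp) if (tn + fp) else None,
        }

    # Illustrative usage with made-up review counts (not study data).
    print(flag_note("Patient reports being homeless since March.", "housing"))  # True
    print(predictive_metrics(tp=95, fp=5, tn=95, fn=5)) | |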
dc.eprint.version | Final published version | |
dc.identifier.citation | Allen KS, Hood DR, Cummins J, Kasturi S, Mendonca EA, Vest JR. Natural language processing-driven state machines to extract social factors from unstructured clinical documentation. JAMIA Open. 2023;6(2):ooad024. Published 2023 Apr 18. doi:10.1093/jamiaopen/ooad024 | |
dc.identifier.uri | https://hdl.handle.net/1805/37204 | |
dc.language.iso | en_US | |
dc.publisher | Oxford University Press | |
dc.relation.isversionof | 10.1093/jamiaopen/ooad024 | |
dc.relation.journal | JAMIA Open | |
dc.rights | Attribution-NonCommercial 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | |
dc.source | PMC | |
dc.subject | Clinical data | |
dc.subject | Natural language processing | |
dc.subject | Social determinants of health | |
dc.subject | Social factors | |
dc.title | Natural language processing-driven state machines to extract social factors from unstructured clinical documentation | |
dc.type | Article |