A large language model for electronic health records

dc.contributor.author: Yang, Xi
dc.contributor.author: Chen, Aokun
dc.contributor.author: PourNejatian, Nima
dc.contributor.author: Shin, Hoo Chang
dc.contributor.author: Smith, Kaleb E.
dc.contributor.author: Parisien, Christopher
dc.contributor.author: Compas, Colin
dc.contributor.author: Martin, Cheryl
dc.contributor.author: Costa, Anthony B.
dc.contributor.author: Flores, Mona G.
dc.contributor.author: Zhang, Ying
dc.contributor.author: Magoc, Tanja
dc.contributor.author: Harle, Christopher A.
dc.contributor.author: Lipori, Gloria
dc.contributor.author: Mitchell, Duane A.
dc.contributor.author: Hogan, William R.
dc.contributor.author: Shenkman, Elizabeth A.
dc.contributor.author: Bian, Jiang
dc.contributor.author: Wu, Yonghui
dc.contributor.department: Health Policy and Management, Richard M. Fairbanks School of Public Health
dc.date.accessioned: 2025-03-11T12:49:58Z
dc.date.available: 2025-03-11T12:49:58Z
dc.date.issued: 2022-12-26
dc.description.abstract: There is increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems that use clinical narratives. However, few clinical language models exist, and the largest trained in the clinical domain is comparatively small at 110 million parameters (versus billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems use unstructured EHRs. In this study, we develop from scratch a large clinical language model, GatorTron, using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on five clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data benefit these NLP tasks. GatorTron scales the clinical language model from 110 million to 8.9 billion parameters and improves all five clinical NLP tasks (e.g., 9.6% and 9.5% accuracy improvements on NLI and MQA, respectively), gains that can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og
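The released checkpoints can be plugged into standard NLP tooling. As a minimal sketch, assuming a Hugging Face-compatible export of a GatorTron checkpoint (MODEL_ID below is a hypothetical placeholder; the paper itself distributes the models through the NVIDIA NGC catalog linked above), the following Python encodes a clinical sentence into a single vector that a downstream task head could consume:

    # Sketch: sentence embedding with a GatorTron-style clinical encoder.
    # MODEL_ID is a hypothetical placeholder, not an identifier from the paper;
    # point it at your local copy or hub export of the checkpoint.
    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL_ID = "path/to/gatortron-checkpoint"  # hypothetical

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    model.eval()

    note = "Patient denies chest pain but reports shortness of breath on exertion."
    inputs = tokenizer(note, return_tensors="pt", truncation=True, max_length=512)

    with torch.no_grad():
        outputs = model(**inputs)

    # Mean-pool the final hidden states over non-padding tokens to get one
    # sentence vector for a downstream clinical NLP head.
    mask = inputs["attention_mask"].unsqueeze(-1)
    sentence_vec = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
    print(sentence_vec.shape)  # (1, hidden_size)

Mean pooling over the attention mask is one common way to collapse token-level encoder states into a sentence representation; the paper's own task-specific heads (concept extraction, relation extraction, NLI, MQA) would instead sit directly on the encoder's hidden states.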
dc.eprint.version: Final published version
dc.identifier.citation: Yang X, Chen A, PourNejatian N, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194. Published 2022 Dec 26. doi:10.1038/s41746-022-00742-2
dc.identifier.uri: https://hdl.handle.net/1805/46310
dc.language.iso: en_US
dc.publisher: Springer Nature
dc.relation.isversionof: 10.1038/s41746-022-00742-2
dc.relation.journal: npj Digital Medicine
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: PMC
dc.subject: Medical research
dc.subject: Health care
dc.subject: Artificial intelligence (AI) systems
dc.subject: Electronic health records (EHRs)
dc.subject: Natural language processing (NLP)
dc.title: A large language model for electronic health records
dc.type: Article
Files
Original bundle
Name: Yang2022Large-CCBY.pdf
Size: 1.6 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.04 KB
Format: Item-specific license agreed upon to submission