ScholarWorksIndianapolis

Browsing by Author "Vreeman, Daniel J."

Now showing 1 - 10 of 14
    Consolidation of CDA-based documents from multiple sources : a modular approach
    (2016-09) Hosseini Asanjan, Seyed Masoud; Jones, Josette F.; Dixon, Brian E.; Vreeman, Daniel J.; Faiola, Anthony; Wu, Huanmei
    Physicians receive multiple CCDs for a single patient, encompassing various encounters and medical history recorded in different information systems. It is cumbersome for providers to explore multiple pages of CCDs to find specific data, which can be duplicated or even conflicting. This study describes the steps toward a system that integrates multiple CCDs into one consolidated document for viewing or processing patient-level data, and evaluates the system's impact on healthcare providers' perceived workload. A modular system was developed to consolidate and de-duplicate CDA-based documents, engineered to be scalable, extensible, and open source. The system's performance and output were evaluated first on synthesized data and later on real-world CCDs obtained from the INPC database. The accuracy of the consolidation system, along with gaps in identifying duplications, was assessed. Finally, the system's impact on healthcare providers' workload was evaluated using the NASA-TLX tool. All of the synthesized CCDs were successfully consolidated, and no data were lost. De-duplication accuracy was 100% on synthesized data, with a processing time of 1.12 seconds per document. For real-world CCDs, the system de-duplicated 99.1% of problems, 87.0% of allergies, and 91.7% of medications. Although minor inaccuracies remain, the system's accuracy is very promising. Following system improvements, the average processing time was reduced to 0.38 seconds per CCD. The NASA-TLX evaluation showed that the system significantly decreases healthcare providers' perceived workload. Information reconciliation was also observed to reduce medical errors, and the time required to review medical documents was significantly reduced after CCD consolidation.
Given increasing adoption and use of Health Information Exchange (HIE) to share data and information across the care continuum, duplication of information is inevitable. A novel system designed to support automated consolidation and de-duplication of information across clinical documents as they are exchanged shows promise. Future work is needed to expand the capabilities of the system and further test it using heterogeneous vocabularies across multiple HIE scenarios.
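The core consolidation step described in this abstract can be sketched minimally: entries from multiple CCDs are merged into one record, with duplicates detected by their standardized terminology codes. This is an illustrative sketch only, not the actual system; the `Entry` structure and the code-equality de-duplication rule are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    """One clinical entry (problem, allergy, or medication) from a CCD.
    `code` is a standardized terminology code (e.g. SNOMED CT or RxNorm),
    `display` a human-readable name, `source` the sending organization."""
    code: str
    display: str
    source: str

def consolidate(documents):
    """Merge entries from multiple CCDs into one de-duplicated list.
    Entries sharing the same terminology code are treated as duplicates;
    the first occurrence is kept and all contributing sources are recorded."""
    merged = {}  # code -> (entry, set of sources)
    for doc in documents:
        for entry in doc:
            if entry.code in merged:
                merged[entry.code][1].add(entry.source)
            else:
                merged[entry.code] = (entry, {entry.source})
    return [(e, sorted(srcs)) for e, srcs in merged.values()]

# Example: two CCDs for the same patient with one duplicated problem.
ccd_a = [Entry("38341003", "Hypertension", "Hospital A"),
         Entry("73211009", "Diabetes mellitus", "Hospital A")]
ccd_b = [Entry("38341003", "Hypertension", "Clinic B")]
result = consolidate([ccd_a, ccd_b])
```

A real system must also reconcile near-duplicates expressed with different codes or free text, which is where the reported accuracy gaps arise.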
    A corpus-based approach for automated LOINC mapping
    (Oxford University Press, 2014-01-01) Fidahussein, Mustafa; Vreeman, Daniel J.; Department of Medicine, IU School of Medicine
    Objective: To determine whether the knowledge contained in a rich corpus of local terms mapped to LOINC (Logical Observation Identifiers Names and Codes) could be leveraged to help map local terms from other institutions. Methods: We developed two models to test our hypothesis. The first, based on supervised machine learning, was created using Apache's OpenNLP Maxent, and the second, based on information retrieval, was created using Apache's Lucene. The models were validated by a random subsampling method that was repeated 20 times and that used 80/20 splits for training and testing, respectively. We also evaluated the performance of these models on all laboratory terms from three test institutions. Results: For the 20 iterations used for validation of our 80/20 splits, Maxent and Lucene ranked the correct LOINC code first for between 70.5% and 71.4% and between 63.7% and 65.0% of local terms, respectively. For all laboratory terms from the three test institutions, Maxent ranked the correct LOINC code first for between 73.5% and 84.6% (mean 78.9%) of local terms, whereas Lucene's performance was between 66.5% and 76.6% (mean 71.9%). Using a cut-off score of 0.46, Maxent always ranked the correct LOINC code first for over 57% of local terms. Conclusions: This study showed that a rich corpus of local terms mapped to LOINC contains collective knowledge that can help map terms from other institutions. Using freely available software tools, we developed a data-driven automated approach that operates on term descriptions from existing mappings in the corpus. Accurate and efficient automated mapping methods can help to accelerate adoption of vocabulary standards and promote widespread health information exchange.
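The information-retrieval flavor of this approach can be illustrated with a toy ranking function: candidate LOINC codes are scored by token overlap between a new local term's description and the descriptions of local terms already mapped to each code in the corpus. This is a stand-in for the Lucene and Maxent models used in the paper, not their implementation; the miniature corpus and the overlap scoring are assumptions for the example (the two LOINC codes are used only as labels).

```python
from collections import Counter

def tokens(text):
    """Lower-case whitespace tokens of a term description."""
    return Counter(text.lower().split())

def rank_candidates(local_term, corpus):
    """Rank LOINC codes from an existing mapping corpus for a new local term.
    `corpus` maps a LOINC code to descriptions of local terms already mapped
    to it. Scoring here is simple token overlap, a crude substitute for the
    Lucene/Maxent scoring described in the abstract."""
    query = tokens(local_term)
    scores = []
    for loinc, descriptions in corpus.items():
        doc = Counter()
        for d in descriptions:
            doc.update(tokens(d))
        overlap = sum(min(query[t], doc[t]) for t in query)
        scores.append((overlap, loinc))
    return [loinc for score, loinc in sorted(scores, reverse=True) if score > 0]

corpus = {
    "2345-7": ["glucose serum", "glucose level blood"],
    "718-7":  ["hemoglobin blood", "hgb whole blood"],
}
ranked = rank_candidates("serum glucose", corpus)
```

The cut-off score reported in the abstract plays the same role as the `score > 0` filter here: below a confidence threshold, no automated mapping is proposed.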
    Evaluating Congruence Between Laboratory LOINC Value Sets for Quality Measures, Public Health Reporting, and Mapping Common Tests
    (American Medical Informatics Association, 2013-11-16) Wu, Jianmin; Finnell, John T.; Vreeman, Daniel J.; Emergency Medicine, School of Medicine
    Laboratory test results are important for secondary data uses like quality measures and public health reporting, but mapping local laboratory codes to LOINC is a challenge. We evaluated the congruence between laboratory LOINC value sets for quality measures, public health reporting, and mapping common tests. We found a modest proportion of the LOINC codes from the Value Set Authority Center (VSAC) were present in the LOINC Top 2000 Results (16%) and the Reportable Condition Mapping Table (52%), and only 25 terms (3%) were shared with the Notifiable Condition Detector Top 129. More than a third of the VSAC Quality LOINCs were unique to that value set. A relatively small proportion of the VSAC Quality LOINCs were used by our hospital laboratories. Our results illustrate how mapping based only on test frequency might hinder these secondary uses of laboratory test results.
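The congruence comparison in this study reduces to set overlap: what fraction of one value set's codes appear in another. A minimal sketch, with hypothetical code labels standing in for the actual VSAC, Top 2000, and Reportable Condition content:

```python
def congruence(reference, comparison):
    """Fraction of codes in `reference` that also appear in `comparison`,
    mirroring the overlap percentages reported between LOINC value sets."""
    ref = set(reference)
    if not ref:
        return 0.0
    return len(ref & set(comparison)) / len(ref)

# Toy value sets; the letters are placeholders, not real LOINC codes.
vsac_quality = {"A", "B", "C", "D"}
top_2000 = {"B", "E", "F"}
share = congruence(vsac_quality, top_2000)  # 1 of 4 codes shared
```

Computed this way, the direction of comparison matters: 16% of VSAC codes appearing in the Top 2000 says nothing about what fraction of the Top 2000 appears in VSAC.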
    Impact of document consolidation on healthcare providers’ perceived workload and information reconciliation tasks: a mixed methods study
    (Oxford University Press, 2019-02) Hosseini, Masoud; Faiola, Anthony; Jones, Josette; Vreeman, Daniel J.; Wu, Huanmei; Dixon, Brian E.; Medicine, School of Medicine
    Background: Information reconciliation is a common yet complex and often time-consuming task performed by healthcare providers. While electronic health record systems can receive “outside information” about a patient in electronic documents, rarely does the computer automate reconciling information about a patient across all documents. Materials and Methods: Using a mixed methods design, we evaluated an information system designed to reconcile information across multiple electronic documents containing health records for a patient received from a health information exchange (HIE) network. Nine healthcare providers participated in scenario-based sessions in which they manually consolidated information across multiple documents. Accuracy of consolidation was measured along with the time spent completing 3 different reconciliation scenarios with and without support from the information system. Participants also attended an interview about their experience. Perceived workload was evaluated quantitatively using the NASA-TLX tool. Qualitative analysis focused on providers’ impression of the system and the challenges faced when reconciling information in practice. Results: While 5 providers made mistakes when trying to manually reconcile information across multiple documents, no participants made a mistake when the system supported their work. Overall perceived workload decreased significantly for scenarios supported by the system (37.2% in referrals, 18.4% in medications, and 31.5% in problems scenarios, P < 0.001). Information reconciliation time was reduced significantly when the system supported provider tasks (58.8% in referrals, 38.1% in medications, and 65.1% in problem scenarios). Conclusion: Automating retrieval and reconciliation of information across multiple electronic documents shows promise for reducing healthcare providers’ task complexity and workload.
    Learning from the Crowd in Terminology Mapping: The LOINC Experience
    (Oxford, 2015-05) Dixon, Brian E.; Hook, John; Vreeman, Daniel J.; Department of Epidemiology, Richard M. Fairbanks School of Public Health
    National policies in the United States require the use of standard terminology for data exchange between clinical information systems. However, most electronic health record systems continue to use local and idiosyncratic ways of representing clinical observations. To improve mappings between local terms and standard vocabularies, we sought to make existing mappings (wisdom) from health care organizations (the Crowd) available to individuals engaged in mapping processes. We developed new functionality to display counts of local terms and organizations that had previously mapped to a given Logical Observation Identifiers Names and Codes (LOINC) code. Further, we enabled users to view the details of those mappings, including local term names and the organizations that created the mappings. Users will also be able to contribute their local mappings to a shared mapping repository. In this article, we describe the new functionality and its availability to implementers who desire resources to make mapping more efficient and effective.
    Learning from the crowd while mapping to LOINC
    (Oxford University Press, 2015-11) Vreeman, Daniel J.; Hook, John; Dixon, Brian E.; Department of Medicine, IU School of Medicine
    OBJECTIVE: To describe the perspectives of Regenstrief LOINC Mapping Assistant (RELMA) users before and after the deployment of Community Mapping features, characterize the usage of these new features, and analyze the quality of mappings submitted to the community mapping repository. METHODS: We evaluated Logical Observation Identifiers Names and Codes (LOINC) community members' perceptions about new "wisdom of the crowd" information and how they used the new RELMA features. We conducted a pre-launch survey to capture users' perceptions of the proposed functionality of these new features; monitored how the new features and data available via those features were accessed; conducted a follow-up survey about the use of RELMA with the Community Mapping features; and analyzed community mappings using automated methods to detect potential errors. RESULTS: Despite general satisfaction with RELMA, nearly 80% of 155 respondents to our pre-launch survey indicated that having information on how often other users had mapped to a particular LOINC term would be helpful. During the study period, 200 participants logged into the RELMA Community Mapping features an average of 610 times per month and viewed the mapping detail pages a total of 6686 times. Fifty respondents (25%) completed our post-launch survey, and those who accessed the Community Mapping features unanimously indicated that they were useful. Overall, 95.3% of the submitted mappings passed our automated validation checks. CONCLUSION: When information about other institutions' mappings was made available, study participants who accessed it agreed that it was useful and informed their mapping choices. Our findings suggest that a crowd-sourced repository of mappings is valuable to users who are mapping local terms to LOINC terms.
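The "wisdom of the crowd" signal surfaced by the Community Mapping features amounts to an aggregation over a shared repository: for each LOINC code, how many local terms and how many distinct organizations have mapped to it. A minimal sketch, with a made-up repository format (the real RELMA data model is not shown in the abstract):

```python
from collections import defaultdict

def community_counts(mappings):
    """Summarize a shared mapping repository. For each LOINC code, return
    (number of local terms mapped to it, number of distinct organizations),
    the kind of counts displayed on RELMA's mapping detail pages."""
    summary = defaultdict(lambda: {"terms": 0, "orgs": set()})
    for org, local_term, loinc in mappings:
        summary[loinc]["terms"] += 1
        summary[loinc]["orgs"].add(org)
    return {loinc: (v["terms"], len(v["orgs"])) for loinc, v in summary.items()}

# Hypothetical submissions: (organization, local term name, LOINC code).
mappings = [
    ("Org1", "GLU", "2345-7"),
    ("Org2", "GLUCOSE SER", "2345-7"),
    ("Org2", "HGB", "718-7"),
]
counts = community_counts(mappings)
```

A mapper seeing that many independent organizations chose the same code gains evidence that the mapping is sound, which is the value users reported in the post-launch survey.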
    The LOINC RSNA radiology playbook - a unified terminology for radiology procedures
    (Oxford Academic, 2018-07-01) Vreeman, Daniel J.; Abhyankar, Swapna; Wang, Kenneth C.; Carr, Christopher; Collins, Beverly; Rubin, Daniel L.; Langlotz, Curtis P.; Medicine, School of Medicine
    Objective: This paper describes the unified LOINC/RSNA Radiology Playbook and the process by which it was produced. Methods: The Regenstrief Institute and the Radiological Society of North America (RSNA) developed a unification plan consisting of six objectives: 1) develop a unified model for radiology procedure names that represents the attributes with an extensible set of values, 2) transform existing LOINC procedure codes into the unified model representation, 3) create a mapping between all the attribute values used in the unified model as coded in LOINC (i.e., LOINC Parts) and their equivalent concepts in RadLex, 4) create a mapping between the existing procedure codes in the RadLex Core Playbook and the corresponding codes in LOINC, 5) develop a single integrated governance process for managing the unified terminology, and 6) publicly distribute the terminology artifacts. Results: We developed a unified model and instantiated it in a new LOINC release artifact that contains the LOINC codes and display name (i.e., LONG_COMMON_NAME) for each procedure, mappings between LOINC and the RSNA Playbook at the procedure code level, and connections between procedure terms and their attribute values that are expressed as LOINC Parts and RadLex IDs. We transformed all the existing LOINC content into the new model and publicly distributed it in standard releases. The organizations have also developed a joint governance process for ongoing maintenance of the terminology. Conclusions: The LOINC/RSNA Radiology Playbook provides a universal terminology standard for radiology orders and results.
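The unified model described here is essentially a procedure code linked to attribute-value pairs, where each value is carried both as a LOINC Part and as its equivalent RadLex concept. A minimal data-structure sketch; the attribute names follow the abstract's description, but the Part and RadLex identifiers below are illustrative placeholders, not the published mappings.

```python
from dataclasses import dataclass, field

@dataclass
class AttributeValue:
    """One attribute value in the unified model, linked both to a LOINC Part
    and to a RadLex concept (IDs here are hypothetical examples)."""
    attribute: str   # e.g. "Modality", "Anatomic focus"
    loinc_part: str  # LOINC Part code (placeholder value below)
    radlex_id: str   # RadLex ID (placeholder value below)
    display: str

@dataclass
class RadiologyProcedure:
    """A procedure code with its display name and attribute values, in the
    spirit of the unified Playbook release artifact."""
    loinc_code: str
    long_common_name: str
    attributes: list = field(default_factory=list)

proc = RadiologyProcedure(
    loinc_code="36643-5",               # shown only as an example code
    long_common_name="XR Chest 2 Views",
    attributes=[
        AttributeValue("Modality", "LP-EXAMPLE-1", "RID-EXAMPLE-1", "XR"),
        AttributeValue("Anatomic focus", "LP-EXAMPLE-2", "RID-EXAMPLE-2", "Chest"),
    ],
)
```

Because each attribute value is a shared, coded concept rather than free text, new procedures can be composed from the existing value set, which is what makes the model extensible.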
    Possibilities and implications of using the ICF and other vocabulary standards in electronic health records
    (Wiley, 2015-12) Vreeman, Daniel J.; Richoz, Christophe; Department of Medicine, IU School of Medicine
    There is now widespread recognition of the powerful potential of electronic health record (EHR) systems to improve the health-care delivery system. The benefits of EHRs grow even larger when the health data within their purview are seamlessly shared, aggregated and processed across different providers, settings and institutions. Yet, the plethora of idiosyncratic conventions for identifying the same clinical content in different information systems is a fundamental barrier to fully leveraging the potential of EHRs. Only by adopting vocabulary standards that provide the lingua franca across these local dialects can computers efficiently move, aggregate and use health data for decision support, outcomes management, quality reporting, research and many other purposes. In this regard, the International Classification of Functioning, Disability, and Health (ICF) is an important standard for physiotherapists because it provides a framework and standard language for describing health and health-related states. However, physiotherapists and other health-care professionals capture a wide range of data such as patient histories, clinical findings, tests and measurements, procedures, and so on, for which other vocabulary standards such as Logical Observation Identifiers Names and Codes and Systematized Nomenclature of Medicine Clinical Terms are crucial for interoperable communication between different electronic systems. In this paper, we describe how the ICF and other internationally accepted vocabulary standards could advance physiotherapy practice and research by enabling data sharing and reuse by EHRs. We highlight how these different vocabulary standards fit together within a comprehensive record system, and how EHRs can make use of them, with a particular focus on enhancing decision-making.
By incorporating the ICF and other internationally accepted vocabulary standards into our clinical information systems, physiotherapists will be able to leverage the potent capabilities of EHRs and contribute our unique clinical perspective to other health-care providers within the emerging electronic health information infrastructure.
    Recent Developments in Clinical Terminologies — SNOMED CT, LOINC, and RxNorm
    (Thieme Publishing, 2018-08) Bodenreider, Oliver; Cornet, Ronald; Vreeman, Daniel J.; Medicine, School of Medicine
    Objective: To discuss recent developments in clinical terminologies. SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) is the world's largest clinical terminology, developed by an international consortium. LOINC (Logical Observation Identifiers, Names, and Codes) is an international terminology widely used for clinical and laboratory observations. RxNorm is the standard drug terminology in the U.S. Methods and results: We present a brief review of the history, current state, and future development of SNOMED CT, LOINC and RxNorm. We also analyze their similarities and differences, and outline areas for greater interoperability among them. Conclusions: With different starting points, representation formalisms, funding sources, and evolutionary paths, SNOMED CT, LOINC, and RxNorm have evolved over the past few decades into three major clinical terminologies supporting key use cases in clinical practice. Despite their differences, partnerships have been created among their development teams to facilitate interoperability and minimize duplication of effort.
    Reconciling disparate information in continuity of care documents: Piloting a system to consolidate structured clinical documents
    (Elsevier, 2017-10) Hosseini, Masoud; Jones, Josette; Faiola, Anthony; Vreeman, Daniel J.; Wu, Huanmei; Dixon, Brian E.; Department of BioHealth Informatics, School of Informatics and Computing
    Background Due to the nature of information generation in health care, clinical documents contain duplicate and sometimes conflicting information. Recent implementation of Health Information Exchange (HIE) mechanisms in which clinical summary documents are exchanged among disparate health care organizations can proliferate duplicate and conflicting information. Materials and methods To reduce information overload, a system to automatically consolidate information across multiple clinical summary documents was developed for an HIE network. The system receives any number of Continuity of Care Documents (CCDs) and outputs a single, consolidated record. To test the system, a randomly sampled corpus of 522 CCDs representing 50 unique patients was extracted from a large HIE network. The automated methods were compared to manual consolidation of information for three key sections of the CCD: problems, allergies, and medications. Results Manual consolidation of 11,631 entries was completed in approximately 150 h. The same data were automatically consolidated in 3.3 min. The system successfully consolidated 99.1% of problems, 87.0% of allergies, and 91.7% of medications. Almost all of the inaccuracies were caused by issues involving the use of standardized terminologies within the documents to represent individual information entries. Conclusion This study represents a novel, tested tool for de-duplication and consolidation of CDA documents, which is a major step toward improving information access and the interoperability among information systems. While more work is necessary, automated systems like the one evaluated in this study will be necessary to meet the informatics needs of providers and health systems in the future.
  • Copyright © 2025 The Trustees of Indiana University