Browsing by Subject "Informatics"
Now showing 1 - 10 of 19
Item Analyzing the Clinical Outcomes of a Rapid Mass Conversion From Rosuvastatin to Atorvastatin in a VA Medical Center Outpatient Setting (SAGE, 2017-10) Naville-Cook, Chad; Rhea, Leroy; Triboletti, Mark; White, Christina; Pharmacology and Toxicology, School of Medicine
Background: Medication conversions occur frequently within the Veterans Health Administration. This manual process involves several pharmacists over an extended period of time. Macros can automate the process of converting a list of patients from one medication to a therapeutic alternative. Objectives: To develop a macro that would convert active rosuvastatin prescriptions to atorvastatin and to create an electronic dashboard to evaluate clinical outcomes. Methods: A conversion protocol was approved by the Pharmacy & Therapeutics Committee. A macro was developed using Microsoft Visual Basic. Outpatients with active prescriptions for rosuvastatin were reviewed and excluded if they had a documented allergy to atorvastatin or a significant drug-drug interaction. An electronic dashboard was created to compare safety and efficacy endpoints pre- and postconversion. Primary endpoints included low-density lipoprotein (LDL), creatine phosphokinase (CPK), aspartate transaminase (AST), alanine transaminase (ALT), and alkaline phosphatase. Secondary endpoints evaluated cardiovascular events, including the incidences of myocardial infarction, stroke, and stent placement. Results: The macro was used to convert 1,520 patients from rosuvastatin to atorvastatin over a period of 20 hours, saving $5,760 in pharmacist labor. There were no significant changes in LDL, AST, ALT, or secondary endpoints (P > .05). There was a significant increase in alkaline phosphatase (P = .0035). Conclusions: A rapid mass medication conversion from rosuvastatin to atorvastatin saved time and money and resulted in no clinically significant changes in safety or efficacy endpoints.
Macros and clinical dashboards can be applied to any Veterans Health Administration facility.

Item Evaluating and Extending the Informed Consent Ontology for Representing Permissions from the Clinical Domain (IOS Press, 2022) Umberfield, Elizabeth E.; Stansbury, Cooper; Ford, Kathleen; Jiang, Yun; Kardia, Sharon L.R.; Thomer, Andrea K.; Harris, Marcelline R.; Health Policy and Management, School of Public Health
The purpose of this study was to evaluate, revise, and extend the Informed Consent Ontology (ICO) for expressing clinical permissions, including reuse of residual clinical biospecimens and health data. This study followed a formative evaluation design and used a bottom-up modeling approach. Data were collected from the literature on US federal regulations and a study of clinical consent forms. Eleven federal regulations and fifteen permission-sentences from clinical consent forms were iteratively modeled to identify entities and their relationships, followed by community reflection and negotiation based on a series of predetermined evaluation questions. Fifty-two ICO classes and twelve object properties were necessary for modeling, demonstrating the appropriateness of extending ICO for the clinical domain. Twenty-six additional classes were imported into ICO from other ontologies, and twelve new classes were recommended for development. This work addresses a critical gap in formally representing clinical permissions, including reuse of residual clinical biospecimens and health data. It makes missing content available to the OBO Foundry, enabling use alongside other widely adopted biomedical ontologies.
ICO serves as a machine-interpretable and interoperable tool for responsible reuse of residual clinical biospecimens and health data at scale.

Item Fostering Governance and Information Partnerships for Chronic Disease Surveillance: The Multi-State EHR-Based Network for Disease Surveillance (Wolters Kluwer, 2024) Kraus, Emily McCormick; Saintus, Lina; Martinez, Amanda K.; Brand, Bill; Begley, Elin; Merritt, Robert K.; Hamilton, Andrew; Rubin, Rick; Sullivan, Amy; Karras, Bryant Thomas; Grannis, Shaun; Brooks, Ian M.; Mui, Joyce Y.; Carton, Thomas W.; Hohman, Katherine H.; Klompas, Michael; Dixon, Brian E.; Medicine, School of Medicine
Context: Electronic health records (EHRs) are an emerging chronic disease surveillance data source, but facilitating this data sharing is complex. Program: Using the experience of the Multi-State EHR-Based Network for Disease Surveillance (MENDS), this article describes implementation of a governance framework that aligns technical, statutory, and organizational requirements to facilitate EHR data sharing for chronic disease surveillance. Implementation: MENDS governance was cocreated with data contributors and health departments representing Texas; New Orleans, Louisiana; Chicago; Washington; and Indiana through engagement from 2020 to 2022. MENDS convened a governance body, executed data-sharing agreements, and developed a master governance document to codify policies and procedures. Results: The MENDS governance committee meets regularly to develop policies and procedures on data use and access, timeliness and quality, validation, representativeness, analytics, security, small-cell suppression, software implementation and maintenance, and privacy. Resultant policies are codified in a master governance document. Discussion: The MENDS governance approach resulted in a transparent governance framework that cultivates trust across the network.
MENDS's experience highlights the time and resources needed by EHR-based public health surveillance networks to establish effective governance.

Item Frequency and Correlates of Pediatric High-Flow Nasal Cannula Use for Bronchiolitis, Asthma, and Pneumonia (Daedalus Enterprises, 2022) Rogerson, Colin M.; Carroll, Aaron E.; Tu, Wanzhu; He, Tian; Schleyer, Titus K.; Rowan, Courtney M.; Owora, Arthur H.; Mendonca, Eneida A.; Pediatrics, School of Medicine
Background: Heated humidified high-flow nasal cannula (HFNC) is a respiratory support device historically used in pediatrics for infants with bronchiolitis. No large-scale analysis has determined the current frequency or demographic distribution of HFNC use in children. The objective of this study was to determine the frequency and correlates of HFNC use in children presenting to the hospital for asthma, bronchiolitis, or pneumonia. Methods: This longitudinal observational study was based on electronic health record data from a large regional health information exchange, the Indiana Network for Patient Care (INPC). Subjects were age 0-18 y with recorded hospital encounters at an INPC hospital between 2010-2019 with International Classification of Diseases codes for bronchiolitis, asthma, or pneumonia. Annual proportions of HFNC use among all hospital encounters were assessed using generalized additive models. Log-binomial regression models were used to identify correlates of incident HFNC use and determine risk ratios of specific subjects receiving HFNC. Results: The study sample included 242,381 unique subjects with 412,712 hospital encounters between 2010-2019. The 10-y period prevalence of HFNC use was 2.54% (6,155/242,381), involving 7,974 encounters. Hospital encounters utilizing HFNC increased by 400%, from 326 in 2010 to 1,310 in 2019. This increase was evenly distributed across all three diagnostic categories (bronchiolitis, asthma, and pneumonia).
Sex, race, age, and ethnicity all significantly influenced the risk of HFNC use. Over the 10-y period, the percentage of all hospital encounters using HFNC increased from 1.11% in 2010 to 3.15% in 2018. Subjects with multiple diagnoses had a significantly higher risk of receiving HFNC. Conclusions: The use of HFNC in children presenting to the hospital with common respiratory diseases has increased substantially over the past decade and is no longer confined to treating infants with bronchiolitis. Demographic and diagnostic factors significantly influenced the frequency of HFNC use.

Item Genomics, bio specimens, and other biological data: Current status and future directions (Wiley, 2018-10) Rosenstein, Barry S.; Rao, Arvind; Moran, Jean M.; Spratt, Daniel E.; Mendonca, Marc S.; Al-Lazikani, Bissan; Mayo, Charles S.; Speers, Corey; Radiation Oncology, School of Medicine

Item In Silico Target Prediction by Training Naive Bayesian Models on Chemogenomics Databases (2006-06-29) Nidhi; Merchant, Mahesh
The completion of the Human Genome Project is seen as a gateway to the discovery of novel drug targets (Jacoby, Schuffenhauer, & Floersheim, 2003). How much of this information is actually translated into knowledge, e.g., the discovery of novel drug targets, is yet to be seen. The traditional route of drug discovery has been from target to compound. Conventional research techniques focus on studying animal and cellular models, which is followed by the development of a chemical concept. Modern approaches that have evolved as a result of progress in molecular biology and genomics start out with molecular targets, which usually originate from the discovery of a new gene. Subsequent target validation to establish suitability as a drug target is followed by high-throughput screening assays in order to identify new active chemical entities (Hofbauer, 1997).
In contrast, chemogenomics takes the opposite approach to drug discovery (Jacoby, Schuffenhauer, & Floersheim, 2003). It puts chemical entities to the forefront as probes to study their effects on biological targets and then links these effects to the genetic pathways of those targets (Figure 1a). The goal of chemogenomics is to rapidly identify new drug molecules and drug targets by establishing chemical and biological connections. Just as classical genetic experiments are classified into forward and reverse, experimental chemogenomics methods can be distinguished as forward or reverse depending on the direction of the investigative process, i.e., from phenotype to target or from target to phenotype, respectively (Jacoby, Schuffenhauer, & Floersheim, 2003). The identification and characterization of protein targets are critical bottlenecks in forward chemogenomics experiments. Currently, methods such as affinity matrix purification (Taunton, Hassig, & Schreiber, 1996) and phage display (Sche, McKenzie, White, & Austin, 1999) are used to determine targets for compounds. None of the current techniques used for target identification after the initial screening are efficient. In silico methods can provide complementary and efficient ways to predict targets by using chemogenomics databases to obtain information about the chemical structures and target activities of compounds. Annotated chemogenomics databases integrate the chemical and biological domains and can provide a powerful tool to predict and validate new targets for compounds with unknown effects (Figure 1b). A chemogenomics database contains both the chemical properties and the biological activities associated with a compound. The MDL Drug Data Report (MDDR) (Molecular Design Ltd., San Leandro, California) is one of the best-known and most widely used databases containing chemical structures and corresponding biological activities of drug-like compounds.
The relevance and quality of the information that can be derived from these databases depend on their annotation schemes as well as the methods used for mining the data. In recent years, chemists and biologists have used such databases to carry out similarity searches and look up biological activities for compounds that are similar to the probe molecules for a given assay. With the emergence of new chemogenomics databases that follow a well-structured and consistent annotation scheme, new automated target prediction methods are possible that can give insights into the biological world based on structural similarity between compounds. The usefulness of such databases lies not only in predicting targets, but also in establishing the genetic connections of the targets discovered as a consequence of the prediction. The ability to perform automated target prediction relies heavily on a synergy of very recent technologies, which includes: i) highly structured and consistently annotated chemogenomics databases, many of which have surfaced very recently, such as WOMBAT (Sunset Molecular Discovery LLC, Santa Fe, New Mexico), KinaseChemBioBase (Jubilant Biosys Ltd., Bangalore, India), and StARLITe (Inpharmatica Ltd., London, UK); ii) chemical descriptors (Xue & Bajorath, 2000) that capture the structure-activity relationships of the molecules, as well as computational techniques (Kitchen, Stahura, & Bajorath, 2004) specifically tailored to extract information from these descriptors; and iii) data-pipelining environments that are fast, integrate multiple computational steps, and support large datasets.
A combination of all these technologies may be employed to bridge the gap between the chemical and biological domains, which remains a challenge in the pharmaceutical industry.

Item Informatics education for translational research teams: An unrealized opportunity to strengthen the national research infrastructure (Cambridge University Press, 2022-10-28) Mendonca, Eneida A.; Richesson, Rachel L.; Hochheiser, Harry; Cooper, Dan M.; Bruck, Meg N.; Berner, Eta S.; Pediatrics, School of Medicine
Objective: To identify the informatics educational needs of clinical and translational research professionals whose primary focus is not informatics. Introduction: Informatics and data science skills are essential for the full spectrum of translational research, and an increased understanding of informatics issues on the part of translational researchers can alleviate the demand for informaticians and enable more productive collaborations when informaticians are involved. Identifying the level of interest in different topics among various types of translational researchers will help set priorities for the development and dissemination of informatics education. Methods: We surveyed clinical and translational science researchers in Clinical and Translational Science Award (CTSA) programs about their educational needs and preferences. Results: Researchers from 23 of the 62 CTSA hubs responded to the survey. Across roles and topics, 67% of respondents expressed interest in learning about informatics topics. There was high interest in all 30 topics included in the survey, with some variation in interest depending on the role of the respondents. Discussion: Our data support the need to advance training in clinical and biomedical informatics. As the complexity and use of information technology and data science in research studies grow, informaticians will continue to be a limited resource for research collaboration, education, and training.
An increased understanding of informatics issues across translational research teams can alleviate this burden and allow for more productive collaborations. To inform a roadmap for informatics education for research professionals, we suggest strategies for using the results of this needs assessment to develop future informatics education.

Item Informatics Interventions for Maternal Morbidity: A Scoping Review (National Library of Medicine, 2023-06-23) Inderstrodt, Jill; Stumpff, Julia C.; Smollen, Rebecca; Sridhar, Shreya; El-Azab, Sarah A.; Ojo, Opeyemi; Haggstrom, David A.
Individuals of childbearing age in the U.S. currently enter pregnancy less healthy than previous generations, putting them at risk for maternal morbidities such as preeclampsia, gestational diabetes mellitus (GDM), and postpartum mental health conditions. These conditions leave mothers at risk for long-term health complications that, when left unscreened and unmonitored, can be deadly. One approach to ensuring long-term health for mothers is designing informatics interventions that: (a) prevent maternal morbidities, (b) treat perinatal conditions, and (c) allow for continuity of treatment. This scoping review examines the extent, range, and nature of informatics interventions that have been tested on maternal morbidities that can have long-term health effects on mothers. It uses MEDLINE, EMBASE, and the Cochrane Library to chart demographic, population, and intervention data regarding informatics and maternal morbidity. Studies (n=79) were extracted for analysis that satisfied the following conditions: (a) tested a medical or clinical informatics intervention; (b) tested on adults with a uterus or doctors who treat people with a uterus; and (c) tested on the following conditions: preeclampsia, GDM, preterm birth, severe maternal morbidity as defined by the CDC, and perinatal mental health conditions.
Of the 79 studies extracted, 38% (n=30) tested technologies for GDM, 38% (n=30) tested technologies for postpartum depression, and 15.2% (n=12) tested technologies for preeclampsia. In terms of technologies, 35.4% (n=28) tested a smartphone or tablet app, 29.1% (n=23) tested a telehealth intervention, and 15.2% (n=12) tested remote monitoring technologies (blood pressure, blood glucose). Most (86.1%; n=68) of the technologies were tested for patient physical or mental health outcomes. This scoping review reveals that most tested informatics interventions are aimed at three conditions (GDM, preeclampsia, mental health) and that there may be opportunities to treat other common causes of maternal mortality (e.g., postpartum hemorrhage) using proven technologies such as mobile applications.

Item Measuring Practicing Clinicians' Information Literacy: An Exploratory Analysis in the Context of Panel Management (Thieme, 2017-02-15) Dixon, Brian E.; Barboza, Katherine; Jensen, Ashley E.; Bennett, Katelyn J.; Sherman, Scott E.; Schwartz, Mark D.; Epidemiology, School of Public Health
BACKGROUND: As healthcare moves towards technology-driven population health management, clinicians must adopt complex digital platforms to access health information and document care. OBJECTIVES: This study explored information literacy, a set of skills required to effectively navigate population health information systems, among primary care providers in one Veterans Affairs (VA) medical center. METHODS: Information literacy was assessed during an 8-month randomized trial that tested a population health (panel) management intervention. Providers were asked about their use of and comfort with two VA digital tools for panel management at baseline, 16 weeks, and post-intervention. An 8-item scale (range 0-40) was used to measure information literacy (Cronbach's α=0.84). Scores between study arms and provider types were compared using paired t-tests and ANOVAs.
Associations between self-reported digital tool use and information literacy were measured via Pearson's correlations. RESULTS: Providers showed moderate levels of information literacy (M=27.4, SD 6.5). There were no significant differences in mean information literacy between physicians (M=26.4, SD 6.7) and nurses (M=30.5, SD 5.2; p=0.57 for difference), or between intervention (M=28.4, SD 6.5) and control groups (M=25.1, SD 6.2; p=0.12 for difference). Information literacy was correlated with higher rates of self-reported information system usage (r=0.547, p=0.001). Clinicians identified data access, accuracy, and interpretability as potential information literacy barriers. CONCLUSIONS: While exploratory in nature, cautioning generalizability, the study suggests that measuring and improving clinicians' information literacy may play a significant role in the implementation and use of digital information tools, as these tools are rapidly being deployed to enhance communication among care teams, improve health care outcomes, and reduce overall costs.

Item Natural Language Processing and Extracting Information From Medical Reports (2006-06-29) Pfeiffer II, Richard D.; McDaniel, Anna M.
The purpose of this study is to examine the current use of natural language processing for extracting meaningful data from free text in medical reports. Natural language processing has been used to process information from various genres. To evaluate its use, a synthesized review of primary research papers specific to natural language processing and extracting data from medical reports was conducted. A three-phased approach is used to describe the process of gathering the final metrics for validating the use of natural language processing. The main purpose of any NLP system is to extract or understand human language and to process it into meaning for a specified area of interest or end user.
There are three types of approaches: symbolic, statistical, and connectionist. There are identified problems with natural language processing and the different approaches. Problems noted about natural language processing in the research are acquisition, coverage, robustness, and extensibility. Metrics were gathered from primary research papers to evaluate the success of the natural language processors. The recall average of the four papers was 85%. The precision average of five papers was 87.7%. The accuracy average was 97%. The sensitivity average was 84%, while specificity was 97.4%. Based on the results of the primary research, there was no definitive way to validate one NLP approach as an industry standard. From the research reviewed, it is clear that there has been at least limited success with information extraction from free text using natural language processing. It is important to understand the continuum of data, information, and knowledge in previous and future research on natural language processing. In the industry of health informatics, this is a technology necessary for improving healthcare and research.
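The metrics reported for these natural language processors (recall, precision, accuracy, sensitivity, specificity) all derive from a binary confusion matrix comparing extractor output against a gold-standard annotation. As a minimal sketch of how such scores are computed, here is a hypothetical illustration in Python; the counts below are invented for demonstration and are not data from the reviewed papers:

```python
# Hypothetical illustration: computing the five evaluation metrics reported
# above from binary confusion-matrix counts (true/false positives and
# true/false negatives), as commonly done when scoring an NLP information
# extractor against manually annotated medical reports.

def extraction_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return standard information-extraction metrics from confusion counts."""
    recall = tp / (tp + fn)                     # a.k.a. sensitivity
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    specificity = tn / (tn + fp)
    return {
        "recall": recall,
        "precision": precision,
        "accuracy": accuracy,
        "sensitivity": recall,                  # identical to recall by definition
        "specificity": specificity,
    }

# Invented example counts (not taken from the reviewed papers):
m = extraction_metrics(tp=85, fp=12, fn=15, tn=888)
print({k: round(v, 3) for k, v in m.items()})
```

Note that recall and sensitivity are the same quantity under different names, which is why the averages reported in such reviews for the two metrics can differ only because they were pooled from different subsets of papers.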