IU Indianapolis ScholarWorks

Browsing by Author "Melnick, Edward R."

Now showing 1 - 3 of 3
  • Six habits of highly successful health information technology: powerful strategies for design and implementation
    (Oxford Academic, 2019-10) Ray, Jessica M.; Ratwani, Raj M.; Sinsky, Christine A.; Frankel, Richard M.; Friedberg, Mark W.; Powsner, Seth M.; Rosenthal, David I.; Wachter, Robert M.; Melnick, Edward R.; Regenstrief Institute, Indiana University School of Medicine
    Healthcare information technologies are now a routine component of patient–clinician interactions. Originally designed for operational functions including billing and regulatory compliance, these systems have had unintended consequences including increased exam room documentation, divided attention during the visit, and use of scribes to alleviate documentation burdens. In an age in which technology is ubiquitous in everyday life, we must re-envision healthcare technology to support both clinical operations and, above all, the patient–clinician relationship. We present 6 habits for designing user-centered health technologies: (1) put patient care first, (2) assemble a team with the right skills, (3) relentlessly ask WHY, (4) keep it simple, (5) be Darwinian, and (6) don’t lose the forest for the trees. These habits should open dialogues between developers, implementers, end users, and stakeholders, as well as outline a path for better, more usable technology that puts patients and their clinicians back at the center of care.
  • Structure and Funding of Clinical Informatics Fellowships: A National Survey of Program Directors
    (Thieme, 2024) Patel, Tushar N.; Chaise, Aaron J.; Hanna, John J.; Patel, Kunal P.; Kochendorfer, Karl M.; Medford, Richard J.; Mize, Dara E.; Melnick, Edward R.; Hron, Jonathan D.; Youens, Kenneth; Pandita, Deepti; Leu, Michael G.; Ator, Gregory A.; Yu, Feliciano; Genes, Nicholas; Baker, Carrie K.; Bell, Douglas S.; Pevnick, Joshua M.; Conrad, Steven A.; Chandawarkar, Aarti R.; Rogers, Kendall M.; Kaelber, David C.; Singh, Ila R.; Levy, Bruce P.; Finnell, John T.; Kannry, Joseph; Pageler, Natalie M.; Mohan, Vishnu; Lehmann, Christoph U.; Emergency Medicine, School of Medicine
    Background: In 2011, the American Board of Medical Specialties established clinical informatics (CI) as a subspecialty in medicine, jointly administered by the American Board of Pathology and the American Board of Preventive Medicine. Subsequently, many institutions created CI fellowship training programs to meet the growing need for informaticists. Although many programs share similar features, there is considerable variation in program funding and administrative structures. Objectives: The aim of our study was to characterize CI fellowship program features, including governance structures, funding sources, and expenses. Methods: We created a cross-sectional online REDCap survey with 44 items requesting information on program administration, fellows, administrative support, funding sources, and expenses. We surveyed program directors of programs accredited by the Accreditation Council for Graduate Medical Education between 2014 and 2021. Results: We invited 54 program directors, of whom 41 (76%) completed the survey. The average administrative support received was $27,732/year. Most programs (85.4%) were accredited to have two or more fellows per year. Programs were administratively housed under six departments: Internal Medicine (17; 41.5%), Pediatrics (7; 17.1%), Pathology (6; 14.6%), Family Medicine (6; 14.6%), Emergency Medicine (4; 9.8%), and Anesthesiology (1; 2.4%). Funding sources for CI fellowship program directors included: hospital or health systems (28.3%), clinical departments (28.3%), graduate medical education office (13.2%), biomedical informatics department (9.4%), hospital information technology (9.4%), research and grants (7.5%), and other sources (3.8%) that included philanthropy and external entities. Conclusion: CI fellowships have been established in leading academic and community health care systems across the country. Due to their unique training requirements, these programs require significant resources for education, administration, and recruitment. There continues to be considerable heterogeneity in funding models between programs. Our survey findings reinforce the need for reformed federal funding models for informatics practice and training.
  • Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures
    (Oxford University Press, 2022) Rule, Adam; Melnick, Edward R.; Apathy, Nate C.; Health Policy and Management, Richard M. Fairbanks School of Public Health
    Objective: The aim of this article is to compare the aims, measures, methods, limitations, and scope of studies that employ vendor-derived and investigator-derived measures of electronic health record (EHR) use, and to assess measure consistency across studies. Materials and methods: We searched PubMed for articles published between July 2019 and December 2021 that employed measures of EHR use derived from EHR event logs. We coded the aims, measures, methods, limitations, and scope of each article and compared articles employing vendor-derived and investigator-derived measures. Results: One hundred and two articles met inclusion criteria; 40 employed vendor-derived measures, 61 employed investigator-derived measures, and 1 employed both. Studies employing vendor-derived measures were more likely than those employing investigator-derived measures to observe EHR use only in ambulatory settings (83% vs 48%, P = .002) and only by physicians or advanced practice providers (100% vs 54% of studies, P < .001). Studies employing vendor-derived measures were also more likely to measure durations of EHR use (P < .001 for 6 different activities), but definitions of measures such as time outside scheduled hours varied widely. Eight articles reported measure validation. The reported limitations of vendor-derived measures included measure transparency and availability for certain clinical settings and roles. Discussion: Vendor-derived measures are increasingly used to study EHR use, but only by certain clinical roles. Although poorly validated and variously defined, both vendor- and investigator-derived measures of EHR time are widely reported. Conclusion: The number of studies using event logs to observe EHR use continues to grow, but with inconsistent measure definitions and significant differences between studies that employ vendor-derived and investigator-derived measures.
  • Copyright © 2025 The Trustees of Indiana University