Browsing by Subject "clinical reasoning"
Now showing 1 - 3 of 3
Item: THE EFFECT OF CURRICULAR SEQUENCING OF HUMAN PATIENT SIMULATION LEARNING EXPERIENCES ON STUDENTS' SELF-PERCEPTIONS OF CLINICAL REASONING ABILITIES (2011-11-18)
Jensen, Rebecca Sue; Ebright, Patricia; Pesut, Daniel J.; Fisher, Mary L., Ph.D.; Welch, Janet L.

It is unknown whether the timing of human patient simulation (HPS) within a semester, or demographic (age, gender, and ethnicity) and situational (type of program, previous baccalaureate degree, and experience in healthcare) variables, affect students' perceptions of their clinical reasoning abilities. Nursing students were divided into two groups: mid-semester and end-of-semester HPS experiences. Students' perceptions of their clinical reasoning abilities were measured at Baseline (beginning of semester) and at Time 2 (end of semester), along with the demographic and situational variables. The dependent variable was the Difference score, computed by subtracting the Baseline score from the Time 2 score to capture change in students' perceptions of clinical reasoning. Students who were older, had previous healthcare experience, or were enrolled in the AS program had higher Difference scores, indicating larger changes in their perceptions of clinical reasoning abilities from Baseline to Time 2. Timing of HPS, mid- or end-of-semester, had no effect on Difference scores, and thus no effect on students' perceptions of clinical reasoning abilities.
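The difference-score design described above can be illustrated with a minimal sketch (the data, column names, and the group comparison are assumptions for illustration; the study's actual instrument, sample, and statistical models are not reproduced here):

    import pandas as pd
    from scipy import stats

    # Hypothetical perception scores; column names are assumptions.
    df = pd.DataFrame({
        "hps_timing": ["mid", "mid", "mid", "end", "end", "end"],
        "baseline":   [72, 65, 80, 70, 68, 77],  # beginning of semester
        "time2":      [85, 70, 88, 83, 71, 86],  # end of semester
    })

    # Difference score: Baseline subtracted from Time 2, as in the study design.
    df["difference"] = df["time2"] - df["baseline"]

    # Compare Difference scores between mid- and end-of-semester HPS groups.
    mid = df.loc[df["hps_timing"] == "mid", "difference"]
    end = df.loc[df["hps_timing"] == "end", "difference"]
    t, p = stats.ttest_ind(mid, end)
    print(f"t = {t:.2f}, p = {p:.3f}")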
Item: Implementing a Clinical Reasoning Curriculum for Pediatrics Residents: A Pilot Study (2024-04-26)
Perry, Kelsey; Warren, Kyle; Trivedi, Nishant; Wilson, Michael

Clinical reasoning is a complex entity that has been described as a cognitive and non-cognitive process by which a health care professional consciously and unconsciously interacts with the patient and environment to collect and interpret patient data, weigh the benefits and risks of actions, and understand patient preferences in order to determine a working diagnostic and therapeutic management plan whose purpose is to improve the patient's well-being [1]. This concept has been a focus of medical education efforts in hopes of fostering master clinicians and ultimately improving patient outcomes by reducing diagnostic errors, which have an estimated incidence of 10 to 20% [2-4]. Clinical reasoning curricula, though more prevalent in undergraduate medical education, have begun to emerge in graduate medical education programs, with promising results demonstrating improvement on validated clinical reasoning metrics [5-8]. However, almost all of this work has been in the field of internal medicine. Given the comparable breadth of disease states and changing physiology that pediatricians face, and a greater than 60% misdiagnosis rate for certain pediatric conditions, similar curriculum development efforts are needed in pediatrics residencies to improve the diagnostic reasoning ability of future pediatricians [9].

A pilot study examining the early implementation of a clinical reasoning curriculum for pediatrics residents at IU was conducted in summer 2023. The overall objective of this developing curriculum is to improve pediatrics residents' clinical reasoning knowledge and skills. The first goal was to better understand residents' baseline perceptions and knowledge of clinical reasoning. A needs assessment was conducted via a voluntary, anonymous, electronic survey emailed to all combined and categorical pediatrics residents. The results suggested an unmet need with regard to clinical reasoning education for pediatrics trainees.

Two one-hour lectures, modeled on the ACP's Teaching Clinical Reasoning text, were therefore given to residents during scheduled noon conference timeslots. The first lecture defined clinical reasoning, discussed the impact of diagnostic error, and modeled a framework for labeling and accessing diagnostic reasoning techniques. The second lecture took place one week later and consisted of five interactive cases that allowed the group to practice clinical reasoning in different settings (classroom/case conference, bedside, teacher, team leader, remediating a learner). Seven residents attended both sessions and participated in this pilot study, completing both a pre- and post-survey. Though the study was not powered for statistical significance, there was a trend toward enhanced perception of the importance of clinical reasoning and improved performance on knowledge questions. This suggests that implementing an expanded, longitudinal clinical reasoning curriculum within the pediatrics residency could enhance residents' clinical reasoning understanding and ability.

Item: A Psychometric Evaluation of Script Concordance Tests for Measuring Clinical Reasoning (2013-06)
Wilson, Adam Benjamin; Pike, Gary R. (Gary Robert), 1952-; Humbert, Aloysius J.; Brokaw, James J.; Seifert, Mark F.

Purpose: Script concordance tests (SCTs) are assessments purported to measure clinical data interpretation. The aims of this research were to (1) test the psychometric properties of SCT items, (2) directly examine the construct validity of SCTs, and (3) explore the concurrent validity of six SCT scoring methods while also considering validity at the item difficulty and item type levels.

Methods: SCT scores from a problem-solving SCT (SCT-PS; n=522) and an emergency medicine SCT (SCT-EM; n=1040) were used to investigate the aims of this research. An item analysis was conducted to optimize the SCT datasets, to categorize items into levels of difficulty and type, and to test for gender biases. A confirmatory factor analysis tested whether SCT scores conformed to a theorized unidimensional factor structure. Exploratory factor analyses examined the effects of six SCT scoring methods on construct validity. The concurrent validity of each scoring method was also tested via a one-way multivariate analysis of variance (MANOVA) and Pearson's product-moment correlations. Repeated-measures analysis of variance (ANOVA) and one-way ANOVA tested the discriminatory power of the SCTs according to item difficulty and type.
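For context, SCT items are typically scored against an expert panel; a common baseline is the aggregate (partial-credit) method, in which each response option earns credit proportional to the number of panelists who chose it, normalized by the modal option's count. The sketch below shows that baseline method only, with hypothetical panel data; the six scoring variants evaluated in the study are not reproduced here.

    import numpy as np

    def aggregate_sct_score(panel_votes, examinee_answers):
        """Aggregate SCT scoring: each chosen option earns credit equal to
        the number of panelists choosing it divided by the modal count."""
        total = 0.0
        for votes, answer in zip(panel_votes, examinee_answers):
            votes = np.asarray(votes, dtype=float)
            total += votes[answer] / votes.max()  # modal answer scores 1.0
        return total

    # Hypothetical 15-member panel votes across a 5-point response scale.
    panel_votes = [
        [0, 2, 3, 8, 2],  # item 1: option index 3 is modal
        [1, 9, 4, 1, 0],  # item 2: option index 1 is modal
    ]
    examinee_answers = [3, 2]  # option indices chosen by the examinee
    print(aggregate_sct_score(panel_votes, examinee_answers))  # 1.0 + 4/9 ≈ 1.44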
Results: Item analysis identified no gender biases. A combination of moderate model-fit indices and poor factor loadings from the confirmatory factor analysis suggested that the SCTs under investigation did not conform to a unidimensional factor structure. Exploratory factor analyses of six different scoring methods repeatedly revealed weak factor loadings, and the extracted factors consistently explained only a small portion of the total variance. Results of the concurrent validity study showed that all six scoring methods discriminated between medical training levels, in spite of lower reliability coefficients for 3-point scoring methods. In addition, examinees as MS4s significantly (p<0.001) outperformed their MS2 SCT scores in all difficulty categories. Cross-sectional analysis of SCT-EM data showed significant differences (p<0.001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. When considering item type, diagnostic and therapeutic items differentiated between all three training levels, while investigational items could not readily distinguish between MS4s and EM residents.

Conclusions: The results of this research contest the assertion that SCTs measure a single common construct. These findings raise questions about the latent constructs measured by SCTs and challenge the overall utility of SCT scores. The outcomes of the concurrent validity study provide evidence that multiple scoring methods reasonably differentiate between medical training levels. Concurrent validity was also observed when considering item difficulty and item type.
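The unidimensionality question examined above can be sketched with an exploratory factor analysis (illustrative only: synthetic random data stands in for the actual examinee-by-item SCT score matrix, and the study's confirmatory models are not reproduced):

    import numpy as np
    from factor_analyzer import FactorAnalyzer

    # Synthetic stand-in for an examinee-by-item score matrix; the shape
    # (522 examinees, 20 items) is an assumption for illustration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(522, 20))

    # Fit a single-factor solution and inspect loadings and variance explained;
    # uniformly weak loadings and little variance explained would argue against
    # a unidimensional structure, which is the pattern the study reports.
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(X)
    print(fa.loadings_.round(2))
    print(fa.get_factor_variance())  # (SS loadings, proportion, cumulative)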