Browsing by Subject "Educational measurement"
Now showing 1 - 7 of 7
Item Anatomy Nights: An international public engagement event increases audience knowledge of brain anatomy (PLOS, 2022-06-09)
Sanders, Katherine A.; Philp, Janet A.C.; Jordan, Crispin Y.; Cale, Andrew S.; Cunningham, Claire L.; Organ, Jason M.; Anatomy, Cell Biology and Physiology, School of Medicine

Anatomy Nights is an international public engagement event created to bring anatomy and anatomists back to public spaces, with the goal of increasing the public's understanding of their own anatomy by comparison with non-human tissues. The event consists of a 30-minute mini-lecture on the anatomy of a specific organ, followed by a dissection of animal tissues to demonstrate the same organ anatomy. Before and after the lecture and dissection, participants complete research surveys designed to assess prior knowledge and knowledge gained as a result of participation in the event, respectively. This study reports the results of Anatomy Nights brain events held at four venues in the UK and USA in 2018 and 2019. Two general questions were asked of the data: 1) Do participant post-event test scores differ from pre-event scores? and 2) Are there differences in participant scores based on location, educational background, and career? We addressed these questions using a combination of generalized linear models (R's glm function; R version 4.1.0 [R Core Team, 2014]) that assumed a binomial distribution and implemented a logit link function, as well as likelihood estimates to compare models. Survey data from 91 participants indicate that scores improve on post-event tests compared to pre-event tests, and these results hold irrespective of location, educational background, and career. In the pre-event tests, participants performed well on naming structures with an English name (frontal lobe and brainstem) and showed signs of improvement on other anatomical names in the post-test. Despite this improvement in knowledge, we found no evidence that participation in Anatomy Nights improved participants' ability to apply this knowledge to neuroanatomical contexts (e.g., stroke).
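As an illustration of the analysis this abstract describes, here is a minimal sketch in R of fitting binomial GLMs with a logit link via glm and comparing nested models by likelihood ratio. The data frame, variable names (phase, location, correct, total), and simulated values are invented for illustration and are not the study's data.

```r
# Hypothetical illustration of the modeling approach described above:
# binomial GLMs with a logit link, compared via likelihood ratio tests.
set.seed(42)
n <- 91
surveys <- data.frame(
  phase    = rep(c("pre", "post"), each = n),             # pre- vs post-event test
  location = sample(c("UK", "USA"), 2 * n, replace = TRUE),
  total    = 10                                           # items per survey (invented)
)
# Simulate higher accuracy on the post-event test
p <- ifelse(surveys$phase == "post", 0.75, 0.55)
surveys$correct <- rbinom(2 * n, size = surveys$total, prob = p)

# Binomial GLM with logit link: proportion correct as the outcome
null_model <- glm(cbind(correct, total - correct) ~ 1,
                  family = binomial(link = "logit"), data = surveys)
full_model <- glm(cbind(correct, total - correct) ~ phase + location,
                  family = binomial(link = "logit"), data = surveys)

# Likelihood-based comparison of the nested models
anova(null_model, full_model, test = "LRT")
```

The anova() call with test = "LRT" performs the kind of likelihood-based model comparison the abstract mentions; the study's actual covariates and survey items would take the place of the simulated ones.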
Item Assessing the Transition of Training in Health Systems Science From Undergraduate to Graduate Medical Education (Accreditation Council for Graduate Medical Education, 2021)
Santen, Sally A.; Hamstra, Stanley J.; Yamazaki, Kenji; Gonzalo, Jed; Lomis, Kim; Allen, Bradley; Lawson, Luan; Holmboe, Eric S.; Triola, Marc; George, Paul; Gorman, Paul N.; Skochelak, Susan; Medicine, School of Medicine

Background: The American Medical Association Accelerating Change in Medical Education (AMA-ACE) consortium proposes that medical schools adopt a new 3-pillar model incorporating health systems science (HSS) alongside the basic and clinical sciences. One of the goals of AMA-ACE was to support HSS curricular innovation to improve residency preparation.

Objective: This study evaluates the effectiveness of HSS curricula by using a large dataset to link medical school graduates to internship Milestones through collaboration with the Accreditation Council for Graduate Medical Education (ACGME).

Methods: ACGME subcompetencies related to the schools' HSS curricula were identified for internal medicine, emergency medicine, family medicine, obstetrics and gynecology (OB/GYN), pediatrics, and surgery. The analysis compared Milestone ratings of ACE school graduates to non-ACE graduates at 6 and 12 months using generalized estimating equation (GEE) models.

Results: At 6 months, both groups demonstrated similar HSS-related levels of Milestone performance on the selected ACGME competencies. At 1 year, ACE graduates in OB/GYN scored minimally higher on 2 systems-based practice (SBP) subcompetencies compared to non-ACE school graduates: SBP01 (1.96 vs 1.82, 95% CI 0.03-0.24) and SBP02 (1.87 vs 1.79, 95% CI 0.01-0.16). In internal medicine, ACE graduates scored minimally higher on 3 HSS-related subcompetencies: SBP01 (2.19 vs 2.05, 95% CI 0.04-0.26), PBLI01 (2.13 vs 2.01, 95% CI 0.01-0.24), and PBLI04 (2.05 vs 1.93, 95% CI 0.03-0.21). For the other specialties examined, there were no significant differences between groups.

Conclusions: Graduates from schools with training in HSS had similar Milestone ratings for most subcompetencies, with very small differences for only 5 subcompetencies across 6 specialties at 1 year, compared to graduates from non-ACE schools. These differences are likely not educationally meaningful.
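The abstract does not say which software the GEE analysis used. Below is a hypothetical sketch of such a comparison in R using the geepack package, with invented variable names (sbp01, ace, months, program) and simulated data; treating the Milestone rating as a continuous outcome is a simplification for illustration.

```r
# Hypothetical sketch of a GEE model comparing Milestone ratings between
# ACE and non-ACE graduates, with clustering by residency program.
# Requires the geepack package; all names and data are invented.
library(geepack)

set.seed(1)
n <- 400
milestones <- data.frame(
  program = factor(sample(1:40, n, replace = TRUE)),  # residency program (cluster)
  ace     = rbinom(n, 1, 0.5),                        # 1 = ACE school graduate
  months  = sample(c(6, 12), n, replace = TRUE)       # time of assessment
)
# Simulate a small ACE effect on one subcompetency rating
milestones$sbp01 <- round(1.5 + 0.1 * milestones$ace +
                          0.05 * (milestones$months / 6) +
                          rnorm(n, sd = 0.4), 1)

# geeglm expects rows grouped by cluster id
milestones <- milestones[order(milestones$program), ]

# GEE with an exchangeable working correlation within programs
fit <- geeglm(sbp01 ~ ace + months, id = program,
              family = gaussian, corstr = "exchangeable",
              data = milestones)
summary(fit)
```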
Item A Critical Disconnect: Residency Selection Factors Lack Correlation With Intern Performance (Accreditation Council for Graduate Medical Education, 2020)
Burkhardt, John C.; Parekh, Kendra P.; Gallahue, Fiona E.; London, Kory S.; Edens, Mary A.; Humbert, A.J.; Pillow, M. Tyson; Santen, Sally A.; Hopson, Laura R.; Emergency Medicine, School of Medicine

Background: Emergency medicine (EM) residency programs want to employ a selection process that will rank the best possible applicants for admission into the specialty.

Objective: We tested whether application data are associated with resident performance using EM milestone assessments. We hypothesized that a weak correlation would exist between some selection factors and milestone outcomes.

Methods: Utilizing data from 5 collaborating residency programs, a secondary analysis was performed on residents trained from 2013 to 2018. Factors in the model were gender, underrepresented-in-medicine status, United States Medical Licensing Examination Step 1 and Step 2 Clinical Knowledge (CK) scores, Alpha Omega Alpha (AOA) membership, clerkship grades (EM, medicine, surgery, pediatrics), advanced degree, Standardized Letter of Evaluation global assessment, rank list position, and controls for year assessed and program. The primary outcomes were the milestone levels achieved in the core competencies. Multivariate linear regression models were fitted for each of the 23 competencies, with comparisons made between each model's results.

Results: For the most part, academic performance in medical school (Step 1, Step 2 CK, grades, AOA) was not associated with residency clinical performance on milestones. Isolated correlations were found for specific milestones (e.g., a higher surgical grade was associated with a higher wound care score), but most factors had no correlation with residency performance.

Conclusions: Our study did not find consistent, meaningful correlations between the most common selection factors and milestones at any point in training. This may indicate that our current selection process cannot consistently identify the medical students who are most likely to be high performers as residents.
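A minimal sketch, in R, of the per-competency regression structure this abstract describes: one linear model fitted for each milestone outcome over the selection factors. The variable names (step1, step2ck, aoa, rank_pos) and the two example outcomes are invented, and only a subset of the factors listed in the Methods is shown.

```r
# Hypothetical sketch of the per-competency regressions described above:
# one linear model per milestone outcome, with selection factors as
# predictors. Variable names and data are invented for illustration.
set.seed(7)
n <- 300
residents <- data.frame(
  step1    = rnorm(n, 230, 15),            # USMLE Step 1 score
  step2ck  = rnorm(n, 240, 15),            # USMLE Step 2 CK score
  aoa      = rbinom(n, 1, 0.2),            # Alpha Omega Alpha membership
  rank_pos = sample(1:100, n, TRUE),       # rank list position
  program  = factor(sample(1:5, n, TRUE))  # program control
)
# Two of the 23 milestone outcomes, simulated with essentially no signal
residents$pc01       <- rnorm(n, 3, 0.5)
residents$wound_care <- rnorm(n, 3, 0.5)

outcomes <- c("pc01", "wound_care")
fits <- lapply(outcomes, function(y) {
  lm(reformulate(c("step1", "step2ck", "aoa", "rank_pos", "program"), y),
     data = residents)
})
names(fits) <- outcomes
lapply(fits, function(f) coef(summary(f)))  # inspect coefficients per outcome
```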
Item Evaluation of an educational program for essential newborn care in resource-limited settings: Essential Care for Every Baby (Springer Nature, 2015-06-24)
Thukral, Anu; Lockyer, Jocelyn; Bucher, Sherri L.; Berkelhamer, Sara; Bose, Carl; Deorari, Ashok; Esamai, Fabian; Faremo, Sonia; Keenan, William J.; McMillan, Douglas; Niermeyer, Susan; Singhal, Nalini; Pediatrics, School of Medicine

Background: Essential Care for Every Baby (ECEB) is an evidence-based educational program designed to increase the cognitive knowledge and develop the skills of health care professionals in essential newborn care in low-resource areas. The course focuses on the immediate care of the newborn after birth and during the first day or until discharge from the health facility. This study assessed the overall design of the course, the ability of facilitators to teach the course, and the knowledge and skills acquired by the learners.

Methods: Testing occurred at 2 global sites. Data from a facilitator evaluation survey, a learner satisfaction survey, a multiple-choice question (MCQ) examination, performance on two objective structured clinical evaluations (OSCEs), and pre- and post-course confidence assessments were analyzed using descriptive statistics. Pre-post course differences were examined. Comments on the evaluation form and post-course group discussions were analyzed to identify potential program improvements.

Results: Using ECEB course material, master trainers taught 12 facilitators in India and 11 in Kenya, who subsequently taught 62 providers of newborn care in India and 64 in Kenya. Facilitators and learners were satisfied with their ability to teach and learn from the program. Confidence (3.5 to 5) and MCQ scores (India: pre 19.4, post 24.8; Kenya: pre 20.8, post 25.0) improved (p < 0.001). Most participants demonstrated satisfactory skills on the OSCEs. Qualitative data suggested the course was effective but also identified areas for improvement: additional time for hands-on practice, including practice in a clinical setting; the addition of video learning aids; and the adaptation of content to conform to locally recommended practices.

Conclusion: The ECEB program was highly acceptable, improved confidence and knowledge, and developed skills. ECEB may improve newborn care in low-resource settings if it is part of an overall implementation plan that addresses local needs and serves to further strengthen health systems.

Item Is ChatGPT 3.5 smarter than Otolaryngology trainees? A comparison study of board style exam questions (Public Library of Science, 2024-09-26)
Patel, Jaimin; Robinson, Peyton; Illing, Elisa; Anthony, Benjamin; Otolaryngology -- Head and Neck Surgery, School of Medicine

Objectives: This study compares the performance of the artificial intelligence (AI) platform Chat Generative Pre-Trained Transformer (ChatGPT) to Otolaryngology trainees on board-style exam questions.

Methods: We administered a set of 30 Otolaryngology board-style questions to medical students (MSs) and Otolaryngology residents (ORs); 31 MSs and 17 ORs completed the questionnaire. The same test was administered to ChatGPT version 3.5 five times. Performance was compared using a one-way ANOVA with Tukey post hoc test, along with a regression analysis to explore the relationship between education level and performance.

Results: The average scores increased each year from MS1 to PGY5. A one-way ANOVA revealed that ChatGPT outperformed trainee years MS1, MS2, and MS3 (p < 0.001, p = 0.003, and p = 0.019, respectively). PGY4 and PGY5 otolaryngology residents outperformed ChatGPT (p = 0.033 and p = 0.002, respectively). For years MS4, PGY1, PGY2, and PGY3, there was no statistically significant difference between trainee scores and ChatGPT (p = 0.104, 0.996, and 1.000, respectively).

Conclusion: ChatGPT can outperform lower-level medical trainees on Otolaryngology board-style exam questions but still lacks the ability to outperform higher-level trainees. These questions primarily test rote memorization of medical facts; in contrast, the art of practicing medicine is predicated on the synthesis of complex presentations of disease and the multilayered application of knowledge of the healing process. Given that upper-level trainees outperform ChatGPT, it is unlikely that ChatGPT, in its current form, will provide significant clinical utility over an Otolaryngologist.
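A hypothetical sketch in R of the comparison this abstract describes: exam scores grouped by training level, with ChatGPT's five runs treated as one group, analyzed by one-way ANOVA followed by Tukey's post hoc test. Group sizes mirror the reported counts (31 MSs, 17 ORs, 5 ChatGPT runs), but the group means and scores are invented.

```r
# Hypothetical sketch: one-way ANOVA with Tukey post hoc test comparing
# 30-question exam scores across training levels and ChatGPT. Data invented.
set.seed(3)
groups <- c("MS1", "MS2", "MS3", "MS4", "PGY1", "PGY2",
            "PGY3", "PGY4", "PGY5", "ChatGPT")
sizes  <- c(8, 8, 8, 7, 4, 4, 3, 3, 3, 5)             # 31 MSs, 17 ORs, 5 runs
means  <- c(12, 14, 16, 18, 19, 20, 21, 24, 26, 18)   # invented group means

scores <- data.frame(
  group = factor(rep(groups, sizes), levels = groups),
  score = unlist(Map(function(m, k) pmin(30, round(rnorm(k, m, 2))),
                     means, sizes))
)

fit <- aov(score ~ group, data = scores)
summary(fit)    # overall one-way ANOVA
TukeyHSD(fit)   # pairwise comparisons, e.g. ChatGPT vs each trainee level
```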
Item Measurement of Nontechnical Skills During Robotic-Assisted Surgery Using Sensor-Based Communication and Proximity Metrics (American Medical Association, 2021-11-01)
Cha, Jackie S.; Athanasiadis, Dimitrios; Anton, Nicholas E.; Stefanidis, Dimitrios; Yu, Denny; Surgery, School of Medicine

This cohort study uses sensor-based communication and proximity metrics to assess surgeon nontechnical skills during robotic-assisted surgery.

Item Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review (Wolters Kluwer, 2016-05)
Carney, Patricia A.; Palmer, Ryan T.; Miller, Marissa Fuqua; Thayer, Erin K.; Estroff, Sue E.; Litzelman, Debra K.; Biagioli, Frances E.; Teal, Cayla R.; Lambros, Ann; Hatt, William J.; Satterfield, Jason M.; Medicine, School of Medicine

Purpose: Behavioral and social science (BSS) competencies are needed to provide quality health care, but psychometrically validated measures to assess these competencies are difficult to find. Moreover, they have not been mapped to existing frameworks, such as those from the Liaison Committee on Medical Education (LCME) and the Accreditation Council for Graduate Medical Education (ACGME). This systematic review aimed to identify and evaluate the quality of assessment tools used to measure BSS competencies.

Method: The authors searched the literature published between January 2002 and March 2014 for articles reporting psychometric or other validity/reliability testing, using OVID, CINAHL, PubMed, ERIC, Research and Development Resource Base, SOCIOFILE, and PsycINFO. They reviewed 5,104 potentially relevant titles and abstracts. To guide their review, they mapped BSS competencies to the existing LCME and ACGME frameworks. The final included articles fell into three categories: instrument development (the highest quality), educational research (the second highest quality), and curriculum evaluation (lower quality).

Results: Of the 114 included articles, 33 (29%) yielded strong evidence supporting tools to assess communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Sixty-two (54%) articles yielded moderate evidence and 19 (17%) weak evidence. Articles mapped to all LCME standards and ACGME core competencies; the most commonly assessed competency was communication skills.

Conclusions: These findings serve as a valuable resource for medical educators and researchers. More rigorous measurement validation and testing and more robust study designs are needed to understand how educational strategies contribute to BSS competency development.