Browsing by Subject "Medical student assessment"
Item: Comparing Student Satisfaction Metrics: Strategic Student Survey versus Traditional Tools (2024-04-26)
Kochhar, Komal; Masseria, Anthony; Walsh, Sarah; Skillman, Brian; Duham, Jennifer; Wallach, Paul

Background: Using multiple methods to assess student satisfaction across the medical curriculum provides a longitudinal view of the student experience and allows for more timely interventions. To this end, we developed the Strategic Student Survey (S3) to assess student satisfaction across all four years; it complements other established assessments such as the AAMC Year Two Questionnaire (Y2Q), End of Clerkship (EOC) evaluations, and the AAMC Graduation Questionnaire (GQ).

Objective: To determine the extent to which the results of our internal survey (the S3) mirror those of the Y2Q, EOC evaluations, and GQ.

Methods: The S3 consists of roughly 50 questions derived from the Liaison Committee on Medical Education's Independent Student Analysis survey instrument and customized with school-specific elements. Beginning in 2018, the S3 was administered annually to all medical students in the first through fourth years (MS1–MS4). S3 results were collated and grouped by class year with the corresponding Y2Q, EOC, and GQ results, and responses to questions common across the four instruments were compared for the last three years (a simple illustration of this cross-instrument comparison follows this entry).

Results: S3 outcomes closely aligned with responses from the other instruments. For instance, the percentage of the Class of 2023 who "strongly agreed" or "agreed" with the statement "I am satisfied with the quality of my medical education" was as follows: S3, 77% of MS2s, 88% of MS3s, and 93% of MS4s; Y2Q, 80% of MS2s; GQ, 91% of MS4s. Similarly, evaluations of eight clerkships showed a consistent pattern of high ratings across the S3, EOC, and GQ. For example, the Classes of 2021, 2022, and 2023 rated the quality of the Internal Medicine clerkship as "Excellent" or "Very Good" as follows: S3 (MS3s), 86%, 87%, and 89%, respectively; EOC (MS3s), 87%, 91%, and 86%; GQ (MS4s), 88%, 87%, and 92%.

Conclusion: These findings indicate that the S3 is a reliable alternative to the Y2Q, EOC, and GQ for gauging student satisfaction; indeed, initial review of the data suggests that the S3 may even be a better predictor of GQ responses than the EOC evaluations. Because the S3 is administered in every class year of medical school, it allows student concerns to be identified and addressed early, which helps maintain compliance with LCME accreditation standards.
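The cross-instrument comparison described in this abstract can be sketched in a few lines of pandas. The sketch below is not the study's analysis code: the column names, the pivot layout, and the benchmark-difference column are assumptions made for illustration, and the only real numbers are the Class of 2023 satisfaction percentages quoted above.

```python
# Minimal sketch (not the study's actual code) of collating agreement
# percentages from the S3 against the external benchmarks (Y2Q, GQ).
import pandas as pd

# Values quoted in the abstract for the Class of 2023; layout is illustrative.
satisfaction = pd.DataFrame(
    [
        {"class_year": 2023, "cohort": "MS2", "instrument": "S3",  "pct_agree": 77},
        {"class_year": 2023, "cohort": "MS3", "instrument": "S3",  "pct_agree": 88},
        {"class_year": 2023, "cohort": "MS4", "instrument": "S3",  "pct_agree": 93},
        {"class_year": 2023, "cohort": "MS2", "instrument": "Y2Q", "pct_agree": 80},
        {"class_year": 2023, "cohort": "MS4", "instrument": "GQ",  "pct_agree": 91},
    ]
)

# Pivot so each cohort's S3 result sits beside the benchmark administered to
# the same cohort (Y2Q for MS2s, GQ for MS4s), then compute the gap.
wide = satisfaction.pivot_table(
    index=["class_year", "cohort"], columns="instrument", values="pct_agree"
)
wide["s3_minus_benchmark"] = wide["S3"] - wide[["Y2Q", "GQ"]].max(axis=1)
print(wide)
```

In this toy layout, the S3 runs within a few percentage points of the corresponding benchmark for each cohort, which is the pattern the abstract reports.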
Item: "Do I really have to complete another evaluation?" Exploring relationships among physicians' evaluative load, evaluative strain, and the quality of clinical clerkship evaluations (2017-06)
Traser, Courtney Jo; Brokaw, James J.

Background: Despite widespread criticism of physician-performed evaluations of medical students' clinical skills, clinical clerkship evaluations (CCEs) remain the foremost means of assessing trainees' clinical prowess. Efforts to improve the quality of feedback students receive have ostensibly increased the assessment demands placed on physician faculty, yet the consequences of this added load remain unknown. Accordingly, this study investigated the extent to which physicians' evaluative responsibilities influenced the quality of CCEs and qualitatively explored physicians' perceptions of these evaluations.

Methods: A questionnaire was delivered to physicians (n = 93) at Indiana University School of Medicine to gauge their perceived evaluative responsibilities. Each participant's evaluation records were obtained and used to calculate the quantity of CCEs completed, the timeliness of CCE submissions, and the quality of the Likert-scale and written feedback in each evaluation. A path analysis estimated the extent to which evaluative responsibilities affected the timeliness of CCE submissions and CCE quality (a minimal illustration follows this entry). Semi-structured interviews with a subset of participants (n = 8) gathered their perceptions of the evaluations and the evaluative process.

Results: The quantity of evaluations assigned did not influence physicians' perceptions of the evaluative task but did directly influence the quality of the Likert-scale items. Moreover, perceptions of the evaluative task directly influenced the timeliness of CCE submissions and indirectly influenced the quality of the closed-ended CCE items. Tardiness in submitting CCEs had a positive effect on the amount of score differentiation in the Likert-scale data. Neither evaluative responsibilities nor the timeliness of CCE submissions influenced the quality of written feedback. Qualitative analysis revealed mixed opinions on the utility of CCEs and highlighted the temporal burden and practical limitations of completing them.

Conclusions: These findings suggest that physicians' perceptions of CCEs are independent of their assigned evaluative quantity, yet influence both the timeliness of evaluation submissions and evaluative quality. Further elucidation of the mechanisms underlying the positive influence of evaluation quantity and timely CCE submissions on CCE quality is needed to fully rationalize these findings and improve the evaluative process. Continued research is needed to pinpoint which factors influence the quality of written feedback.
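To make the path analysis concrete, the sketch below estimates the two structural paths the abstract describes as a pair of ordinary least squares regressions, a common simplification of path analysis; the thesis itself may have used dedicated SEM software and a different specification. The variable names, the data file, and the indirect-effect calculation are illustrative assumptions, not taken from the study.

```python
# Hedged sketch of a simple path model: evaluative quantity and perceived
# strain -> timeliness of CCE submissions -> quality of Likert-scale ratings.
# Assumes pandas and statsmodels; "cce_evaluations.csv" and all variable
# names are hypothetical stand-ins for the study's per-physician data.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("cce_evaluations.csv")  # hypothetical per-physician dataset

# Path (1): timeliness regressed on evaluation quantity and perceived strain.
timeliness_fit = smf.ols(
    "timeliness ~ evaluative_quantity + evaluative_strain", data=data
).fit()

# Path (2): Likert-scale quality regressed on quantity, strain, and timeliness.
quality_fit = smf.ols(
    "likert_quality ~ evaluative_quantity + evaluative_strain + timeliness",
    data=data,
).fit()

print(timeliness_fit.params)
print(quality_fit.params)

# The indirect effect of strain on quality via timeliness is the product of
# the strain -> timeliness and timeliness -> quality coefficients.
indirect_effect = (
    timeliness_fit.params["evaluative_strain"] * quality_fit.params["timeliness"]
)
print("indirect effect of strain on quality:", indirect_effect)
```

Fitting the paths jointly (for example, in a structural equation modeling package) would additionally provide overall model-fit statistics, which this regression-by-regression sketch does not.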