Browsing by Subject "inter-rater reliability"
Now showing 1 - 3 of 3
Item: Inter-rater Reliability of a Clinical Documentation Rubric Within Pharmacotherapy Problem-Based Learning Courses (American Association of Colleges of Pharmacy, 2020-07-01) Villa, Kristin R.; Sprunger, Tracy L.; Walton, Alison M.; Costello, Tracy J.; Isaacs, Alex N.; Medicine, School of Medicine

Objective. To evaluate a clinical documentation rubric for pharmacotherapy problem-based learning (PBL) courses by assessing inter-rater reliability (IRR) among different evaluators.

Methods. A rubric was adapted for grading student pharmacists' clinical documentation in pharmacotherapy PBL courses. Multiple faculty evaluators used the rubric to assess student pharmacists' clinical documentation, and the mean rubric score and standard deviation were calculated. Intraclass correlation coefficients (ICC) were calculated to determine the IRR of the rubric.

Results. Three hundred seventeen clinical documentation submissions were each scored twice by multiple evaluators using the rubric. The mean initial evaluation score was 9.1 (SD=0.9) and the mean second evaluation score was 9.1 (SD=0.9), with no significant difference between the two. The overall ICC was 0.7 across multiple graders, indicating good IRR.

Conclusion. The clinical documentation rubric demonstrated overall good IRR between multiple evaluators when used in pharmacotherapy PBL courses.
The rubric will undergo additional evaluation and continuous quality improvement to ensure that student pharmacists are provided with the formative feedback they need.

Item: Skills on Wheels: Program evaluation and modifications to increase the reliability and validity of the Wheelchair Skills Test (2024) Hadley, Raegan; Chase, Tony; Department of Occupational Therapy, School of Health and Human Sciences; Chase, Tony

The Skills on Wheels pediatric wheelchair training program lacks protocols that support the reliability and validity of assessment administration, and it has shown poor skill retention at one-year follow-up. Together, these issues demonstrate the need for program evaluation and modification to support accurate data collection and scoring from which conclusions can be drawn. This matters because this population faces many barriers, including insufficient wheelchair skills training when receiving a wheelchair, which makes it difficult to navigate the community and interact with others. The Skills on Wheels program aims to address these gaps to ensure equal and fair participation in daily activities, so it is important that the skills taught during programming are retained and have a long-term impact. The purpose of this capstone was to evaluate and improve the overall functioning of, and protocols for, current programming to increase the accuracy of assessment administration and scoring, thereby addressing potential discrepancies in the data from which conclusions about skill retention are drawn. The capstone student developed and implemented in-depth training on the Wheelchair Skills Test and evidence-based skill training interventions. The capstone student also developed and implemented a protocol for Wheelchair Skills Test administration to decrease bias and increase inter-rater reliability through a consistent group of trained individuals blinded to the subjects' skill ability.
Volunteers reported feeling both more prepared and more accurate in their scoring than in years past, and their confidence increased. Additionally, the 2024 scoring results showed a more realistic skill range and greater improvement among participants than in years past, supporting higher accuracy. Skills on Wheels would benefit from continuing to use the protocols and training developed during this capstone to further enhance the reliability and validity of the program and support accurate data findings. This capstone began a program evaluation process to identify areas that affect the skills training participants receive and the scoring of the assessment that determines program and participant outcomes.

Item: A video anchored rating scale leads to high inter-rater reliability of inexperienced and expert raters in the absence of rater training (Elsevier, 2020-02) Patnaik, Ronit; Anton, Nicholas E.; Stefanidis, Dimitrios; Surgery, School of Medicine

Background. Our objective was to assess the impact of incorporating videos in a behaviorally anchored performance rating scale on the inter-rater reliability (IRR) of expert, intermediate, and novice raters.

Methods. The Intra-corporeal Suturing Assessment Tool (ISAT) was modified to include short video clips demonstrating poor, average, and expert performances. Blinded raters used this tool to assess videos of trainees performing suturing on a porcine model. Three attending surgeons, four residents, and four novice raters participated; no rater training was provided. IRR was then compared among rater groups.

Results. IRR using the modified ISAT was high at 0.80 (p < 0.001). Ratings were significantly correlated with trainees' objective suturing scores for all rater groups (experts: R = 0.84; residents: R = 0.81; novices: R = 0.69; p < 0.001).
Conclusions. Incorporating video anchors (to define performance) in the ISAT led to high IRR and enabled novices to achieve rating consistency similar to that of experts.
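Both the pharmacotherapy rubric study and the ISAT study quantify IRR with an intraclass correlation coefficient. As a rough illustration only (this is not the authors' analysis code, and the score matrix below is invented), ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form, can be computed from the ANOVA decomposition of a targets-by-raters score matrix:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) matrix, one score per target per rater.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares.
    ss_rows = k * np.sum((row_means - grand) ** 2)   # between targets
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between raters
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical data: 5 documentation submissions, each scored by 3 raters
# on a 0-10 rubric (not the studies' actual data).
scores = np.array([
    [9, 9, 8],
    [7, 8, 7],
    [10, 9, 10],
    [6, 6, 5],
    [8, 8, 8],
])
print(round(icc2_1(scores), 2))
```

Because this form measures absolute agreement, a rater who is consistently stricter than the others lowers the coefficient even if the rank ordering of targets is identical; that is the appropriate behavior when, as in these studies, the raw rubric scores themselves must be interchangeable across raters.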