Browsing by Subject "Assessment"
Now showing 1 - 10 of 63
Item: 2023 Community Engagement Associates Mentor Questionnaire Report (2023-04-24). Hahn, Thomas.
This report provides the results of the end-of-year questionnaire administered to faculty/staff mentors of students participating in the Community Engagement Associates (CEA) Scholarship Program for AY 2022-2023. The CEA program is an employment program in which community-engaged faculty and staff apply for funding to employ students to provide support for courses, programs, or projects that advance the community engagement mission of IUPUI.

Item: 2023 Direct Assessment of University Profiles through Written Reflections of Engaged Learning Experiences Using the AAC&U Written Communication, Integrative Learning, and Civic Engagement VALUE Rubrics (2023-11-01). Hahn, Thomas.
This report describes a direct assessment activity within the IUPUI Institute for Engaged Learning (IEL) for students participating in IEL programs during AY 2022-2023. The IEL Assessment Workgroup assessed written reflection artifacts of 100 students from 6 co-curricular programs. Using selected rows from the Written Communication, Integrative Learning, and Civic Engagement VALUE Rubrics, the raters assessed the Communicator, Problem Solver, and Community Contributor Profiles of Undergraduate Learning.

Item: Accounting education literature review (2022) (Elsevier, 2023-01-11). Apostolou, Barbara; Churyk, Natalie; Hassell, John M.; Matuszewski, Linda.
This review of the accounting education literature includes 109 articles published during 2022 in five accounting education journals: (1) Journal of Accounting Education, (2) Accounting Education, (3) Advances in Accounting Education: Teaching and Curriculum Innovations, (4) Issues in Accounting Education, and (5) The Accounting Educators’ Journal. We update 17 prior accounting education literature reviews by organizing and summarizing contributions to the accounting education literature made during 2022. Articles are categorized into five sections corresponding to traditional knowledge bases: (1) curriculum and instruction, (2) instruction by content area, (3) educational technology, (4) students, and (5) faculty. We summarize and describe the research techniques of the empirical articles, and suggestions for future research are presented. Articles classified as cases and instructional resources published in the same five journals during 2022 are tabulated in appendices categorized by instructional content area.

Item: Application of Different Standard Error Estimates in Reliable Change Methods (Oxford University Press, 2021). Hammers, Dustin B.; Duff, Kevin; Neurology, School of Medicine.
Objective: This study attempted to clarify the applicability of standard error (SE) terms in clinical research when examining the impact of short-term practice effects on cognitive performance via reliable change methodology. Method: This study compared McSweeny's SE of the estimate (SEest) to Crawford and Howell's SE for prediction of the regression (SEpred) using a developmental sample of 167 participants with either normal cognition or mild cognitive impairment (MCI) assessed twice over 1 week (described in "One-week practice effects in older adults: Tools for assessing cognitive change"). Using these SEs, previously published standardized regression-based (SRB) reliable change prediction equations were then applied to an independent sample of 143 participants with MCI. Results: This clinical developmental sample yielded nearly identical SE values (e.g., 3.697 vs. 3.719 for HVLT-R Total Recall SEest and SEpred, respectively), and the resultant SRB-based discrepancy z scores were comparable and strongly correlated (r = 1.0, p < .001). In addition, observed follow-up scores for the sample with MCI were consistently below expectation relative to predictions based on Duff's SRB algorithms. Conclusions: These results appear to replicate and extend previous work showing that calculating the SEest and SEpred from a clinical sample of cognitively intact and MCI participants yields similar values that can be incorporated into SRB reliable change statistics with comparable results. Neuropsychologists using reliable change methods in research (or clinical practice) should therefore balance mathematical accuracy, ease of use, and other factors when deciding which SE metric to use.
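For readers unfamiliar with the two SE terms compared in the item above, the display below sketches their standard textbook forms for a regression of retest score Y on baseline score X in a development sample of size n; this is a sketch of the general statistics, not necessarily the article's exact notation, and real SRB equations often include additional demographic predictors.

```latex
% Sketch of the standard forms (not the article's exact equations).
% SEest: residual SD of the regression; SEpred: widened for prediction
% at a new baseline value x_0 (Crawford & Howell's form).
\[
SE_{\mathrm{est}} = s_Y \sqrt{1 - r_{XY}^{2}}, \qquad
SE_{\mathrm{pred}} = SE_{\mathrm{est}}
  \sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^{2}}{(n-1)\, s_X^{2}}}
\]
\[
z = \frac{Y_{\mathrm{obs}} - \hat{Y}}{SE}, \qquad
\hat{Y} = a + b\,X_{\mathrm{baseline}}
\]
```

Because SEpred grows as the baseline score moves away from the sample mean, the two terms diverge mainly for atypical cases, which is consistent with the near-identical values the study reports.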
Item: Assessing and validating reliable change across ADNI protocols (Taylor & Francis, 2022). Hammers, Dustin B.; Kostadinova, Ralitsa; Unverzagt, Frederick W.; Apostolova, Liana G.; Alzheimer's Disease Neuroimaging Initiative; Neurology, School of Medicine.
Objective: Reliable change methods can aid in determining whether changes in cognitive performance over time are meaningful. The current study sought to develop and cross-validate 12-month standardized regression-based (SRB) equations for the neuropsychological measures commonly administered in the Alzheimer's Disease Neuroimaging Initiative (ADNI) longitudinal study. Method: Prediction algorithms were developed using baseline score, retest interval, the presence/absence of a 6-month evaluation, age, education, sex, and ethnicity in two different samples (n = 192 each) of robustly cognitively intact, community-dwelling older adults from ADNI, matched for demographic and testing factors. The formulae developed in each sample were then applied to the other sample to determine goodness of fit and the appropriateness of combining the samples into a single set of SRB equations. Results: Minimal differences were seen between observed and predicted 12-month scores on most neuropsychological tests from ADNI, and the predicted 12-month scores were highly correlated across samples. As a result, the samples were combined and SRB prediction equations were successfully developed for each of the measures. Conclusions: Cross-validating these SRB prediction equations provides initial support for their use to detect meaningful change in the ADNI sample and a basis for future research evaluating their potential clinical utility. Given the relatively low coefficients of stability observed, some caution is warranted when using these equations to measure true cognitive change over time, particularly in clinical samples; even so, these SRBs reflect an improvement over current practice in ADNI.
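To make the SRB workflow described in the two preceding items concrete, here is a minimal, self-contained Python sketch: fit a regression predicting follow-up score from baseline score and demographics in a cognitively intact development sample, then score a new case. The predictors mirror those listed in the abstract, but all data, coefficients, and variable names are invented for illustration; this is not ADNI's actual algorithm.

```python
# Hypothetical sketch of the SRB approach: develop a prediction equation in
# an intact sample, then flag an independent case whose observed follow-up
# score falls well below its predicted score.
import numpy as np

rng = np.random.default_rng(0)
n = 192                                    # development sample size, as in the abstract
baseline = rng.normal(25, 5, n)            # e.g., a memory test score at baseline
age = rng.normal(74, 6, n)
education = rng.normal(16, 2, n)
followup = 0.8 * baseline + 0.05 * education - 0.02 * age + rng.normal(0, 2, n)

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones(n), baseline, age, education])
beta, *_ = np.linalg.lstsq(X, followup, rcond=None)

# Standard error of the estimate (residual SD) for the reliable-change z score
residuals = followup - X @ beta
se_est = np.sqrt(residuals @ residuals / (n - X.shape[1]))

# Apply to a new case: z below roughly -1.645 would suggest decline beyond
# expectation at the one-tailed .05 level
x_new = np.array([1.0, 22.0, 78.0, 12.0])  # intercept, baseline, age, education
predicted = x_new @ beta
observed = 17.0
z = (observed - predicted) / se_est
print(f"predicted = {predicted:.1f}, z = {z:.2f}")
```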
Item: Assessing Library Subject Guides using Google Analytics (2013-01-28). Durrant, Summer.
Each year librarians invest considerable time and energy in creating and maintaining web-based subject guides. But how can these guides be assessed? This poster discusses how Indiana University-Purdue University Indianapolis (IUPUI) University Library is using Google Analytics to collect and analyze website usage statistics to assess subject guides hosted on Springshare's LibGuides platform.
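As a rough illustration of the kind of analysis such a poster describes, the sketch below rolls page-level view counts up to whole subject guides. The rows, column names, and URL structure are assumptions standing in for a Google Analytics export; a real workflow would read the exported data (e.g., with pd.read_csv) and use GA's own field names.

```python
# Hypothetical sketch: aggregate pageview statistics for LibGuides-style
# subject guides from data exported out of Google Analytics.
import pandas as pd

# Illustrative rows standing in for a pageview export (not a real GA layout)
df = pd.DataFrame({
    "page_path": ["/guides/history/home", "/guides/history/databases",
                  "/guides/nursing/home", "/guides/biology/home"],
    "pageviews": [420, 180, 960, 75],
})

# Roll pageviews up from individual pages to whole subject guides
df["guide"] = df["page_path"].str.split("/").str[2]
usage = df.groupby("guide")["pageviews"].sum().sort_values(ascending=False)
print(usage)  # most-used guides first, to inform maintenance priorities
```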
Item: Assessing the Process of Team-Based Projects to Promote Quality Contribution and Equitable Assessment (Indiana University, 2024-03-08). Zhu, Liugen.
This article explores the development and implementation of a multifaceted evaluation approach aimed at assessing both the product and process aspects of team-based projects. Specifically, it examines the practice of using peer evaluation to evaluate the intricate process dynamics within team-based projects. In support of this methodology, the article references relevant literature and theories to underscore the importance of assessing the process and to advocate for equitable assessment practices. The article includes the peer evaluation form, an informative FAQ page, a group discussion assignment, and a hypothetical example illustrating the grading process.

Item: Assessment in Space Designed for Experimentation: The University of Washington Libraries Research Commons (2014-08-04). Ray, Lauren; Macy, Katharine V.
Since opening in 2010, the University of Washington Libraries Research Commons has used a number of quantitative and qualitative assessment methods to evaluate its space, services, and programs. Because it was designed for constant experimentation and change, Research Commons assessment has been driven by the desire to stay true to user needs, make the case for growth, and test new models of space design, programming, and services. This paper describes assessment activities and projects kept in the spirit of the experimental, agile nature of the space, and how the focus shifted from space assessment to programmatic assessment. In order to respond to changing user needs and push for innovation, the Research Commons has evolved to examine space, services, and programs in an integrated, holistic manner. This has allowed the staff to understand not only what users do within the space and what they prefer, but also how effectively the programming and services offered meet those user needs.

Item: Assessment of Biomedical Science Content Acquisition Performance through PBL Group Interaction (Office of the Vice Chancellor for Research, 2010-04-09). Romito, L.
Objective: To assess the relationship between biomedical science content acquisition performance and PBL group interaction. PBL process activities should enable students to learn and apply biomedical science content to clinical situations and enhance understanding. However, learning and exam preparation may be largely driven by post-case individual study and the publicized learning objectives. Methods: To determine whether students were actually learning SABS content during PBL process activities, we administered, just prior to dissemination of the learning objectives, a quiz assessing content recall and application, as well as a student and facilitator survey to determine each student's role in the group regarding the assessed topic. Results: Year 1 mean scores: content = 84%; application = 61%. Year 2 mean scores: content = 68%; application = 20%. Survey response categories were: C1, those whose group did not research the topic; C2, those who did not personally research the topic but were in a group where it was researched and presented by others; and C3, those who researched the topic and contributed to or were the primary discussants. Year 2: students scoring 100% were in C1 (12.3%), C2 (15.5%), and C3 (15.5%); students scoring 0% were in C1 (30%), C2 (33%), and C3 (22%). Year 1: students scoring 100% were in C1 (50%), C2 (48%), and C3 (55.3%); students scoring 0% were in C1 (11%), C2 (9%), and C3 (2.3%). For Year 2, self-reported role in the group correlated with scores of 50% (r = 0.68) and 0% (r = -0.78). For Year 1, self-reported role in the group correlated with scores of 100% (r = 0.78) and 0% (r = 0.97). Conclusion: Year 1 and 2 students performed better on test items assessing content recall than on those assessing application. Students who reported being more active in PBL group process activities tended to have better assessment performance.
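As an illustration of the kind of correlation reported in the item above (not the study's actual data or coding scheme), one might relate each student's self-reported role, treated ordinally, to quiz performance as in this minimal sketch; the scores, sample, and 1-3 coding are all invented.

```python
# Hypothetical sketch: correlate self-reported PBL role (C1=1, C2=2, C3=3,
# treated as an ordinal scale) with quiz score in percent.
from scipy.stats import pearsonr

role = [1, 1, 2, 2, 2, 3, 3, 3, 3]               # invented role codes
score = [0, 50, 50, 100, 50, 100, 100, 50, 100]  # invented quiz scores

r, p = pearsonr(role, score)
print(f"r = {r:.2f}, p = {p:.3f}")
```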