Browsing by Subject "evaluation"
Now showing 1 - 10 of 15
Assessing a Longitudinal Educational Experience for Continuous Quality Improvement (Indiana University School of Medicine Education Day, 2024-04-26) Masseria, Anthony; Birnbaum, Deborah R.

This presentation explores the use of assessment tools to promote adaptability and continuous quality improvement (CQI) in a large educational program. The Scholarly Concentrations Program is a statewide program that complements the core medical school curriculum and empowers students to delve into topics of personal interest. The pilot was launched with a “CQI” mindset, and after three years a robust assessment plan is gathering feedback. While “building the plane as we fly it,” the program has grown from 100 students in its first year to over 400 in its third, making a robust, longitudinal evaluation plan critical. The goal is to use this program as an example that other large educational programs anywhere can replicate.

Concussion-Related Protocols and Preparticipation Assessments Used for Incoming Student-Athletes in National Collegiate Athletic Association Member Institutions (Journal of Athletic Training/NATA, 2015-11) Kerr, Zachary Y.; Snook, Erin M.; Lynall, Robert C.; Dompier, Thomas P.; Sales, Latrice; Parsons, John T.; Hainline, Brian; School of Health and Rehabilitation Sciences

CONTEXT: National Collegiate Athletic Association (NCAA) legislation requires that member institutions have policies to guide the recognition and management of sport-related concussions. Identifying the nature of these policies and the mechanisms of their implementation can help identify areas of needed improvement. OBJECTIVE: To estimate the characteristics and prevalence of concussion-related protocols and preparticipation assessments used for incoming NCAA student-athletes. DESIGN: Cross-sectional study. SETTING: Web-based survey. PATIENTS OR OTHER PARTICIPANTS: Head athletic trainers from all 1113 NCAA member institutions were contacted; 327 (29.4%) completed the survey.
INTERVENTION(S): Participants received an e-mail link to the Web-based survey. Weekly reminders were sent during the 4-week window. MAIN OUTCOME MEASURE(S): Respondents described concussion-related protocols and preparticipation assessments (e.g., concussion history, neurocognitive testing, balance testing, symptom checklists). Descriptive statistics were compared by division and football program status. RESULTS: Most universities provided concussion education to student-athletes (95.4%), had return-to-play policies (96.6%), and obtained the number of previous concussions sustained by incoming student-athletes (97.9%). Fewer had return-to-learn policies (63.3%). Other concussion-history information (e.g., symptoms, hospitalization) was collected more often by Division I universities. Common preparticipation neurocognitive and balance tests were the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT; 77.1%) and the Balance Error Scoring System (46.5%). In total, 43.7% complied with recommendations for preparticipation assessments that included concussion history, neurocognitive testing, balance testing, and symptom checklists; this was driven by moderate use of balance testing (56.6%), while larger proportions used concussion history (99.7%), neurocognitive testing (83.2%), and symptom checklists (91.7%). More Division I universities (55.2%) complied with baseline assessment recommendations than Division II (38.2%; χ2 = 5.49, P = .02) or Division III (36.1%; χ2 = 9.11, P = .002) universities. CONCLUSIONS: National Collegiate Athletic Association member institutions implement numerous strategies to monitor student-athletes. Division II and III universities may need additional assistance to collect in-depth concussion histories and conduct balance testing.
Universities should continue developing or adapting (or both) return-to-learn policies.

Developing Common Metrics for the Clinical and Translational Science Awards (CTSAs): Lessons Learned (Wiley, 2015-10) Rubio, Doris M.; Blank, Arthur E.; Dozier, Ann; Hites, Lisle; Gilliam, Victoria A.; Hunt, Joe; Rainwater, Julie; Trochim, William M.; Indiana CTSI

The National Institutes of Health (NIH) Roadmap for Medical Research initiative, funded by the NIH Common Fund and offered through the Clinical and Translational Science Award (CTSA) program, developed more than 60 unique models for achieving the NIH goal of accelerating discoveries toward better public health. The variety of these models enabled participating academic centers to experiment with different approaches to fit their research environment. A central challenge related to this diversity of approaches is determining the success and contribution of each model. This paper describes the effort by the Evaluation Key Function Committee to develop and test a methodology for identifying a set of common metrics to assess the efficiency of clinical research processes, and to pilot-test the processes for collecting and analyzing those metrics. The project involved more than one-fourth of all CTSAs and produced useful information regarding the challenges in developing common metrics, the complexity and costs of acquiring data for the metrics, and limitations on the utility of the metrics in assessing clinical research performance.
The results of this process led to the identification of lessons learned and recommendations for the development and use of common metrics to evaluate the CTSA effort.

Educating Assessors: Preparing Librarians with Micro and Macro Skills (2016) Applegate, Rachel; Department of Library and Information Science, School of Informatics and Computing

Objective – To examine the fit between libraries’ needs for evaluation skills and library education and professional development opportunities. Many library position descriptions and many areas of library science education focus on professional skills and activities, such as delivering information literacy instruction, designing programs, and managing resources. Only some positions, some parts of positions, and some areas of education specifically address assessment/evaluation skills. The growth of the Library Assessment Conference, the establishment of the ARL-ASSESS listserv, and other evidence indicate that assessment skills are increasingly important. Method – Four bodies of evidence were examined for the prevalence of assessment needs and assessment education: the American Library Association (ALA) core competencies; job ads from large public and academic libraries; professional development courses and sessions offered by ALA divisions and state library associations; and course requirements in ALA-accredited Master of Library Science (MLS) programs. Results – While one-third of job postings made some mention of evaluation responsibilities, less than 10% of conference or continuing education offerings addressed assessment skills. In addition, management is a widespread required topic in MLS programs (78%), while research (58%) and assessment (15%) are required far less often. Conclusions – Overall, there seems to be more need for assessment/evaluation skills than there are structured offerings to educate people in developing those skills.
In addition, roles are changing: some of the most professional-level activities of graduate-degreed librarians involve planning, education, and assessment. MLS students need to understand that these macro skills are essential to leadership, and current librarians need opportunities to add them to their skill sets.

Evaluation and Civil Society (Springer, 2020) Benjamin, Lehn M.; Doan, Dana R. H.

An evaluation of a research experience for teachers in nanotechnology (IEEE, 2017-10) Hess, Justin L.; Chase, Anthony; Minner, Dan; Rizkalla, Maher; Agarwal, Mangilal; Mechanical Engineering, School of Engineering and Technology

This study evaluates the second implementation of the Research Experiences for Teacher Advancement In Nanotechnology (RETAIN) program offered at Indiana University-Purdue University Indianapolis (IUPUI). RETAIN is a professional development model for providing high school teachers with laboratory research experiences in nanoscience and related content areas. In this intensive summer program, teachers spend six weeks conducting nanotechnology-related research in an IUPUI lab. As part of the RETAIN program, teachers complete six credit hours of coursework in which they translate their research experiences into the design of classroom modules. Teachers are then expected to implement their modules in their own classrooms during the subsequent academic year. This evaluation focuses on teachers' experiences in IUPUI labs during the summer of 2016, along with three teachers' implementation of nanotechnology labs in their courses during the 2016-2017 school year. To evaluate RETAIN, we examined teacher satisfaction, changes in teachers' content knowledge and nanotechnology perceptions, and changes in teachers' epistemological beliefs. Further, we explored the impact of the three teachers' module integration on their students' STEM attitudes and nanotechnology perceptions.
The findings indicated that teachers were generally satisfied with the research and course experiences. Further, as a result of RETAIN participation, teachers showed increased nanotechnology content knowledge and knowledge of nanotechnology-related careers. Lastly, the three teachers' integration of nanotechnology modules indicated that their students had significantly improved perceptions of nanotechnology's potential, coupled with greater knowledge of nanotechnology-related careers. The paper concludes by considering the quantitative findings in light of teachers' written reflections and author observations of teacher module integration in their classrooms.

Evaluation of Canvas-Based Online Homework for Engineering: American Society for Engineering Education (2017) Jones, Alan; Mechanical Engineering, School of Engineering and Technology

Examining Visiting Student Evaluation Forms (2023-04-28) Rigueiro, Gabriel; Dammann, Erin; Guillaud, Daniel; Packiasabapathy, Senthil; Mitchell, Sally; Yu, Corinna

Background: Each medical school has clinical evaluation forms with competencies that align with its institutional and course learning objectives. The differences between evaluation forms and the items being assessed present a challenge for elective course directors who must evaluate and complete forms for visiting students. The aim of this project was to compare common characteristics of visiting student evaluation forms presented to an elective course director for the Anesthesiology & Perioperative Medicine (APM) elective in 2022-2023. Results: Schools (n=33) included ACGME competencies for communication (94%, 31), professionalism (91%, 30), medical knowledge (79%, 26), practice-based improvement (79%, 26), patient care (76%, 25), and systems-based practice (61%, 20) in their evaluation forms. Clinical reasoning skills included history & physical (82%, 27), assessment & plan (79%, 26), differential diagnosis (64%, 21), and charting/note-taking (61%, 20). Additional categories included inter-professionalism (85%, 28), osteopathic principles and practices (64%, 21), self-awareness/receptiveness to feedback (48%, 16), and procedural skills (42%, 14). Formative and summative comments were requested by 94% (31) of schools. Discussion: While many competencies on visiting medical student evaluation forms align with IU School of Medicine evaluations, some subcategories of ACGME core competencies, such as charting/note-taking, are not assessed in the APM elective. Visiting students do not obtain electronic medical record access because of time-prohibitive training requirements and thus do not chart during their rotation. Mock paper records for the preanesthetic evaluation history and physical, the intraoperative anesthesia record, and postoperative notes and orders could be created as additional assignments to assess students in this skill. Formative/summative comments may or may not address the delivery of patient care; they frequently discuss teamwork, work ethic, and medical knowledge, which are easily evaluated. The time-pressured environment of the OR can limit students' opportunity to perform the preoperative anesthetic evaluation. Forming a differential diagnosis during a preoperative history and physical is challenging on the APM elective because patients present to surgery after diagnostic workup. However, differential diagnoses for perioperative symptoms such as tachycardia and hypertension could be assessed through Canvas case log discussions. Students currently share an abbreviated written patient presentation with a learning point; they could also include perioperative differential diagnoses and treatment plans and share an article from the literature to demonstrate evidence-based learning, with more specific questions about systems-based practice. The perioperative environment provides an excellent opportunity to evaluate students' interprofessional and communication skills in working with surgeons, nurses, technicians, assistants, and other learners. Additional questions could be included in the APM evaluation to capture these relationships more fully. Conclusion: Analyzing visiting student evaluations for competencies and skills provides insight into areas for improvement in the APM elective curriculum and clinical evaluation form.

Exit Interviews: A Decade of Data to Improve Student Learning Experiences (Wiley, 2015-09) Kacius, Carole; Stone, Cynthia; Bigatti, Silvia M.; Department of Health Policy and Management, Richard M. Fairbanks School of Public Health

Industry Advisory Board Assessment and Evaluation (ASEE, 2017-02) McIntyre, Charles; Fox, Patricia; Technology and Leadership Communication, School of Engineering and Technology

Virtually all academic programs in any given discipline have an Industry Advisory Board (IAB) whose purpose is to add value to the academic program. Note that the term “IAB” is generic and refers to any Industry Advisory Board, Committee, Council, or otherwise named advisory group. An IAB exists to advise, assist, support, and advocate for its associated academic program and that program's constituents.
Much as accreditation requires of an academic program, an IAB must periodically assess and evaluate its own performance; doing so can lead to corrective actions and have a profound impact on both the IAB and the academic program. This paper describes two methods that an IAB can use for assessment and evaluation: IAB Self-Assessment and IAB Benchmarking.