Browsing by Subject "research impact metrics"
Now showing 1-5 of 5
Item: Changing the culture of P&T through conversations about research metrics (IUPUI University Library, 2019-02-26) Coates, Heather L.
Since 2012, librarians at the IUPUI University Library have been providing support for faculty use of metrics in dossiers for promotion and tenure. In these consultations about their research, faculty were willing to discuss their values as scholars, the types of work they feel are most important and valuable, and the pressures and expectations of their departments and schools, among other things. The richness of these conversations led us to expand our metrics services beyond the provision of data. We developed a proactive strategy to help faculty take charge of their digital profiles and scholarly dissemination, as well as outreach and training to engage with campus administrators, associate deans for research, and department chairs, with the goal of promoting responsible use of metrics in the promotion and tenure process. This presentation will describe our approach to consultations, training, and advocacy in developing P&T standards and processes that align with institutional and disciplinary values and promote scholar choice in methodology, product, and dissemination.

Item: Evaluating the Impact of Community-Engaged Scholarship: Implications for Promotion and Tenure (2019-02-15) Coates, Heather L.
This invited presentation provides an introduction to key concepts of research evaluation, indicators, and research metrics, including citation metrics and altmetrics. Through various examples, it explores considerations for using metrics responsibly in the evaluation of research outputs and scholars.

Item: Metrics for Evaluating the Impact of Data Sets (MIT Press Direct, 2022-01) Champieux, Robin; Coates, Heather L.
Research is a social activity, involving a complex array of resources, actors, activities, attitudes, and traditions (Sugimoto & Larivière 2018). There are many norms, including the sharing of new work in the form of books and journal articles and the use of citations and acknowledgments to recognize the influence of earlier work, but what it means to produce impactful scholarship is difficult to define and measure. The goals, methods, metrics, and utility of evaluating the impact of data sets are situated within this broader context of scholarly communication and evaluation. An understanding of the dynamic history, current practices, concepts, and critiques of measuring impact for and beyond research data sets can help researchers navigate the scholarly dissemination landscape more strategically and gain agency in how they and their work are evaluated and described. What is research impact? As Roemer and Borchardt (2015) describe, the concept involves two important ideas: the change a work influences and the strength of this effect. These effects can include, but are not limited to, advances in understanding and decision making, policy creation and change, economic development, and societal benefits. For example, rich documentation of an endangered language might lead to and support community and governmental revitalization efforts. However, the linkages between a specific scholarly product and its effects are rarely direct, there are disciplinary differences in how research is communicated and endorsed, and some outcomes take a very long time to manifest (Greenhalgh et al. 2016). This makes the assessment of research impact very labor intensive, even at a small scale, so researchers and decision makers often rely on data and metrics that are regarded as indicative of certain kinds of impact.

Item: Upskilling the promotion and tenure process: Training administrators for responsible use of research impact metrics (IUPUI University Library, 2018-10) Coates, Heather L.; Odell, Jere D.; Pike, Caitlin
School and departmental administrators are tasked with evaluating the research output of their faculty as part of the promotion and tenure review process. At our institution, this evaluation is communicated in a letter describing the dissemination venues for the candidate’s research publications, typically journals. Seen in one light, the letter is an opportunity for the school or departmental administrator to advocate for the candidate. However, the focus on the dissemination venue rather than on the article or product itself wastes an opportunity to describe the value of the candidate’s work in the context of their discipline and institution. Instead of providing rich information about the work, these letters often copy content from the publisher website and provide Journal Impact Factors, when available, without context. To encourage schools and departments to produce stronger letters in the assessment of a candidate’s dissemination venues, we developed a targeted training for Associate Deans for Research and Department Chairs. The opportunity to develop this training resulted from a broader conversation with faculty about journal cuts and other changes in the library’s strategy for providing access to scholarly content. The faculty asked the library to provide training about changes in scholarly publishing, citation metrics, and altmetrics. Given the time constraints of the audience, our training focuses on providing practical guidance for using and understanding new sources of evidence when writing and reading evaluation letters for promotion and tenure. In addition to describing the content and the institutional context for the training sessions, we will discuss the long-term implications of this effort.

Item: Upskilling the promotion and tenure process: Training administrators for responsible use of research impact metrics (2018-10) Coates, Heather L.; Odell, Jere D.; Pike, Caitlin
School and departmental administrators are tasked with evaluating the research output of their faculty as part of the promotion and tenure review process. At our institution, this evaluation is communicated in a letter describing the dissemination venues for the candidate’s research publications, typically journals. Seen in one light, the letter is an opportunity for the school or departmental administrator to advocate for the candidate. However, the focus on the dissemination venue rather than on the article or product itself wastes an opportunity to describe the value of the candidate’s work in the context of their discipline and institution. Instead of providing rich information about the work, these letters often copy content from the publisher website and provide Journal Impact Factors, when available, without context. To encourage schools and departments to produce stronger letters in the assessment of a candidate’s dissemination venues, we developed a targeted training for Associate Deans for Research and Department Chairs.
The opportunity to develop this training resulted from a broader conversation with faculty about journal cuts and other changes in the library’s strategy for providing access to scholarly content. The faculty asked the library to provide training about changes in scholarly publishing, citation metrics, and altmetrics. Given the time constraints of the audience, our training focuses on providing practical guidance for using and understanding new sources of evidence when writing and reading evaluation letters for promotion and tenure. In addition to describing the content and the institutional context for the training sessions, we will discuss the long-term implications of this effort.