Category: metrics

The evaluative inquiry: a new approach to research evaluation

Contemporary research evaluation systems are often criticised for the negative effects they can have on academic environments and even on knowledge production itself. Established in response to many of these criticisms, the evaluative inquiry is a new, less standardised approach to research assessment. Tjitske Holtrop outlines the four principles that give shape to the evaluative inquiry’s method: employing versatile methods; shifting […]

The growing, high-stakes audit culture within the academy has brought about a different kind of publishing crisis

The spate of high-profile cases of fraudulent publications has revealed a widening replication, or outright deception, crisis in the social sciences. To Marc Spooner, researchers “cooking up” findings and the deliberate faking of science are the result of extreme pressures to publish, brought about by an increasingly pervasive audit culture within the academy. By now most readers will have heard […]

Flipping a journal to open access will boost its citation performance – but to what degree varies by publisher, field and rank

Many observers have drawn the logical conclusion that the increased exposure and visibility afforded by open access lead to improved citation performance for open access journals. Yang Li, Chaojiang Wu, Erjia Yan and Kai Li report on research examining the perceived open access advantage, paying particular attention to journals which have “flipped” to open access from a subscription model. Findings […]

Developing approaches to research impact assessment and evaluation: lessons from a Canadian health research funder

Assessing research impact is complex and challenging, but essential for understanding the link between research funding investments and outcomes both within and beyond academia. Julia Langton provides an overview of how a Canadian health research funder approaches impact assessment, urging caution in the use of quantitative data, highlighting the importance of organisation-wide capacity-building, and outlining the value of a community […]

Better, fairer, more meaningful research evaluation – in seven hashtags

Considering the future of research assessment, Elizabeth Gadd outlines how she believes research evaluation could be made better, fairer, and more meaningful. The resulting seven guiding principles, neatly framed as hashtags, range from understanding our responsibilities to researchers as people, through to ensuring our evaluations are a more formative process, offering valuable, constructive feedback. Imperial College recently held an event […]

There is an absence of scientific authority over research assessment as a professional practice, leaving a gap that has been filled by database providers

Research metrics have become more established as a means to assess research performance. This is understandable given research institutions’ and funders’ demand for assessment techniques that are relatively cheap and universally applicable, even though the use of such metrics remains strongly contested within scientific communities. But to what extent does the academic research field of evaluative citation analysis confer legitimacy on […]

How to compare apples with oranges: using interdisciplinary “exchange rates” to evaluate publications across disciplines

Academic research performance is typically assessed on the basis of scientific productivity. While the number of publications may provide an accurate and useful metric of research performance within one discipline, interdisciplinary comparisons of publication counts prove much more problematic. To solve this problem, Timo Korkeamäki, Jukka Sihvonen, and Sami Vähämaa introduce interdisciplinary “exchange rates”, which can be used to convert […]
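The excerpt cuts off before the exchange rates themselves are defined, so the following Python sketch is only a hypothetical illustration of the general idea, not the authors’ method: it assumes an exchange rate is simply the ratio of discipline-average publication counts relative to a chosen reference discipline, and the discipline names and numbers are invented for the example.

```python
# Hypothetical illustration of an interdisciplinary "exchange rate".
# The disciplines and averages below are invented; they are not the
# figures reported by Korkeamäki, Sihvonen, and Vähämaa.

# Assumed average publications per researcher in each discipline.
avg_publications = {
    "finance": 4.0,
    "economics": 6.0,
    "psychology": 12.0,
}

REFERENCE = "finance"  # discipline used as the common currency

def exchange_rate(discipline: str) -> float:
    """Multiplier converting one publication in `discipline`
    into reference-discipline equivalents."""
    return avg_publications[REFERENCE] / avg_publications[discipline]

def convert(count: float, discipline: str) -> float:
    """Express a raw publication count in reference-discipline units."""
    return count * exchange_rate(discipline)

# A psychologist with 10 papers and an economist with 5 papers both
# end up with roughly the same "finance-equivalent" output:
print(convert(10, "psychology"))  # 10 * (4/12) ≈ 3.33
print(convert(5, "economics"))    # 5  * (4/6)  ≈ 3.33
```

Under this simplified assumption, the conversion just rescales counts by relative discipline-level productivity, which is the intuition behind comparing “apples with oranges” across fields.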

Making visible the impact of researchers working in languages other than English: developing the PLOTE index

As outlined in the Leiden Manifesto, if impact is understood in terms of citations to international publications, a bias is created against research which is regionally focused and engaged with local societal problems. This is particularly critical for researchers working in contexts with languages other than English. Peter Dahler-Larsen has developed the PLOTE index, a new indicator which aims to […]

Six principles for assessing scientists for hiring, promotion, and tenure

The negative consequences of relying too heavily on metrics to assess research quality are well known; such reliance can foster practices harmful to scientific research, such as p-hacking, salami science, or selective reporting. The “flourish or perish” culture defined by these metrics in turn drives the system of career advancement in academia, a system that empirical evidence has shown to be problematic […]