Category: hefcemetrics

The changing imperative to demonstrate social science impact

In less than a decade the impact agenda has evolved from a controversial idea into an established part of most national research systems. Over the same period, the conceptualisation of research impact in the social sciences and the ability to create and measure research impact through digital communication media have also developed significantly. In this post, Ziyad Marar argues […]

Getting our hands dirty: why academics should design metrics and address the lack of transparency.

Metrics in academia are often an opaque mess, filled with biases and ill-judged assumptions that are used in overly deterministic ways. By getting involved with their design, academics can productively push metrics in a more transparent direction. Chris Elsden, Sebastian Mellor and Rob Comber introduce an example of designing metrics within their own institution. Using the metric of grant income, their tool ResViz shows […]

Evaluating research assessment: Metrics-based analysis exposes implicit bias in REF2014 results.

The recent UK research assessment exercise, REF2014, attempted to be as fair and transparent as possible. However, Alan Dix, a member of the computing sub-panel, reports how a post-hoc analysis of public domain REF data reveals substantial implicit and emergent bias in terms of discipline sub-areas (theoretical vs applied), institutions (Russell Group vs post-1992), and gender. While metrics are generally […]

A call for inclusive indicators that explore research activities in “peripheral” topics and developing countries.

Science and Technology (S&T) systems all over the world are routinely monitored and assessed with indicators that were created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart argue these indicators are often inappropriate in other contexts. They urge S&T analysts to create data and indicators that better reflect research activities and contributions in these “peripheral” spaces. […]

Ancient Cultures of Conceit Reloaded? A comparative look at the rise of metrics in higher education.

When considering the power of metrics and audit culture in higher education, are we at risk of romanticising the past? Have academics ever really worked in an environment free from ‘measurement’? Roger Burrows draws on his own recollection of the 1986 Research Selectivity Exercise (RSE), scholarly work on academic labour and fictional portrayals of academic life, which all demonstrate the substantial expansion of the role of […]

The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, their RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at the opaque metric. By reverse engineering the score, they find that a significant weight is linked to ‘impact points’ – a similar metric to the widely […]

Bringing together bibliometrics research from different disciplines – what can we learn from each other?

Currently, there is little exchange between the different communities interested in the domain of bibliometrics. A recent conference aimed to bridge this gap. Peter Kraker, Katrin Weller, Isabella Peters and Elisabeth Lex report on the multitude of topics and viewpoints covered on the quantitative analysis of scientific research. A key theme was the strong need for more openness and transparency: transparency in research evaluation […]

When are journal metrics useful? A balanced call for the contextualized and transparent use of all publication metrics.

The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Elizabeth Gadd digs further into this reluctance. Although there is growing acceptance that the Journal Impact Factor is subject to significant limitations, DORA feels rather negative in tone: an anti-journal metric tirade. There may be times when a journal metric, sensibly used, is the right […]

We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better.

Rather than expecting people to stop utilizing metrics altogether, we would be better off focusing on making sure the metrics are effective and accurate, argues Brett Buttliere. By looking across a variety of indicators, supporting a centralised, interoperable metrics hub, and utilizing more theory in building metrics, scientists can better understand the diverse facets of research impact and research quality. In […]

Life in the Accelerated Academy: anxiety thrives, demands intensify and metrics hold the tangled web together.

The imagined slowness of university life has given way to a frenetic pace, defined by a perpetual ratcheting up of demands and an entrepreneurial ethos seeking new and quantifiable opportunities. Mark Carrigan explores the toxic elements of this culture and its underlying structural roots. As things get faster, we tend to accept them as they are rather than imagining how they might be. […]