Category: bibliometrics

How to make altmetrics useful in societal impact assessments: shifting from citation to interaction approaches

Recent studies have questioned the suitability of altmetrics for use in assessments of societal impact. Ismael Ràfols, Nicolas Robinson-García and Thed N. van Leeuwen propose that, rather than mimicking citation-based approaches to scientific impact evaluation, assessments of societal impact should be aimed at learning rather than auditing, and focused on understanding the engagement approaches that lead to […]

Google Scholar is a serious alternative to Web of Science

Many bibliometricians and university administrators remain wary of Google Scholar citation data, preferring “the gold standard” of Web of Science instead. Anne-Wil Harzing, who developed the Publish or Perish software that uses Google Scholar data, here sets out to challenge some of the misconceptions about this data source and explain why it offers a serious alternative to Web of Science. […]

Mendeley reader counts offer early evidence of the scholarly impact of academic articles

Although citation counts have well-documented limitations as indicators of scholarly impact, they do offer insight into which articles are read and valued. However, one major disadvantage of citation counts is that they are slow to accumulate. Mike Thelwall has examined reader counts from Mendeley, the academic reference manager, and found them to be a useful source of early […]

Twitter can help with scientific dissemination but its influence on citation impact is less clear

Researchers have long been encouraged to use Twitter. But does researchers’ presence on Twitter influence citations to their papers? José Luis Ortega explored to what extent the participation of scholars on Twitter can influence the tweeting of their articles and found that although the relationship between tweets and citations is poor, actively participating on Twitter is a powerful way of […]

Cluster analysis of individual authors shows the diversity of scholarly research both between and within disciplines

Academic disciplines in the social sciences and humanities show considerable variation with regard to their publication patterns. But what of the authors within each of those disciplines? Are their publication patterns as similar as one might reasonably expect or do the same variations exist? Frederik Verleysen discusses the diversity among experienced scholars in Flanders based on their choice of publication […]

Context is everything: Making the case for more nuanced citation impact measures.

Access to ever more publication and citation data offers the potential for more powerful impact measures than traditional bibliometrics. Accounting for more of the context in the relationship between the citing and cited publications could provide more subtle and nuanced impact measurement. Ryan Whalen looks at the different ways that scientific content is related, and how these relationships could be explored […]

The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, its RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at this opaque metric. By reverse engineering the score, they find that a significant weight is linked to ‘impact points’ – a metric similar to the widely […]

When are journal metrics useful? A balanced call for the contextualized and transparent use of all publication metrics.

The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Elizabeth Gadd digs further into this reluctance. Although there is growing acceptance that the Journal Impact Factor is subject to significant limitations, DORA feels rather negative in tone: an anti-journal metric tirade. There may be times when a journal metric, sensibly used, is the right […]

We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better.

Rather than expecting people to stop using metrics altogether, we would be better off focusing on making sure the metrics are effective and accurate, argues Brett Buttliere. By looking across a variety of indicators, supporting a centralised, interoperable metrics hub, and drawing on more theory in building metrics, scientists can better understand the diverse facets of research impact and research quality. In […]

Time to abandon the gold standard? Peer review for the REF falls far short of internationally accepted standards.

The REF2014 results are set to be published next month. Alongside ongoing reviews of research assessment, Derek Sayer points to the many contradictions of the REF. Metrics may have problems, but a process that gives such extraordinary gatekeeping power to individual panel members is far worse. Ultimately, measuring research quality is fraught with difficulty. Perhaps we should instead be asking which features […]