There are various ways to measure success. In research, these include citations in the literature, awards, one's position in the academic hierarchy, and other factors. Usually, these are measured individually and then combined to form a bigger picture of the academic standing of a given researcher. In 2005, physicist Jorge Hirsch of the University of California San Diego devised a more formal approach to author-level metrics, the h-index.
The h-index measures both one's output and the citation impact of that output: an author has an h-index of h if h of their publications have each been cited at least h times. The h-index has been shown to correlate with more obvious indicators of success, such as major and international awards in one's field, fellowships, and positions held at higher-level institutions.
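For readers unfamiliar with how the metric is calculated, the following minimal Python sketch derives it from a list of per-paper citation counts (the function name h_index is purely illustrative):

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers from most to least cited, then find the largest rank h
    # such that the paper at rank h has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4,
# because four papers have at least four citations each, but not five with five.
print(h_index([10, 8, 5, 4, 3]))  # -> 4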
The h-index has for many years been seen as a more useful indication of a scholar's "intellectual" ranking, as it were. However, the approach has drawbacks that become more problematic when it is used to evaluate or rank scholars. Writing in the International Journal of Research, Innovation and Commercialisation, Alberto Boretti of the Prince Mohammad Bin Fahd University in Al Khobar, Saudi Arabia, suggests that in the age of citation farms and hyper-authorship the h-index is no longer an "indication of better knowledge or productivity".
Boretti suggests that a new ranking tool is needed to subsume and improve on the h-index. He points out that the h-index itself improved on earlier approaches to academic assessment, but there must now be a way to detect the inappropriate use of automated tools that boost an author's citations, which is wholly fraudulent. The new tools also need to be able to determine, from papers with inordinately long lists of authors, exactly who the main contributors are, which authors played minor or even marginal roles, and perhaps even which authors were included as a matter of courtesy rather than contribution.
Moreover, suggests Boretti, who has held numerous research positions around the world as well as spending many years in industry at a senior level, the new tools also need to reflect the quality of the research, innovation, and commercialisation processes undertaken by an individual. Ultimately, the indexing of researchers is about selecting suitable candidates for the next position and discerning who will be the best fit for a research team, judged on the opportunities their past successes suggest they can offer. A new approach would also help weed out those who attempt to game the system through "author stuffing" or by using citation farms and other such tools to defraud the academic archives.
Boretti, A. (2020) ‘Is the h-index the best criterion to select scientists?’, Int. J. Research, Innovation and Commercialisation, Vol. 3, No. 2, pp.160–167.