25 June 2015

Assessing academic rank and file

The world of academia and its penchant for publishing in journals has led to various ways of assessing the relevance of a journal, of a paper and of that paper's authors, both within a given field and in the wider world of research. Citation indices for journals are common, widely touted by journals and their publishers (but usually only when they make the journal look good), and decried by authors who choose, or are forced, to publish in specialist or lower-ranking publications with smaller readerships.

Assessment of an individual author is thus skewed by the rank of the journals in which they publish. Many good authors hone their skills in a small, esoteric niche and so have no realistic access to the upper echelons of publishing, nor much chance of seeing their work acknowledged repeatedly in the reference sections of papers from colleagues and rivals in that field. Other authors work in huge research teams in fields widely considered very important, and so find themselves published in major journals and ranked more highly as a result.

There have been attempts to break this dichotomy. The h-index (developed by Hirsch in 2005 and now used by Scopus, Web of Science and Google Scholar), for instance, gives a researcher a rank based on the following definition: “A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np – h) papers have no more than h citations each.” However, this and other indices do not necessarily take into account the citation behaviour of researchers in specific fields. They often ignore an author's position in the author list, which is usually an indicator of that individual's depth of involvement in a given paper. Moreover, the h-index can be distorted by self-citation or by the automatic citation-harvesting of online Scholar-type search engines.
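To make the definition concrete, here is a minimal sketch in Python (my own illustration, not taken from Hirsch's or Ancheyta's papers) that computes an h-index from a list of per-paper citation counts:

def h_index(citations):
    # Sort citation counts from most to least cited, then find the largest h
    # such that the h-th most-cited paper has at least h citations.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give h = 4,
# because four of them have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4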

Various attempts have been made to improve on the h-index: the m-quotient, g-index, h-bar index, e-index, AR-index and so on, each with its pros and cons and each adding an extra twist, such as the age of the researcher (m-quotient) or the age of the paper (AR-index); a brief illustration of the first of these follows below.
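As a rough sketch of how one such twist works, the m-quotient divides the h-index by the researcher's academic age, conventionally taken as the number of years since their first publication (exact conventions vary, so treat this as an assumption rather than a canonical definition). It reuses the h_index function sketched above:

def m_quotient(citations, years_since_first_paper):
    # Normalise the h-index by career length, so that early-career
    # researchers are not penalised simply for having had less time
    # to accumulate citations.
    return h_index(citations) / years_since_first_paper

# A researcher with h = 4 after 8 years has an m-quotient of 0.5.
print(m_quotient([10, 8, 5, 4, 3], 8))  # prints 0.5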

Now, Jorge Ancheyta of the Instituto Mexicano del Petróleo in Mexico City has devised the C-index to circumvent many of the limitations of the h-index and its successors. His index applies only to individuals, not journals, and its calculation requires the number of papers an author has published, the number of participants in the preparation of each paper, and the author's position in the list of authors. (Alphabetised author lists, common in computer science, must of course be excluded, as position there reflects nothing but the authors' initials; note also that life-sciences papers often assign the first and last positions according to who was the lead researcher on the project and who the professor or group leader.) The C-index can then be combined mathematically with the h-index to derive a much more robust ranking for a given author.

“The use of yearly C-index provides a suitable manner to evaluate the whole career of a scientist and the real impact of her/his contribution in the research field,” Ancheyta says.


Ancheyta, J. (2015) ‘A correction of h-index to account for the relative importance of authors in manuscripts’, Int. J. Oil, Gas and Coal Technology, Vol. 10, No. 2, pp. 221–232.

