A More Comprehensive Index in the Evaluation of Scientific Research: The Single Researcher Impact Factor Proposal

Clinical Practice & Epidemiology in Mental Health 05 Nov 2010 RESEARCH ARTICLE DOI: 10.2174/1745017901006010109


Good alternatives to the Impact Factor (IF) algorithm are needed. The Thomson IF represents a limited measure of the importance of an individual article because 80% of a journal's IF is determined by only 20% of the papers it publishes. In the past few years, several new indexes have been created to provide alternatives to the IF algorithm. These include the removal of self-citations from the calculation of the IF (the Adjusted IF), the Index Copernicus initiative, and other modifications such as the Cited Half-Life IF, Median IF, Disciplinary IF, and Prestige Factor. There is also the Euro-Factor, created in Europe to avoid the strong US-centrality and the English-language basis of the Thomson database. One possible strategy to avoid "IF supremacy" is to create a new index, the Single Researcher Impact Factor (SRIF), that would move the evaluation from the power of scientific journals to the quality of single researchers. This measure can take into account the number and quality of traditional publications and other activities usually associated with being a researcher, such as reviewing manuscripts, writing books, and attending scientific meetings. Also, in funding policy, it might be more useful to consider the merits, contributions, and real impact of all the scientific activities of a single researcher instead of adding up only the journals' IF numbers. The major aim of this paper is to propose and describe the SRIF index, which could represent a novel option for evaluating scientific research and researchers.

Key Words: Impact Factor, Scientific Journal, Single Researcher, Scientific Evaluation.


Today the assessment of scientific productivity and quality across emerging and established scientific areas is a very controversial issue. Measures of quality are used to decide whether a scientist is promoted to principal investigator, obtains a better position in a department, earns a PhD degree, is granted faculty tenure, or is awarded important research funding. Indeed, the evaluation of scientific work has become a serious daily practice in basic and clinical settings [1-5].

Evaluating a researcher's scientific quality is a recognized difficulty that has never received a standard solution. The ideal aim of assessing scientific results is to evaluate the research published in the international literature. Research work should be submitted for critical review by experts in the specific area (peer review) who, according to established rules, state the quantitative and qualitative significance of a given researcher's work. However, so-called peer review is often conducted by scientific committees with generic competence rather than by recognized experts on the specific theme, and the referees may be uninformed [1, 6]. In addition, it may be painful for authors, especially the youngest, because of referee demands [7]. These assessment groups tend to use secondary criteria such as the "raw" count of publications, the prestige of the journals in which authors publish, the reputation of the authors and institutes involved in the research, and the estimated importance and relevance of the research field [1].

In this work we will first comment on essential information and some observations about the use and abuse of the journal Impact Factor (IF), the most widespread and controversial index of international scientific prestige. This introductory debate will enable us to present some current proposals that address the dilemma of scientific quality assessment.


The IF was created by Eugene Garfield and Irving H. Sher (first mentioned in 1955) in response to the international scientific community's demand for a simple method to compare journals [8-10]. This number, the IF, is obtained by the following calculation:

IF (2007) = citations in 2007 (in journals indexed by Thomson Scientific) to articles published in the journal in 2005-2006, divided by the number of "citable" articles (as defined by Thomson Scientific) that the journal published in 2005-2006.
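
As a minimal sketch, the two-year IF calculation above reduces to a simple ratio; the function name and sample numbers below are illustrative, not from the paper:

```python
def journal_impact_factor(citations, citable_items):
    """Two-year journal IF: citations received in year Y (in Thomson-indexed
    journals) to articles the journal published in years Y-1 and Y-2, divided
    by the number of "citable" items it published in those two years."""
    return citations / citable_items

# Illustrative: 300 citations in 2007 to 100 citable articles
# from 2005-2006 give an IF of 3.0.
```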

Citation records are obtained from the Science Citation Index (SCI), an electronic database that records scientific citations from the reference lists of journal articles.

It is important to bear in mind that, according to Garfield's original intentions, the legitimate uses of the IF were to help libraries select journals to purchase, to assist authors in choosing journals for their own work, and to provide an index of the prestige of the journals in which authors have published.

In this regard, Garfield [8-10] stated that the concept of the IF has evolved over time, mistakenly acquiring the double meaning of journal impact and author impact. In fact, in recent years the IF has increasingly been used in Europe as an index of quality and scientific excellence for obtaining research projects and funding, and for recognition and prestige in the best research institutions [4, 11]. However, Garfield stated that the IF is mainly useful for estimating the expected frequency of citations compared with that actually obtained [8], although a strong correlation exists between an elevated IF and journal prestige within a scientific field [12].

Before we extend the analysis of this index, we have to consider that the IF has become important for the evaluation of research activities in many disciplines and has gained growing influence in the scientific community. In this regard, the increasing use and abuse of the IF may determine the destiny of single researchers, departments, and even academic institutions [4, 13-15].


Because the IF is derived from citations to all articles in a journal, this number is not statistically representative of single articles. In fact, the greatest part of citations are usually associated with a small fraction of the total articles published by a journal, so the IF does not seem an appropriate measure of the citations obtained by a "regular" article. Indeed, 80% of a journal's IF is determined by only 20% of the papers published. Therefore, a high IF does not assure high productivity and scientific excellence on the part of a single researcher, and it represents a limited measure of the importance of an individual article [1, 16-20].

Other limitations of using the IF as a qualitative measure of an author's scientific achievements include the exclusion of books from the database as a source of citations, and the database's exclusive English-language bias (we mention in this context the article "Does the impact factor kill the German language?" [21]), as a consequence of which it is dominated by American publications. Moreover, narrow fields of research tend not to have journals with a high IF [1].

In the past few years, several new indexes have been created to provide alternatives to the IF algorithm. These include the removal of self-citations from the calculation of the IF, which improves the index [22]. A more accurate picture of the influence of a journal within its own scientific field, rather than among all journals in general, may be given by a further variant of the impact factor, the Scope-Adjusted Impact Factor [23]. The calculation of this index predominantly takes the thematic area into account and produces a rather different journal ranking compared with the generic, unadjusted original IF. Coelho and colleagues [24] support a rationalized index, notably after a Kruskal-Wallis analysis of the traditional IF demonstrated that it is not possible to compare different disciplines using the IF without adjustments.

Other alternative indexes to the IF, recently reviewed by Dong and colleagues [25], deserve comment: the Cited Half-Life IF (CHAL-IF), introduced by Sombatsompop, Markpin and Premkamolnetr [26]; the Median IF (MIF), proposed by Rousseau [27]; the Disciplinary IF (DIF), proposed by Hirst [28] and also selected by Pudovkin and Garfield [29]; and the Journal Performance Indicator (JPI) (http://scientific.thomson.com/products/jpi/). Among these proposals, the Euro-Factor (EF) [30] is particularly significant, as it tries to overcome the linguistic bias (the exclusive presence of English) and the territorial bias (US-centrality) of the traditional IF system. This index is supported by VICER Publishing (http://www.vicer.org/).

Van Leeuwen and Moed [31] have developed an alternative system for journal impact measurement, named the Journal to Field Impact Score (JFIS). This index includes research articles, technical notes, editorial letters, and reviews in both the numerator and the denominator of the equation. Moreover, the JFIS takes into account mean citations within the specific discipline considered.

Consequently, there has been an increase in comprehensive systems for assessing the impact of scientific productivity that also take into consideration the development of some journals on the web (in some cases exclusively on the web), for instance the External Web Impact Factor (WIF), defined by the number of external pages (not belonging to the site under evaluation) containing a link to the data on the Web [32].

Fava and colleagues [33] have often discussed the topic of IF alternatives and proposed citation analysis as a suitable substitute, affirming that the only reliable way to evaluate a researcher's impact is actual citations, which is possible through the Web of Science from the Institute for Scientific Information (ISI). On this view, the IF is useful for verifying how much researchers have contributed to a journal and how they have developed over the years; however, the index does not express a judgment of scientific significance.

At present, three electronic scientific databases provide a measure of an individual's citation rate: Index Copernicus Scientists [34], Scopus [35], and Thomson Reuters' ResearcherID [36]. Remarkably, Index Copernicus Scientists is the only one that calculates an individualized impact factor [34].

The Index Copernicus Scientists database (http://www.index-copernicus.com/info.php?id=1), besides providing scientists with global networking and international research collaboration, presents a multi-parameter career assessment system that analyses a researcher's individual profile [34]. This goal is accomplished by a uniform scoring system that evaluates scientists' achievements in three areas of professional activity: research potential [R], teaching potential [T], and administrative experience [A]. This database may be useful for decisions on promotion to scientific degrees and for searching for the best candidates to lead a research group.

Scopus (http://www.scopus.com/), although limited to articles published after 1995, is the world's largest abstract and citation database of peer-reviewed literature and quality web sources [35, 37], and its Scopus Citation Tracker provides a simple way to investigate citations in a number of ways (www.info.scopus.com/ctracker). One way is to evaluate real-time citation data for articles and authors. Very recently, Scopus announced that the h-index will also be incorporated (http://info.scopus.com/news/press/pr_140407.asp). It has previously been postulated that the h-index may quantify the scientific output of researchers [38], and a more recent work discusses its power to predict future scientific achievements [39]. Interestingly, the Scopus database was very recently used to distinguish the different productivity and visibility of Cuban neuroscientists [40].
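
The h-index mentioned above can be computed directly from a list of citation counts. The following sketch follows Hirsch's definition [38]; the function name and example values are our own:

```python
def h_index(citation_counts):
    """h-index (Hirsch, 2005): the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    # Walk papers from most- to least-cited; rank is the paper's position.
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# e.g. papers cited [10, 8, 5, 4, 3] times give h = 4
```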

ResearcherID is a website (http://www.researcherid.com/) where researchers can register for a unique researcher ID number to avoid the frequent problem of author misidentification. Researchers build their publication lists using the Web of Science, and citation counts are automatically updated to generate citation metrics. The h-index is also calculated for ResearcherID participants [36].

Remarkably, the need for a personal IF, in which the subject of the calculation is the scientist and not the journal, was discussed very recently. Such a measure would provide a more accurate measure of an individual's citation rate [34].


One possible strategy to avoid "IF supremacy" is to propose a new index, the Single Researcher Impact Factor (SRIF), that would move the evaluation from the power of scientific journals to the quality of single researchers [41, 42]. Compared with the Index Copernicus approach described above [34], the SRIF is similarly focused on the single researcher, but it is dedicated only to the scientific "impact factor" and does not consider the complex (not only scientific) individual profile of the researcher for making decisions and searching for the best candidates to lead a research group.

This measure can take into account the number and quality of traditional publications and other activities usually associated with being a researcher, such as reviewing manuscripts, writing books, and participating in scientific meetings. Additionally, in funding policy, it might be more useful to consider the merits, contributions, and real impact of all the scientific activities of a single researcher, instead of measuring or adding up only the IF numbers.

We take into account some of the suggestions made previously by Laudanna and colleagues [43] in proposing this integrative impact factor system, the SRIF, to better assess scientific productivity. In fact, more than an index, we propose a more comprehensive system of evaluation.

We will consider several kinds of scientific products and scientific activities:

  • Publications (journal articles, books, oral and poster presentations at scientific meetings). These publications may be in press or in electronic-digital format.
  • Products, i.e., elaborated outputs such as software, CD-ROMs, videos, databases, etc.
  • Activities, i.e., all reported scientific activities that do not necessarily correspond to the production of manuscripts or other elaborated products, such as scientific positions or roles in conference organization, participation in journal editorial boards, activities in human resources education, and participation in internationally funded projects.

Currently, these scientific activities balance and strengthen the traditional scientific output (journal publications, books, book chapters) related to the management of modern science [5].

We consider the precise definition of certain scientific criteria that enable us to weigh the significance of the various types of publications, products, and activities and to assign each a determined score. These single scores are summed at the end of the SRIF calculation, if desired (Table 1).

Table 1.

Evaluation System of Scientific Publications and Products

Publication/Product                                       International   National
Books                                                     10              5
Articles in journals included in the SCI                  9               4.5
Chapters in a book                                        8               4
Edition of a book                                         7               3.5
Articles in journals not included in the SCI              1               0.5
Oral presentations in Congresses/Workshops/Conferences    0.5             0.25
Poster presentations in Congresses/Workshops/Conferences  0.2             0.1
Electronic items: databases, CD-ROM, software, video      1               0.5

Note: SCI, Science Citation Index

Table 2.

System for Scientific Activities Evaluation

                                      International   National
Journals
  Editor in Chief                     5               2.5
  Editor                              4               2
  Editorial Board member              3               1.5
  Referee                             2               1
Congresses/Workshops/Conferences
  Scientific Chairperson              5               2.5
  Scientific committee member         3               1.5
  Referee                             1               0.5
Teaching
  Instructor                          2               1
  Assistant                           3               1.5
  Senior                              4               2
Scientific category
  PhD                                 10              5
  MSc                                 3               1.5
Societies, Foundations and Agencies
  Chairperson                         5               2.5
  Board                               3               1.5
  Member                              1               0.5
Awards                                5               2.5

Note: international teaching will be considered if the professor teaches in international courses. International scientific categories (PhD, MSc) will be considered when they were obtained at international institutions.

In the classification of publications, some aspects are important for better differentiating the several types of publications (books, articles, presentations at congresses):

  • Rigor of the review of submitted works: for instance, a journal article usually undergoes a more rigorous selection than an oral presentation at a congress.
  • Degree of originality of the ideas in the work: authoring a book, compared with editing one, generally allows the expression of more original thinking.
  • Quantity of required work: editing a book usually requires more careful organization and presentation of several contributions than a poster at a meeting.

However, flexibility within these simple classification categories is essential. The maximum and minimum values fixed for every type of publication or product tolerate variability in the score according to its impact: for instance, a book authorship can receive a score from a minimum of 5 (national) to a maximum of 10 (international).

This solution assures enough variability in the scores and avoids the risks of reductionism and rigidity inherent in simple classifications based on the importance of publication types, for instance, the "article in journal" or "book" category. Accordingly, a given book chapter may still receive, taking into account its specific value (international/national), a score similar to or higher than a journal article.

Regarding the scores for scientific publications and products, in order to stress the significance of the first author's work, we propose to multiply the score by two when the SRIF of the first author is calculated.
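
As an illustration of how Table 1 and the first-author rule might be applied programmatically, here is a minimal sketch; the dictionary keys and function name are our own hypothetical shorthand, not part of the proposal:

```python
# International base scores from Table 1; per the table, national items
# score half the international value.
PUBLICATION_SCORES = {
    "book": 10,
    "sci_journal_article": 9,
    "book_chapter": 8,
    "book_edition": 7,
    "non_sci_journal_article": 1,
    "oral_presentation": 0.5,
    "poster": 0.2,
    "electronic_item": 1,
}

def publication_score(kind, international=True, first_author=False):
    """Base score for one publication or product: national items score
    half, and the score is doubled when computing the first author's SRIF."""
    score = PUBLICATION_SCORES[kind]
    if not international:
        score /= 2
    if first_author:
        score *= 2
    return score
```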

Here, we describe in detail every category listed in Table 1.

  1. Books and journal articles: traditionally, these are the most sought-after scientific products, so they receive the maximum scores. International books may have slightly higher maximum scores than articles in journals included in a Citation Index if they are published by very prestigious publishing companies. In the evaluation of a specific journal, the IF already takes into account the comparison with journals from the same discipline, the journal's rejection rate, understood as an index of the difficulty of passing the publishing sieve (above 90% for some journals), and the prestige of the journal's editorial board. Remarkably, the calculation of the individual publication score will also take into consideration the number of actual citations through the Web of Science from the ISI: in the numerator, this number is multiplied by the publication score (national/international) and then by the IF the journal held at the time of publication. The denominator is always 5, since the calculation is performed on articles published in the last five years. The calculation is the following:
     Publication score = (international/national score × number of citations × journal IF) / 5
     This calculation considers, besides the international/national journal IF, the actual citation count of individual articles over a 5-year period. In this way, an alternative to the IF is presented, taking into account previous works postulating IF bias with respect to citation analysis [33] and the two-year window of the traditional IF calculation [44]. The SRIF publication score also differs from the very recent proposal by Mark R. Graczynski, named the Personal IF [34], which does not take into account the significance of authorship order and is limited to the previous two years.
  2. Contributions to a book: it is accepted that a book chapter requires less work and analysis than a whole book. In addition, scientific visibility is much greater for book authors.
  3. Presentations at congresses: authorship of a poster presentation at a congress is considered inferior to an oral presentation. Thus, it is reasonable that a meeting poster, which usually requires a smaller commitment, receives lower scores in this category.
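
The publication score formula in point 1 can be sketched as follows; the function name and example figures are illustrative assumptions, not from the paper:

```python
def srif_publication_score(base_score, citations, journal_if, first_author=False):
    """SRIF publication score: (international/national base score from
    Table 1 × actual citations over the last five years × the journal's
    IF at publication time) / 5; doubled for the first author."""
    score = (base_score * citations * journal_if) / 5
    return 2 * score if first_author else score

# Illustrative: an international SCI article (base score 9) cited 12 times,
# in a journal whose IF was 2.5 at publication: (9 * 12 * 2.5) / 5 = 54.0
```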

We also take into consideration some activities that range from the scientific to the technical field, from theory to practice. These activities include work for journals, congresses, panels, scientific management, awards, teaching, and project coordination. A more detailed distinction is also made between a member of a scientific meeting committee and its chairperson, and, in the case of journals, between editors and referees.

A summary of the scores attributed by the authors to these activities is given in Table 2.

Regarding journal work, we propose to take advantage of the IF again. This time we multiply the editor/referee score by the mean IF (the mean of the journal's IF at the beginning and at the end of the work period), and then multiply this number by the number of years in charge. The calculation for work at an individual journal is as follows:

Editors/referees IF = score × mean IF × years of work
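
The calculation above can be sketched as follows; the function and parameter names are our own, and the example figures are illustrative:

```python
def editor_referee_score(role_score, if_at_start, if_at_end, years_of_work):
    """Editorial/refereeing score: role score (Table 2) × mean of the
    journal's IF at the start and end of the appointment × years served."""
    mean_if = (if_at_start + if_at_end) / 2
    return role_score * mean_if * years_of_work

# Illustrative: an editorial board member (score 3) of a journal whose IF
# rose from 2.0 to 3.0 during 4 years of service: 3 * 2.5 * 4 = 30.0
```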

In this way, the peer-reviewing work that is so necessary for the success of modern science [6] would be rewarded. Regarding the importance of peer-reviewer responsibilities, it should be considered that writing and finalizing an article is a very complex process in which reviewers can usually offer a valid and crucial scientific contribution that makes an article ready to be published and appreciated by the scientific community. In order to make peer reviewers more compliant in their fundamental role in the improvement of science, one possible solution is to create a new index that takes into account other activities usually associated with being a researcher, such as reviewing manuscripts. By replacing the journal-centered IF with a single-researcher-centered IF that also includes reviewing activity, the evaluation of individual scientific impact in the community will be more accurate and could motivate researchers, above all young ones, to review without frustration [45-47].

In order to check the validity of this SRIF proposal, our goal is to carry out a pilot study in a university or research institute to compare the usefulness of our SRIF with that of the traditional IF. It is important that our method be challenged statistically against the traditional IF.


In this work we have discussed the merits and defects of the IF. We have also stressed its limitations and the alternatives to this index, continuing the significant debate already emerging in the global scientific community. What are we going to do with the IF? Some authors think it is good to preserve the index until another, better parameter is found [10]. In the meantime, the scientific community should finally elaborate not a substitutive index but an integrative one, or a battery of indexes. In this regard, we believe the SRIF should be carefully considered as an assessment system for scientific productivity. This system, with appropriate changes and integrations, would be able to adapt well to different scientific disciplines.


GC and DL conceived and designed the study. GC and DL contributed equally to this work. LS participated in the acquisition and analysis of data. All authors helped to draft, read and approved the final manuscript.


The authors declare that they have no competing interests.


This article was supported by the TECNOB Project (Technology for Obesity Project) -“Compagnia di San Paolo” Italian private foundation.


Seglen P. Why the impact factor of journals should not be used for evaluating research BMJ 1997; 314(7097): 498-502.
Yewdell JW. How to succeed in science: a concise guide for young biomedical scientists. Part I: taking the plunge Nat Rev Mol Cell Biol 2008; 9(5): 413-6.
Yewdell JW. How to succeed in science: a concise guide for young biomedical scientists. Part II: making discoveries Nat Rev Mol Cell Biol 2008; 9(6): 491-.
Castelnuovo G, Molinari E. La valutazione della produzione scientifica in psicologia clinica: impact factor e prospettive integrative In: Molinari E, Labella A, Eds. Psicologia clinica. Milan: Springer-Link 2007; pp. 203-15.
Guberman J, Shapiro B, Torchia M. Making the right moves: a practical guide to scientific management for postdocs and new faculty Maryland, North Carolina: Howard Hughes Medical Institute and Burroughs Wellcome Fund 2006.
Alberts B, Hanson B, Kelner KL. Reviewing peer review Science 2008; 321(5885): 15.
Raff M, Johnson A, Walter P. Painful publishing Science 2008; 321(5885): 36.
Garfield E. The impact factor and its proper application Unfallchirurg 1998; 101(6): 413-.
Garfield E. Journal impact factor: a brief review CMAJ 1999; 161(8): 979-80.
Garfield E. The history and meaning of the journal impact factor JAMA 2006; 295(1): 90-3.
Rey-Rocha J, Martin-Sempere MJ, Martinez-Frias J, Lopez-Vera F. Some misuses of Journal Impact Factor in research evaluation Cortex 2001; 37(4): 595-7.
Levorato MC, Marchetto E. Il giudizio degli psicologi italiani sulle riviste nazionali e internazionali Giornale Italiano di Psicologia 2003; 30: 15-36.
Figa Talamanca A. The "impact factor" in the evaluation of research Bull Group Int Rech Sci Stomatol Odontol 2002; 44(1): 2-9.
Ramos-Rincon JM, Gutierrez-Rodero F. Evaluation of the impact factor of journals included in the Infectious Diseases category of the Journal Citation Reports (1991-2001) Enferm Infecc Microbiol Clin 2003; 21(7): 388-90.
PLoS medicine Editors. The impact factor game PLoS Med 2006; 3(6): 707.
Vinkler P. Evaluation of some methods for the relative assessment of scientific publications Scientometrics 1986; 10: 157-77.
Maffulli N. More on citation analysis Nature 1995; 378: 760.
Kurmis AP. Understanding the limitations of the journal impact factor J Bone Joint Surg Am 2003; 85-A(12): 2449-54.
Sculier JP. Good and bad uses of the Impact Factor, a bibliometric tool Rev Med Brux 2004; 25(1): 51-4.
Vakil N. The journal impact factor: judging a book by its cover Am J Gastroenterol 2005; 100(11): 2436-7.
Haller U, Hepp H, Reinold E. Does the "impact factor" kill the German language? Gynakol Geburtshilfliche Rundsch 1997; 37: 117-8.
Sevinc A. Manipulating impact factor: an unethical issue or an Editor's choice? Swiss Med Wkly 2004; 134(27-28): 410.
Huth EJ. Authors, editors, policy makers, and the impact factor Croat Med J 2001; 42(1): 14-7.
Coelho PM, Antunes CM, Costa HM, Kroon EG, Sousa Lima MC, Linardi PM. The use and misuse of the "impact factor" as a parameter for evaluation of scientific publication quality: a proposal to rationalize its application Braz J Med Biol Res 2003; 36(12): 1605-2.
Dong P, Loh M, Mondry A. The "impact factor" revisited Biomed Digit Libr 2005; 2: 7.
Sombatsompop N, Markpin T, Premkamolnetr N. A modified method for calculating the Impact Factors of journals in ISI Journal Citation Reports: Polymer Science Category in 1997-2001 Scientometrics 2004; 60: 217-35.
Rousseau R. Median and percentile impact factors: a set of new indicators Scientometrics 2005; 63: 431-1.
Hirst G. Discipline impact factor: a method for determining core journal lists J Am Soc Inform Sci 1978; 29: 171-2.
Pudovkin AI, Garfield E. Rank-normalized Impact Factor: a way to compare journal performance across subject categories. American Society for Information Science and Technology Annual Meeting: Providence Rhode Island 2004.
Hofbauer R, Frass M, Gmeiner B, Kaye AD. The European Factor – The Euro-Factor™: the new European Journal Quality Factor, the new European "scientific currency". Vienna: VICER Publishing 2002.
Van Leeuwen TN, Moed HF. Development and application of journal impact measures in the Dutch science system Scientometrics 2002; 53
Soualmia LF, Darmoni SJ, Le Duff F, Douyere M, Thelwall M. Web impact factor: a bibliometric criterion applied to medical informatics societies' web sites Stud Health Technol Inform 2002; 90: 178-83.
Fava GA, Ottolini F. Impact Factor versus actual citations Psychother Psychosom 2000; 69: 285-6.
Graczynski MR. Personal impact factor: the need for speed Med Sci Monit 2008; 14(10): ED1-2.
Burnham JF. Scopus database: a review Biomed Digit Libr 2006; 3: 1.
Cals JW, Kotz D. Researcher identification: the right needle in the haystack Lancet 2008; 371(9631): 2152-3.
Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses FASEB J 2008; 22(2): 338-42.
Hirsch JE. An index to quantify an individual's scientific research output Proc Natl Acad Sci USA 2005; 102(46): 16569-72.
Hirsch JE. Does the H index have predictive power? Proc Natl Acad Sci USA 2007; 104(49): 19193-8.
Dorta-Contreras AJ, Arencibia-Jorge R, Marti-Lahera Y, AraujoRuiz JA. [Productivity and visibility of Cuban neuroscientists: bibliometric study of the period 2001-2005] Rev Neurol 2008; 47(7): 355-60.
Castelnuovo G. Ditching impact factors: Time for the single researcher impact factor BMJ 2008; 336(7648): 789.
Castelnuovo G. More on impact factors Epidemiology 2008; 19(5): 762-3.
Laudanna A, Miceli M, Welin AC. La valutazione della produzione scientifica: criteri e proposte dell'Istituto di Psicologia del CNR Giornale Italiano Psicologia 2001; 3: 611-32.
D'Odorico L. The citation impact factor in developmental psychology Cortex 2001; 37(4): 578-9.
Castelnuovo G. Include reviewing into the single researcher impact factor. In: E-Letter responses to: letters: Matthew A. Metz. Rewarding Reviewers Science 2008; 319: 1335c.
Metz MA. Rewarding reviewers Science 2008; 319(5868): 1335.
Perrin WF. In search of peer reviewers Science 2008; 319(5859): 32.