When it comes to the quantitative basis, publication and citation counts often spring to mind, but they are not the only quantitative methods. The overview of indicators offers various other examples, such as the use of databases and infrastructures, networks of productive collaborations, numbers of users of research results broken down by their nature or background, and numbers of reviews together with the assessments they contain. You could also consider quantitative descriptions of developments in research programmes (on this point, see Robust data and the examples cited there).
Citation counts
Appendix E: Merits and Metrics of the SEP states that citations (i.e., references) to articles, books and other products can be used as an indicator of use by fellow researchers. Citation counts say more about the impact of research than about its quality. The results become more meaningful, however, when citation counts are presented in conjunction with a substantive argument about the quality of the publications and other output.
When it comes to the possibilities and limits of citation and publication counts in the humanities, it is advisable to keep the following in mind:
- Sources for counts: The outcome of counting citations in the academic literature depends on the information source used: are citations in journals alone counted, or citations in other academic sources too? In the case of the humanities, it makes sense to use Google Scholar, because it captures references cited in books and other documents more often than journal-based databases do. However, Google Scholar is not useful for all areas of the humanities.
- Language dependence: Publications in languages other than English are cited less often, because the sources citing them are under-represented in databases such as Scopus and the Web of Science. The same applies to Google Scholar, although to a lesser extent.
- Life cycles and publishing media: Not all publishing channels have the same ‘life cycle’: proceedings and journal articles usually receive most of their citations in the period immediately after publication, for example, whereas the impact of books or websites may show up in citation counts only years later. Citation counts covering only a short period therefore do not always give a good picture of a publication’s impact. With that in mind, one could choose other ways to demonstrate impact, such as the extent to which publications lead to follow-up activities, including lectures, symposia and new projects, also in the public domain (particularly given the hybrid nature of many publications). Publications that serve as (part of) schoolbooks or textbooks in various forms of education occupy a special place in this context.
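To illustrate the life-cycle point, the short sketch below counts citations received within a fixed window after publication. The yearly citation figures are purely hypothetical and serve only to show how a short counting window favours journal articles over books, whose citations tend to arrive later.

```python
# Hypothetical yearly citation tallies: year offset from publication -> citations.
journal_article = {0: 2, 1: 8, 2: 10, 3: 6, 4: 3, 5: 2}
monograph       = {0: 0, 1: 1, 2: 3, 3: 6, 4: 9, 5: 12}

def citations_within(yearly_counts, window_years):
    """Sum the citations received in the first `window_years` after publication."""
    return sum(c for offset, c in yearly_counts.items() if offset < window_years)

for window in (3, 6):
    print(f"{window}-year window: "
          f"article={citations_within(journal_article, window)}, "
          f"book={citations_within(monograph, window)}")
# 3-year window: article=20, book=4
# 6-year window: article=31, book=31
```

With these illustrative figures, a three-year window suggests the article has far more impact, whereas a six-year window puts both publications on a par.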
Alternatives to the JiF and H-index
The SEP does not allow the Journal Impact Factor (JiF) or the H-index to be used as indicators of the importance of academic journals and authors, and this applies to the humanities as well.
Developing alternatives to the H-index is by no means straightforward; the H-index itself depends heavily on the seniority of the researcher concerned and on the specific field in which they work, which makes proper comparisons with others difficult.
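For reference, an author’s H-index is the largest number h such that h of their publications have each been cited at least h times. A minimal sketch of that calculation (with purely illustrative citation counts) shows why the index tends to grow with the length of a career and with the citation density of a field:

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Purely illustrative figures: a senior researcher with many moderately cited
# papers outscores a junior researcher with a few highly cited ones.
senior = [40, 35, 30, 25, 20, 18, 15, 12, 11, 10, 9, 8]
junior = [90, 60, 4, 3]

print(h_index(senior))  # 10
print(h_index(junior))  # 3
```

In fields where books and other long-form publications dominate, citations per item accumulate more slowly, which is one reason the index is so field-dependent.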
The indicators suggested in the SEP do offer alternative ways to substantiate statements about the scientific impact of the unit’s research, starting with the choice of particular publishing channels (journals, publishers) in relation to the unit’s strategy.