Title: Publication counting methods for a national research evaluation exercise
Authors: Korytkowski, Przemysław; Kulczycki, Emanuel
Journal citation: Journal of Informetrics, 13(3), 804–816.
Date of issue: 2019
Date available: 2019-12-09
DOI: https://doi.org/10.1016/j.joi.2019.07.001
URI: http://hdl.handle.net/10593/25233
Type: Article (Artykuł)
Language: eng
Access rights: info:eu-repo/semantics/openAccess
Keywords: Evaluation; Counting method; Whole counting; Fractional counting; Complete counting
Funding: This work was supported by the DIALOG Program (grant "Research into Excellence Patterns in Science and Art") financed by the Ministry of Science and Higher Education in Poland.

Abstract: In this paper, we investigate the effects of using four methods of publication counting (complete, whole, fractional, square root fractional) and of limiting the number of publications (at the researcher and institution levels) on the results of a national research evaluation exercise across fields, using Polish data. We use bibliographic information on 0.58 million publications from the 2013–2016 period. Our analysis reveals that the largest effects occur in fields in which a variety of publication and cooperation patterns can be observed (e.g. Physical sciences or History and archeology). We argue that selecting the publication counting method for national evaluation purposes needs to take into account the current situation in the given country in terms of the excellence of research outcomes, the level of internal, external, and international collaboration, and the publication patterns in the various fields of science. Our findings show that the social sciences and humanities are not significantly influenced by the different publication counting methods or by limiting the number of publications included in the evaluation, as publication patterns in these fields are quite different from those observed in the so-called hard sciences. When discussing the goals of any national research evaluation system, we should be aware that the ways of achieving these goals are closely related to the publication counting method, which can serve as an incentive for certain publication practices.
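
For readers unfamiliar with the counting methods named in the abstract, the sketch below illustrates one common set of institution-level definitions: whole counting credits each distinct institution once, fractional counting assigns a share k/n, and square root fractional a share sqrt(k/n), where k is the institution's author count and n the total author count; "complete" counting is shown here as one credit per contributing author, though its exact meaning varies in the bibliometric literature. These definitions and the input format are assumptions for illustration only and may differ from the operationalization used in the article.

```python
from math import sqrt

def publication_credit(author_institutions, method):
    """Credit each institution receives for a single publication.

    author_institutions : list of institution names, one entry per author
                          (hypothetical input format chosen for illustration)
    method : 'whole' | 'complete' | 'fractional' | 'sqrt_fractional'
    """
    n = len(author_institutions)          # total number of authors
    per_inst = {}                         # number of authors per institution
    for inst in author_institutions:
        per_inst[inst] = per_inst.get(inst, 0) + 1

    if method == "whole":
        # each distinct institution is credited once, regardless of author count
        return {inst: 1.0 for inst in per_inst}
    if method == "complete":
        # here: one credit per contributing author from the institution
        # (the "complete" vs "whole" distinction varies across the literature)
        return {inst: float(k) for inst, k in per_inst.items()}
    if method == "fractional":
        # credit proportional to the institution's share of authors; sums to 1
        return {inst: k / n for inst, k in per_inst.items()}
    if method == "sqrt_fractional":
        # square-root-dampened share of authorship
        return {inst: sqrt(k / n) for inst, k in per_inst.items()}
    raise ValueError(f"unknown counting method: {method}")


# Example: a publication with three authors from University A and one from University B
authors = ["Univ A", "Univ A", "Univ A", "Univ B"]
for m in ("whole", "complete", "fractional", "sqrt_fractional"):
    print(m, publication_credit(authors, m))
```

Aggregating such per-publication credits over a unit's output yields the evaluation scores whose sensitivity to the choice of counting method, and to caps on the number of counted publications, the article analyzes.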