The current dominance of metric-based approaches to assessing academic staff has created perverse incentives. Sybille Hinze explains why, and looks for a better approach.

During a recent discussion with students, a participant asked me what they needed to do to place a publication in a journal with a high impact factor. When I asked in turn about the content of the article, they did not yet have a topic in mind. This is not an isolated case, nor merely a sign of one student’s inexperience. Rather, it is symptomatic of how the shift toward purely metric-based evaluation systems distorts priorities.

How do you assess an academic’s performance?

Performance assessment is ubiquitous in academia, whether it is used to fill positions, decide on promotions, award grants or accept publications. The question of whether these decisions should be based more on qualitative or quantitative approaches is a difficult one. After all, neither approach is perfect.

The list of problems associated with metrics for evaluation is long, but so is the list of problems associated with peer review procedures. Combining the two seems an obvious solution, not least to compensate, at least in part, for the weaknesses of each approach. Yet due to the current dominance of metric-based approaches, particularly publication- and citation-based metrics, perverse incentives have proliferated.

It is no surprise that these metrics have become so widespread over the past decades. Access to them is often touted as child's play: they are available on commercial online platforms – albeit at a price, as access licenses are costly. With just a few clicks, you can generate the relevant indicators, even without the expertise needed to interpret them properly.

Consequently, we need to reconsider our approaches and focus more on content. The evaluation of research quality using qualitative methods must (once again) play a bigger role in the overall system.

When it comes to evaluating not only research but also researchers, it is essential to take a broader spectrum of activities into account. In their various roles, academics typically perform a range of tasks beyond research, such as teaching, supervision and peer review. These broader demands are prominently reflected in the Agreement on Reforming Research Assessment, which I very much welcome and whose implementation I support through my work.

What can institutions do to improve academic career assessment?

However, my daily work also confronts me with the complexity of evaluation processes, with how deeply they are anchored in organisations, and with their effects on the science system as a whole. It is not just a question of whether quantitative or qualitative methods are used, but also one of greater diversity and transparency.

What does this mean in practice? Institutions should, for example, clearly describe the profile of a position and the expectations associated with it. Each academic staff member can then weigh up which performance dimensions to focus on, regardless of whether these are assessed qualitatively or quantitatively.

In this light, the results of a survey conducted as part of the work of the Coalition for Advancing Research Assessment (CoARA) Working Group on Reforming Academic Career Assessment are promising. According to the survey, metrics will continue to be used in evaluation processes, but the narrow focus on publications is losing its dominance. Instead, most institutions are moving towards a greater diversity of indicators and, above all, a mix of quantitative and qualitative approaches.

Making the exception the rule

Incidentally, it was not a fully conscious decision that led me to become intensively involved with science and technology indicators 30 years ago. Rather, it was one of those unpredictable opportunities that arise in life – or, more precisely, in professional life.

My professional career is therefore built not least on chance rather than on clear planning towards a specific professional goal. The choice of my research topics, however, went hand in hand with the institutional environment in which I was active then and am still active now. There, too, performance evaluation naturally played a central role.

Interestingly, in my own career I have never found myself in a situation where metrics were used exclusively to evaluate my performance. Even in the studies I worked on as a researcher, where the focus of evaluation was generally not on individuals but on projects, programmes, or institutions and their units, evaluations were always based on a combination of quantitative data, contextual information and qualitative assessments.

However, if one follows the discussions of recent years, this seems to be the exception rather than the rule, especially when the focus is on the researchers themselves. It is time to change this.

It is encouraging to see how much momentum various reform initiatives have gained in recent years, and I am confident that we can achieve change. This confidence also stems from the intense dialogue within my own scientific community – the community concerned with evaluation procedures – on the interplay between quantitative and qualitative approaches. It is no coincidence that the motto of the Annual International Conference on Science and Technology Indicators in 2025 was 'Reconciliation of research and measurement'.