
The making of university rankings – Has anything changed?

With the academic year now in full swing and most university rankings published – the latest being the US News & World Report 2019 Best Global Universities ranking on 30 October – higher education leaders around the world are looking at where they stand in comparison to their peers. This editorial was first published on University World News on 12 November 2018.

Each year some universities lose ground while others gain, and there is always an abundance of commentary about the rankings and their methodologies. But has anything really changed over the years in the way they are compiled?

Earlier this year, the IREG Observatory on Academic Ranking and Excellence published an inventory of international university rankings covering 2014-2017, which bears striking similarities to reports published by the European University Association (EUA) back in 2011 and 2013.

Those reports noted a rise in the number of rankings and their growing diversification, as compilers had started publishing several different kinds of parallel rankings. This is still true today: the majority of the sub-rankings, subject rankings and regional rankings mentioned in the IREG report have been launched since 2013. And there are new ones coming out all the time, including rankings focused on teaching and even on sustainable development.

Elites

Another constant in the world of rankings is the focus on elite universities. The IREG report notes that the ‘Top 1000’ has become standard for the number of higher education institutions included in rankings. This represents little progress since 2013, when EUA noted that most rankings were limited to covering 500-700 institutions and questioned whether increasing the number beyond 1,000 would be feasible while still producing stable results.

Notably, fundamental flaws in ranking methodologies have persisted over time. They are built into the very concept of compiling a ranking – meaning that there is no such thing as a perfect or objective ranking. These flaws include presenting simple-looking figures that are in fact derived from complicated composite formulas.

Moreover, indicators may be absolute or relative, and the compilers' subjective judgement may determine which indicators are weighted more heavily. Finally, it is very difficult for rankings to take an institution's societal context into consideration.

Similarly, biases built into ranking indicators still prevail. Disciplines have different publishing opportunities, and language and regional biases remain in global rankings, even if compilers have attempted to address the latter through regional rankings. Some rankings rely partly on peer review or reputation surveys, which also introduce biases.

And last but not least, a simple review of methodologies shows that rankings continue to judge universities largely, if not solely, on research criteria. Educational and societal missions continue to be ignored. Even where criteria aim to map teaching performance, they are proxies and do not represent teaching quality.

Moreover, information is often missing and doubts prevail about the comparability of data. This will continue to be a major challenge for new attempts to produce rankings specifically focused on teaching or on the societal mission of universities.

Self-regulation

The EUA reports discuss the difficulty of understanding how indicators are compiled. They also point out that the descriptions of methodologies offered by the rankers are often superficial. In this context, the IREG Ranking Audit initiative gave hope that this would change, as it was intended to introduce a system of self-regulation in the sector.

The transparency of methodologies is indeed one of the criteria IREG uses when it audits whether a ranking is done professionally and transparently and whether it observes good practices and responds to a need for relevant information. However, it is noteworthy that so far, according to IREG, only one international ranking – the QS World University Rankings – and three national rankings – Poland’s Perspektywy University Ranking, Russia’s Russian University Ranking and Germany’s CHE University Ranking – have received an ‘IREG Approved’ certificate.

This autumn, as university leaders rush to find their place on the lists of the best and brightest in the sector, it is useful to remember that fundamentally nothing has really changed over the years in terms of ensuring rankings better depict the quality of universities and their activities. There are more rankings and more different types of focus, but rankings are inherently able to tell only a part of a much wider story.


“Expert Voices” is an online platform featuring original commentary and analysis on the higher education and research sector in Europe. It offers EUA experts, members and partners the opportunity to share their expertise and perspectives in an interactive and flexible exchange on key topics in the field.

All views expressed in these articles are those of the authors and do not necessarily reflect those of EUA.

Tia Loukkola

Tia Loukkola was formerly Director of Institutional Development and Deputy Secretary General at the European University Association. She was in charge of a variety of EUA’s activities dealing with improving and monitoring the quality of universities and their educational mission, as well as specific topics including quality assurance, recognition, rankings and learning and teaching.

