
Evaluating Collaboration Networks in Higher Education Research: Drivers of Excellence

Indicators and Rankings Debate

This chapter would be incomplete without a discussion of relevant issues in contemporary science. After all, do rankings measure the quality of an institution? Do the indicators in use reveal the true science produced? Since the 1990s, Nigel Norris has warned about corruption in the use of indicators, arguing that when they become part of an evaluative judgment without being fair and adjusted to the evaluative processes, they can generate corruption within a system. He understood that this corruption would occur through the distortion of the systems' own operation.

Los indicadores de rendimiento tienden a influir en el modo en que un sistema opera y funciona. Cuanto más sean utilizados los indicadores de rendimiento en la toma de decisiones, más tenderá la actividad del sistema a ser corrompida y con mayor probabilidad se distorsionará el proceso social que dichos indicadores pretenden controlar. (Norris, 1997, para. 40)2

This distortion phenomenon occurs because performance indicators relate to the past: both the institution's past and the researchers' past work and conclusions. In technical language, this is an ex-post evaluation. Given that indicators feed higher education policies, control and regulation policies, and the feedback of S&T systems, they can be turned into regulatory performance standards that are already outdated. Such standards carry risks associated with past realities and may contribute to unintended consequences. They can, for instance, intensify researchers' work to the point of exhaustion, illness, or increased absenteeism.

Ello conlleva dos consecuencias probables: primera, puede dirigir hacia una intensificación del trabajo que a su vez provoque consecuencias no previstas como el incremento del nivel de enfermedad o de absentismo; segundo, a menudo lleva al acuerdo tácito de no exceder ciertos niveles de rendimiento en orden a controlar el ambiente de trabajo. (Norris, 1997, para. 47)3

Ultimately, the use of evaluation indicators to know and to control research focuses on controlling the researcher and may even contribute to his or her elimination or exclusion from the working environment. Researchers are, after all, employees of research. They have their own trajectories, also computed in their past, which are used in the current construction of an institution's stage of development, since the labor context must be taken into account in any evaluation. Under the quality-control label, individual performance indicators can hang like a sword of Damocles over the heads of researchers, sensitive human beings like any other.

Indicators are measures that, together with other variables, show the quality of a research program. Often, however, they are ambiguous measures of a research group's production. They may also lead researchers to produce more in purely quantitative terms, to replicate works and publications, to self-plagiarize, and to plagiarize colleagues. Through these distorted practices, this obsession with performance helps vitiate the production of the younger generation, who join research groups where practices lack ethics.

The constant use of quantitative indicators alone may cause an evaluative reductionism: such indicators fail to report on the social ecology of groups, obscure what really happens inside research areas, and may therefore distort the view of a system's quality. If, traditionally, indicators capture information and express it quantitatively, as Jongbloed and Westerheijden (1994) note, they need to be complemented. From this perspective, qualitative information, such as perceptions gathered through interviews or questionnaires, is complementary to the quantitative indicators. Together, they can give a richer account of research practice and its effects (Jongbloed and Westerheijden, 1994).
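The reductionism at stake can be illustrated with a minimal numerical sketch. The data below are entirely hypothetical, and the two measures (mean citations per paper, a simple h-index) stand in for any aggregate quantitative indicators; the point is only that a single number can rank two very different research groups as equal:

```python
# Illustrative sketch with hypothetical data: two research groups with the
# same aggregate indicator can hide very different realities, which is why
# purely quantitative measures invite evaluative reductionism.
from statistics import mean

# Citations per paper for two hypothetical groups.
group_a = [10, 10, 10, 10, 10]  # steady output across all papers
group_b = [46, 1, 1, 1, 1]      # one star paper, little else

# A single quantitative indicator (mean citations) ranks them equally...
assert mean(group_a) == mean(group_b) == 10

# ...while even a second simple indicator, the h-index (the largest h such
# that h papers have at least h citations each), tells a different story.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

print(h_index(group_a))  # 5
print(h_index(group_b))  # 1
```

Even this richer picture says nothing about the groups' internal collaboration or working conditions, which is precisely the kind of information the qualitative sources mentioned above would have to supply.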

Rankings, on the other hand, are controversial and inspire intense debate and criticism. One of the most frequently invoked criticisms is that the evaluation is reduced to a single indicator used to produce lists of institutions, as if they formed a football league. This type of evaluation thus has a perverse effect. Rather than driving process improvement toward better results, it ultimately breeds cynicism, which translates into presenting the most advantageous institutional data in order to obtain a better position in the world university rankings.

Finally, research evaluation is itself a matter of concern. Some researchers are disquieted by the perversity of certain stages of evaluation, which divert its objectives and can, in a way, become an instrument that encourages cynical and unethical behavior. Against

Table 5.8 Leiden manifesto ten principles

Features | Principles
Data integration | Quantitative evaluation should support qualitative, expert assessment
Alignment with missions | Measure performance against the research missions of the institution, group, or researcher
Local relevance | Protect excellence in locally relevant research
Simplicity and transparency | Keep data collection and analytical processes open, transparent, and simple
Stakeholders' verification | Allow those evaluated to verify data and analysis
Contextualization | Account for variation by field in publication and citation practices
Qualitative judgment | Base assessment of individual researchers on a qualitative judgment of their portfolio
Realism | Avoid misplaced concreteness and false precision
Impact of the evaluation | Recognize the systemic effects of assessment and indicators
Improvement of the evaluation | Scrutinize indicators regularly and update them

Source: Based on Hicks et al. (2015)

such concerns arose the so-called Leiden Manifesto (Hicks et al., 2015). This document proposes ten principles as paramount to governing the implementation of a research evaluation procedure: data integration; alignment between evaluation and institutional objectives and mission; the valorization and recognition of locally relevant research; simplicity and transparency; stakeholders' verification of the data and information delivered by the evaluation; contextualization; qualitative judgment; realism (avoiding misplaced concreteness and false precision); attention to the impacts of the evaluation; and continuous improvement of the evaluation (see Table 5.8).

The manifesto mainly highlighted that a set of metrics based on quantitative data is just a tool, one to be scrutinized regularly. This instrument should never be confused with the purpose of the evaluation. In addition, the manifesto emphasized the importance of qualitative information, which should be considered an additional source of relevant input for informed decision-making that seeks to encourage knowledge production (qualitative and quantitative). Again, however, internal processes and collaboration within groups and research networks go unmentioned.

In this chapter, we reviewed the creation of indicators and rankings, how collaboration in science is measured by indicators, and recent developments in altmetrics. We summarized part of the international debate over rankings and higher education evaluation tools, as well as the criticism of indicators.


  • 1. ARWU, HEEACT, Leiden CWTS, SCImago, QS, THE Ranking, and URAP.
  • 2. “Performance indicators tend to influence how a system operates and works. The more performance indicators are used in decision-making, the more the system's activity will tend to be corrupted, and the more likely it is that the social process these indicators seek to control will be distorted.”
  • 3. “This involves two probable consequences: first, it can lead to an intensification of work which in turn causes unintended consequences such as increased levels of sickness or absenteeism; second, it often leads to a tacit agreement not to exceed certain levels of performance in order to control the work environment.”
