What Do We Measure by Evaluating Research Collaboration Networks?
Abstract This chapter starts from the premise that collaborative processes must be diagnosed and monitored. It introduces evaluation tools, some of them very well known, such as rankings and bibliometric measures, as well as the new metrics, the altmetrics. Here, we present answers to some questions: What do we measure when we intend to evaluate universities and higher education systems? Are we evaluating research collaboration networks? In the final analysis, are we not simply measuring researcher productivity? In this chapter, we therefore take a critical view of the way indicators can even become a factor of system corruption. At the same time, we affirm the relevance of ethical principles to guide the selection of indicators and research work.
Keywords Evaluation tools • Indicators • Rankings • Research evaluation
Science is an information production system in the form of publications, according to Spinak (1998). From this perspective, policy management, the academic management of universities and research institutes, and countries’ S&T systems should all necessarily include a definition of the best indicators to fairly and accurately evaluate the production of information derived from research and, at the same time, from the individual researcher’s work. Every indicator would thus contribute to higher education evaluation. In general, indicators evaluate performance within a frame of research results and products.

© The Author(s) 2017
D. Leite, I. Pinho, Evaluating Collaboration Networks in Higher Education Research, DOI 10.1007/978-3-319-45225-8_5
Judging from Spinak’s assertion, the measure of each researcher’s individual productivity is the central component of research evaluation. The literature suggests many, and quite different, indicators for this purpose. In this chapter, we present some of these indicators and focus on the intense discussion about the problems of using indicators to evaluate publication outputs and the individual performance of researchers. Some studies from the Social Sciences and Humanities even show the side effects of so-called regulatory evaluation on the health of human relations in academia. Almerindo Afonso, for example, argues that a virus of academic survival erodes human relationships among peers (Afonso, 2015). This virus would be inoculated by evaluations imposed from the outside onto academic living microsystems. Such evaluations take the form of performance indicators or researcher/teacher performance indicators.
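To make concrete the kind of individual productivity indicator under discussion, consider the h-index, a widely used bibliometric measure: the largest number h such that a researcher has at least h publications with at least h citations each. The sketch below is ours, not the authors’; the function name and the sample citation counts are hypothetical.

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has
    at least h publications with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper still has enough citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's five papers
print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers have >= 4 citations
```

Note how the index collapses an entire publication record into one number, which is precisely the kind of reduction this chapter questions: nothing in the computation reflects how, or with whom, the work was produced.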
Moreover, the use of quantitative indicators may distort information, because they do not capture the dimensions of research work carried out in teams. In our opinion, as shown in the previous chapter, significant learning processes take place within research groups, and these are undervalued. With accurate evaluation, these internal processes would be better perceived as the real engine of research outputs. Usually, it is difficult to gauge and measure these processes, given the difficulty of obtaining consistent data; therefore, what goes on within groups and networks is not evaluated. Thus, for lack of studies on the subject, we may lose the wealth of processes that are full of human, interactive and collaborative relationships.
From the perspective of academic governance, some questions must be answered: What are research performance indicators? What are such performance indicators used for? Which indicators are most commonly used? What are the challenges of implementing a system of indicators? Do these indicators measure the collaborative processes within research networks?
In this chapter, to give clues to possible answers, we provide a literature overview of performance indicators and rankings, together with a critical debate on rankings. We highlight some innovative aspects of ranking-based evaluation, such as the use of collaborative indicators. We argue that the evaluation of academic science, by employing collaborative process indicators, can be a driver of excellence.
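One simple collaborative indicator of the kind mentioned above is the density of a co-authorship network: the share of possible author pairs that actually co-published. The sketch below is a minimal illustration under our own assumptions; the paper lists and author labels are hypothetical, and real collaboration indicators are considerably richer.

```python
from itertools import combinations

def coauthorship_density(papers):
    """Density of the co-authorship network: realized co-author
    pairs divided by all possible pairs among the authors seen."""
    authors = set()
    edges = set()
    for author_list in papers:
        authors.update(author_list)
        # every pair of co-authors on a paper forms one edge
        for pair in combinations(sorted(set(author_list)), 2):
            edges.add(pair)
    n = len(authors)
    possible = n * (n - 1) // 2
    return len(edges) / possible if possible else 0.0

# Hypothetical group: each paper given as its list of authors
papers = [["A", "B"], ["B", "C"], ["A", "B", "C"], ["D", "A"]]
print(round(coauthorship_density(papers), 3))  # → 0.667
```

Unlike a pure output count, such a measure says something about how connected the members of a group are, though it still reveals nothing about the quality of the interactions within those ties.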