Publish, Participatory Evaluation, and Metrics

In 2010, the journal Nature surveyed its readers about publication metrics (Abbott et al., 2010). Of the 150 readers in the sample, 50% said they had changed their behavior because of the metrics, fearing being fired or discredited in their departments and research groups. More than 71% said their colleagues “game or cheat the systems” of evaluation at their institutions. One respondent said explicitly that, when reviewing a paper, he or she would be more inclined to accept it if it cited one of his or her own papers, in order to increase an H-factor. The same happens, in other words, in our studies: the researchers we interviewed said that external evaluation changes the researcher’s posture. They also recognized that it induces moral deviations such as those Nature had no trouble revealing. When we presented the condensed history of science evaluation, showing that evaluation has a past, we saw how much it has been a source of symbolic violence and how it carries values, creates stress, and shapes individual behavior. To say that evaluation punishes or confers merit is not enough.
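As a concrete aside, the h-index behind the “H-factor” mentioned above is the largest h such that an author has h papers with at least h citations each. The minimal Python sketch below (ours for illustration; the citation counts are invented) shows how a single citation gained in review can be enough to raise it:

```python
# Minimal sketch: computing an h-index from per-paper citation counts.
# The h-index is the largest h such that the author has h papers
# with at least h citations each.

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([10, 8, 5, 5, 4, 1]))  # 4

# One extra citation on the fifth paper (4 -> 5) lifts the h-index to 5:
print(h_index([10, 8, 5, 5, 5, 1]))  # 5
```

The arithmetic makes the reviewer’s temptation plain: near the threshold, one self-serving citation moves the indicator.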

Evaluation is a force, a soft power that shapes consciousness. Bourdieu and Bernstein were right, and Nature seems untroubled in showing this soft power in action, the strength of evaluation metrics. The direction this soft power takes, however, deserves to be questioned!

Now, a group, a research network, undoubtedly acts in a competitive direction. Quantitative metrics steer that competition and fix researchers’ positions on scales of prestige. A position on such a scale yields profits that are at least symbolic and, in most cases, also financial: travel, grants, and research resources. Recall that these metrics, across different apparatuses, focus on three basic indicators: publications, citations, and journal impact. Yet they cannot be wholly detached from the evaluation and mentoring of students, a quali-quantitative indicator. Herein lies an extraordinary difference in choosing the format of evaluation, because evaluative actions unfold in the sphere of the personal formation of new researchers. A strong, responsible science would be concerned about evaluation and the values it helps to build. No matter the tribe or territory, the field of knowledge in which researchers move, when producing results and findings that can be transformed into common goods or resources for humanity, their research practice and education should rest on ethics, civic education, and social and scientific responsibility; they must be concerned with societal impact. Whether or not this is a visible and accepted task, and whether we like it or not, we cannot ignore the power of evaluation. This is not a new scientific truth; it is a universal historical truth. We teach not only by words and speeches; we teach by deeds, and even by denials and omissions. By making evaluation participatory and transparent, one teaches and forms through acts, deeds, and words.

We are conscious that a scientist’s merit nowadays depends heavily on publications, productivity indicators, and the evaluative metrics used to quantify the results of scientific production. What we may not realize is that this merit also depends on, and is linked to, occupying a place in a real research group, in one or more research and collaboration networks. If science is part of an information system translated into publications, as Spinak said in 1998, very well! But science is much more than the papers scientists produce. It was the past and is the future of the good life of humanity and the planet. By the same token, one does not produce knowledge without constant review of methodologies and routes, findings, antecedents, and consequences. Groups and networks likewise review their actions. In this sense, we suggest that participatory evaluation is an essential adjuvant of behavior and ethics in the scientific training of the new generations, those who will move the world and give continuity to the ethical and collaborative processes we seek to establish.

Undoubtedly, participatory evaluation, the RNPE we propose, proceeds by successive approximations, with levels and intensities decided in dialogue. The evaluation becomes more participatory the lower the weight of the evaluation experts, who usually want to drive the process, so that other actors can take part. As we pointed out before, “the stakeholders and the evaluation managers are confused as actors, and process managers as players such as interest groups” (Leite, 2005, p. 112). In this practice we can identify principles of strong democracy and active citizenship, advancing through successively decentralized management. Different completion times allow discussion and reflection on what to do, how to act, how to monitor the action, and how to decide on the products and results to be obtained and the improvements to be undertaken.

In the world of digital convergence, publish or perish calls for new metrics: it is not enough to count publications; one must also measure the collaborative process that leads to scientific outputs and learning (a minimal sketch of one such measure appears after the quotation below). In the twenty-first century, RNPE brings to the center of the discussion the ethical and collaborative principles that guarantee a future. Being glocal, RNPE can answer what Spinak predicted in 1998:

In the near future we will have our own bibliometric data to carry out the pertinent evaluations of our bibliographic production in the light of our economic and social context, and to measure those results according to the priorities of the science and technology policies that correspond to our regional development. (Spinak, 1998, p. 148, translated from the Spanish)
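What measuring the collaborative process might look like in practice can be suggested with a small sketch. The Python example below is an illustration only, not the RNPE procedure itself: the authors and papers are invented, and the indicators are standard network measures computed with the networkx library.

```python
# Sketch: summarizing a collaboration network from co-authorship records.
# Illustrative only: the authors and papers below are invented, and these
# indicators are standard network measures, not the RNPE method itself.
from itertools import combinations

import networkx as nx

# Each paper is represented by its list of authors.
papers = [
    ["Silva", "Costa", "Pereira"],
    ["Silva", "Costa"],
    ["Pereira", "Oliveira"],
    ["Silva", "Oliveira", "Santos"],
]

G = nx.Graph()
for authors in papers:
    # Co-authors on the same paper share an edge; the weight
    # counts how many times the pair has collaborated.
    for a, b in combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(nx.density(G))                 # how interconnected the network is
print(nx.degree_centrality(G))       # who collaborates most widely
print(nx.betweenness_centrality(G))  # who bridges otherwise separate groups
```

Indicators of this kind, read alongside publication counts, are one way to make the collaborative process itself visible to the participants in an evaluation.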

In summary, we can answer the initial questions: publish or perish is not the question; one certainly needs to publish and disseminate knowledge. What is also required is a participatory evaluation adopting an approach of “analysis of research networks,” such as RNPE. But we need to choose key metrics to understand global, national, and local phenomena of knowledge production. Metrics must be selected wisely; in other words, we must ask which main metrics support those being measured, always bearing in mind that metrics are not the target but a means to improve performance.

 