Publish or Perish and Metrics

The complexity of the evaluation procedures with which higher education lives in the twenty-first century indicates that the science of evaluation and research walk together. As an interdisciplinary endeavor, the evaluation of science is moving toward encompassing references from the information and communication sciences, statistics, computer science, and mathematics in a ritual process of the highest sophistication. What we are seeing are demanding and complex evaluations that become the foundation on which the production of national and international university rankings rests and that, at the same time, contribute to judging the merits of scientists and their research.

Until recently, the merit of a scientist clearly lay in a rigorous, sustained, and intensive search for explanations of phenomena. It was his or her métier to observe, collect data, verify data, and explain the reality of the world and of nature. Nowadays, the merit of a scientist relates to individual measures of his or her publications, the impact of the journals that publish such work, and the citations they receive. Or rather, the perception of merit is highly dependent on productivity indicators and on the assessment metrics used to measure and calculate such markers in the production of scientific results.

Evaluation indicators, some of which we review in Chap. 5, may lead to world rankings of institutions and even rankings of scientists. In 2016, a recognized global agency published a list of the most cited scientists in the world, ranking more than 3,000 top scientists, that is, the most cited in the scientific literature. The publication failed to consider, and the public was not informed, that these top scientists had prepared each of their most cited articles with multiple partners in their networks, sometimes large groups of researchers and students who guaranteed the volume of citations. Each highly cited scientist had his or her publication replicated at its original source by the number of colleagues who participated in his or her research networks and contributed to the studies. The collaborators were not always distinguished, only the leader; often collaborators receive no distinction at all. Indeed, Dorogovtsev and Mendes (2015, p. 1), Dorogovtsev being one of the most cited in this global ranking, said that the issue concerned researchers because the metrics are inaccurate and the search algorithms in the scientific literature should be reviewed, since they rest on an imprecise and simplistic index, the h-index of J.E. Hirsch. Consequently, the Hirsch index is not merely imperfect: it unfairly favors modestly performing scientists and punishes stronger researchers with a large mean number of citations per paper (Dorogovtsev and Mendes, 2015, p. 2). Dorogovtsev and Mendes concluded, and reinforced, that the merit of a researcher lies in making strong science, not in the number of publications and citations.
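
As a point of reference for this critique, the h-index that Dorogovtsev and Mendes question is simple to compute: a researcher has index h if h of his or her papers have at least h citations each. The following is a minimal sketch in Python; the function name and the citation counts are illustrative and not drawn from the ranking discussed above.

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Sort citation counts from highest to lowest.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two illustrative profiles that collapse onto the same score:
# many modestly cited papers vs. a few very highly cited ones.
print(h_index([6, 5, 5, 4, 4, 3, 2, 1]))   # -> 4
print(h_index([120, 80, 60, 50, 3]))        # -> 4
```

The second profile has a far larger mean number of citations per paper, yet the index does not distinguish it from the first, which is precisely the distortion Dorogovtsev and Mendes point to.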

In this way, we find evaluation acting as a pivot, as a driver for excellence that requires some domain knowledge from the academy. It is important to note that evaluation in this twenty-first century reinforces the direction of individual researcher productivity and the productivity of projects and institutions. There are productivity goals, production targets, to be achieved, and concerns about the impact of papers as expressed by the eigenvalue-based scores, such as the Eigenfactor, of the journals in which such work is to be published. The evaluation of impact has no disciplinary boundaries, and it affects everything and everyone in the academic field. Even when evaluation deals with collective production, it covers a summation of individual works that, once appropriated by sophisticated evaluative techniques, will show the world of knowledge its global markets and its exchange value, and not exactly the worth of science.
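
For readers unfamiliar with the eigenvalue-based journal scores mentioned above, the underlying idea can be illustrated with a toy eigenvector calculation over a journal cross-citation matrix. This is a sketch of the general principle only, not the official Eigenfactor algorithm, and the matrix values are invented for illustration.

```python
import numpy as np

# Toy cross-citation matrix: entry C[i, j] is the number of times
# journal j cites journal i (illustrative numbers, not real data).
C = np.array([
    [0., 3., 1.],
    [2., 0., 4.],
    [1., 2., 0.],
])

# Column-normalize so each column sums to 1: the citations given by
# each journal are treated as proportions.
P = C / C.sum(axis=0, keepdims=True)

# Power iteration: the leading eigenvector assigns each journal an
# influence score, where citations from influential journals weigh more.
v = np.ones(P.shape[0]) / P.shape[0]
for _ in range(100):
    v = P @ v
    v = v / v.sum()

print(v)  # relative influence scores for the three journals
```

The intuition is that a citation from a journal that is itself heavily cited counts for more than a citation from the periphery, which is why such scores are presented as a refinement over raw citation counts.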

In academia, publish or perish marks the researcher, because he or she knows that this idea directs the evaluations of research agencies and S&T funding bodies (McGrail et al., 2006). Such evaluations acquire economic and market value. Here we consider a special market, the knowledge market itself, a market for researchers who produce knowledge and supply it with merchandise. The most central point of evaluations, beyond their scientific and moral or ethical value, is to mark the production of the competitive and possessive individual. The etymology of the word evaluation leaves no doubt. The Portuguese word originates from the Latin (a +) valere, meaning to have or to give value to something, to validate, or to make valid and dignified. Indeed, the word evaluation (avaliação), used in the Portuguese Manueline Ordinances, was employed from 1521 onward to symbolize judgment and sharing.

What we see in this century, perhaps the result of a misunderstanding of the principles of (super/neo)liberalism, is private and public agencies competing for markets (large publishing houses, supporting foundations, private research funding) with the prerogative of disseminating science. Evaluation then determines what is and is not worthy of publication, in which language it should be published, and where, an issue that we discussed in Chap. 2. This is such a complex game that scientists from developing countries find it difficult to keep playing on the global chessboard of science. Alternatives such as open access publication are emerging. But criticism of open access manages to diminish its importance, the perception of the rigor of the assessments it carries out, and its reach as a valid vehicle for the dissemination of science. That is, given the strength and power of evaluation, we need to know more about evaluation procedures and about what is marking academics' lives in its name.

We will give a brief history of the subject of evaluation; then we will highlight the collaborative and pedagogical processes of evaluating networks and research groups, showing that simple quantitative and qualitative indicators can contribute to the self-evaluation and improvement of the performance of research groups.

 