
Concluding Remarks: Evaluation and Collaboration

Abstract In this chapter, an overview of the book is provided. We understand that both evaluation and collaboration are drivers of excellence in knowledge production, provided that they are appropriated by research actors. First, knowledge production processes involve thinking and rethinking what we do, how we do it, and what we get; such is the role played by evaluation. Second, knowledge production processes are enriched and potentiated by the collaboration of multiple researchers, creating networks that integrate diverse backgrounds and abilities. The same holds true for the dissemination of knowledge and the education of researchers. Thus, developing a process for the evaluation of collaboration networks can change how researchers perceive and manage knowledge production, imparting new layers of quality.

Keywords Collaboration • Evaluation • Knowledge Production

The journey proposed in this book begins by looking at the global map of knowledge production. Considering the global context of science, it is possible to observe changes in its landscape, particularly in terms of new patterns of production and the new positions taken by emerging countries, as seen in Chap. 1. Reflecting on the forces behind those changes, we chose to focus on the main drivers: research networks and international collaboration. However, as Chap. 2 highlights, there are limits or barriers to international collaboration that depend on factors that have little to do with the internal logic of science, such as linguistic boundaries and geopolitics. Another barrier is related to time: we know that, since at least the last decade, emerging countries have made long-term investments in S&T education and in the training of human resources. Some impacts, in terms of scientific publication, can already be quantified; however, the real results for these societies' welfare and productive capacities are yet to be seen.

© The Author(s) 2017
D. Leite, I. Pinho, Evaluating Collaboration Networks in Higher Education Research, DOI 10.1007/978-3-319-45225-8_7

In Chap. 3, we focused on the literature on research collaboration networks, because this background is fundamental for a deep understanding of those social spaces where researchers' interactions facilitate the sharing, acquisition, and creation of knowledge. We understand that research collaboration networks are not just productive but also creative agencies. This differentiation comes from the fact that, by connecting researchers with diverse expertise, they not only produce knowledge in a given manner but also create new, emergent, and contingent arrangements, structural tensions, and ruptures for knowledge production and innovation.

Next, in Chap. 4, we captured insights on research collaboration networks from another source: the researchers themselves. We interviewed researchers from different scientific fields in two countries, belonging to networks at different stages of their life cycles. We also considered two scenarios to obtain a richer overview: some of the participants led consolidated research groups at prestigious universities, while others led a new network anchored in a young, geographically isolated university with strategic international connections.

Despite recognizing the importance of collaboration for research performance, we identified a gap in the literature: there is little information or research from the viewpoint of evaluation applied to research network processes. What did we find? Measurement of final research products. So, in Chap. 5, we summarized the tools and metrics traditionally used to measure research products, as well as newly emerging metrics. We emphasized the importance of clarity about what is measured and the need to build any evaluation system upon ethical principles.

In Chap. 6, we proposed the RNPE. This proposal is the result of an integrative effort that draws insights from various sources: our own research background, the literature review, the history of evaluation, and the empirical results from previous and recent research projects we are directing under the CNPq's auspices. This wealth of information was subjected to the knowledge spiral triggered by the knowledge networks in which we are implicated. So, we assume that, in the context of international science, research networks and international collaboration among scientists improve research performance and impact knowledge production. We assume that international collaboration networks may be evaluated so that researchers acquire a degree of self-knowledge about their work, their networks, and their environment. We believe that researchers cannot be autonomous and creative if they do not recognize the way they do science inside their networks, or their refusal to engage in networks. There is only so much we can learn about the impact of our research from the citations our articles receive from colleagues.

Starting from two recognized drivers, networks and international collaboration, we proposed a framework to evaluate and measure networked research activity. The measures consider quantitative and qualitative indicators to build a comprehensive picture of collaborative research work.

Accordingly, we designed a theoretical framework that includes those drivers as central ideas for understanding knowledge production dynamics.

We argue that research evaluation needs to go beyond measuring stocks of research products and researcher productivity. It ought to monitor the flows of knowledge that energize and structure research networks. Our goal is not to gauge the individual merit of a scientist, whether through individual measures of output or through the citations received by published work. Although important, the dependence of evaluative metrics on productivity indicators constitutes a lapse for science. Scientists working in networks, in active processes of international collaboration, are doing much more: they are teaching, learning, and training people; they are transmitting the values and cultures that underpin scientific excellence.

In the micro-level context of a network, researchers have at their disposal a learning environment where they can set up principles of isonomia, isegoria, and isocracy by using a participatory approach (RNPE). They can allow the actors of their networks to evolve and grow together with them, for the benefit of scientific progress. So, we suggest new tools for the evaluative exercise. We suggest considering, in the RNPE, indicators such as motivation, interest, communication, cohesion, scientific cooperation, interaction, incentives, societal impact of research themes, internal coauthorship policies, and coauthorship work share, among others. We also suggest considering the network's composition, the leader's ability to bring collaborators together, the national and international reach of research collaboration, the extent of collaboration within the network, the national and international reach of the network's outputs, the geographic reach of those outputs as reflected in the variety of the journals' locations, the strength of the leader's brokerage role within the network, network connectivity, and the diversity of relations established inside the network.
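Some of these structural indicators, such as network connectivity and the leader's brokerage role, can be derived directly from coauthorship records. The sketch below is only an illustration of that idea, not part of the RNPE proposal itself: it uses hypothetical paper data and standard-library Python to compute two simple proxies, density for connectivity and, for brokerage, the share of the leader's collaborator pairs that are linked only through the leader.

```python
from itertools import combinations

# Hypothetical coauthorship records: each paper is its set of authors.
# (Illustrative data only; not drawn from the book's empirical study.)
papers = [
    {"Leader", "A", "B"},
    {"Leader", "C"},
    {"A", "B"},
    {"Leader", "B", "D"},
]

# Build an undirected coauthorship graph as an adjacency map:
# two researchers are tied if they coauthored at least one paper.
graph = {}
for authors in papers:
    for u, v in combinations(sorted(authors), 2):
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)

n = len(graph)
edges = sum(len(nbrs) for nbrs in graph.values()) // 2

# Connectivity proxy: density = observed ties / possible ties.
density = edges / (n * (n - 1) / 2)

# Brokerage proxy: fraction of the leader's collaborator pairs that
# have no direct tie to each other, i.e., the leader bridges them.
leader = "Leader"
pairs = list(combinations(sorted(graph[leader]), 2))
bridged = sum(1 for u, v in pairs if v not in graph[u])
brokerage = bridged / len(pairs) if pairs else 0.0

print(f"authors={n} ties={edges} density={density:.2f} brokerage={brokerage:.2f}")
```

In this toy network the density is 0.60 and two thirds of the leader's collaborator pairs are connected only through the leader; richer measures (betweenness centrality, component analysis) would refine both proxies, but the principle of reading network indicators out of coauthorship data is the same.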

We argue that the RNPE is useful and necessary, and that it can provide a competitive advantage to those organizational and institutional entities whose missions compel them to value and improve knowledge production.

If, nowadays, more and more scientists are globally connected, evaluation needs to observe national and international collaboration to obtain richer and more complete information about research, addressing the very human collaboration processes that happen inside the networks. Evaluation ought to go beyond traditional measures because research networks, as platforms of convergence and synthesis of knowledge, are loci of innovation. They can be evaluated in a participatory way in order to learn how they can be better managed to facilitate connections, to identify bottlenecks that hinder the flow of information, and to join diverse expertise into an enabling environment for knowledge creation and researcher training. International collaboration also opens new prospects for diversity, creativity, and attention to different cultures, which together contribute to increased performance. It is not enough to meet a productive agenda without giving it the necessary consideration and evaluating its real impact.

Evaluation expresses a judgment about the worth, value, or effectiveness of a performance, process, act, or outcome. Participatory evaluation can take us beyond the context of national states, markets, and other competitive systems of global and local science production. It can enrich our understanding of how the flows of knowledge, the collaborative processes, go beyond the micro level of a simple or complex research network. We can learn from the similarities and differences, the convergences and divergences, that naturally appear at different layers (micro, meso, and macro levels). Participatory evaluation can act as the glue that holds all levels together, and it must be simple, accurate, and useful for different stakeholders. Researchers want less administrative work and more time for research, so it is comforting to think of evaluation as a process distributed among people who work together. Participatory evaluation processes can be a challenge, but they are certainly tools for collaborative learning.

Traditionally, institutions and individuals are the objects of evaluation; but if we take a participatory evaluation perspective in the new knowledge production context, evaluation calls the evaluated to a new role: active participants, actors of the networks. This means that they participate during all phases of evaluation and take on different roles within it. There is an evolution: traditionally, people are treated as objects of evaluation; evolution happens when, in a smarter approach, people become active players in the evaluation (its emancipatory component). Thus, they take ownership of evaluation codes, which is empowering, and they become involved in acting on their own evaluation to improve the collaborative processes of knowledge production. Hence, our proposal: the RNPE for excellence in knowledge production, without external dominance or colonization.

Traditionally, research evaluation is based on individualism and competition. We question this way of evaluating because it does not consider collaboration, teamwork, and the results of the co-creation of knowledge. Thus, we propose an evaluation of research at the level of collaborative networks, with a focus on the collaborative process. Our proposal is also innovative in considering that the best way of doing the evaluation is a participatory one. Evaluation is intended to be made with people: they are evaluated, but they are also participants, and they must have a sense of belonging. So, we agree with Susanne Weber:

In a heterarchic decision-making structure, democratized expertise is a given and the production of knowledge that becomes relevant for action has to work with network knowledge - if it does not, there are distinct risks of interest-guided dominance and colonization on the one hand, lack of acceptance and inner emigration by networks to partners on the other. Knowledge production in networks thus has to rely on the cooperative structures of “participatory research”. The efficiency of the solution of material problems depends on the participation of those concerned, on openness to criticism, on horizontal structures of interaction and on democratic procedures for implementation. (Weber, 2007, p. 51)

The emerging knowledge society will increasingly demand that science answer global and local problems. Scientific research cannot be a limited decision; it must be a collective competence, a driver for excellence oriented toward the future by constant reflexive evaluation.


 