
Bibliometrics and University Research Rankings Demystified for Librarians

Ruth A. Pagell

Abstract In the six years since I first researched university research rankings and bibliometrics, much of the world has suffered an economic downturn that has affected research funding. Open-access journals, institutional research repositories and self-published material on the web have opened up access to scholarly output and led to new terminology and output measurements. University rankings have expanded beyond the national end-user consumer market to become a research area of global interest for scientometric scholars. Librarians supporting scholarly research have an obligation to understand the background, metrics, sources and the rankings themselves in order to advise their researchers and their institutions.

This chapter updates an article in Taiwan's Evaluation in Higher Education journal (Pagell 2009) based on a presentation at Concert (Pagell 2008). It includes a brief history of scholarly output as a measure of academic achievement. It focuses on the intersection of bibliometrics and university rankings by updating both the literature and the rankings themselves. Librarians should find it relevant and understandable.

Keywords Bibliometrics · Universities · Rankings · Higher education · Research · International librarians


One result from the internationalization of the education industry is the globalization of university rankings, with a focus on research output. Governments, intergovernmental organizations and funding bodies have shown a growing concern for research accountability which has moved the university rankings from a national to a worldwide playing field.

This chapter examines three different research streams underlying today's university research rankings and demonstrates their impact on today's university rankings to help readers understand “… how an arguably innocuous consumer concept has been transformed into a policy instrument, with wide ranging, intentional and unintentional, consequences for higher education and society” (Hazelkorn 2007).

National, Regional and International Policy and Accountability

The increased ability to measure and analyze scholarly output has increased the involvement of governmental and funding agencies in the rankings arena. They seek methodologies that will measure universities' accountability to their funding sources and their constituencies. Government concern about the spending and impact of its research monies is not new. In 1965, U.S. President Johnson (1965) issued a policy statement to ensure that federal support “of research in colleges and universities contribute more to the long run strengthening of the universities and colleges so that these institutions can best serve the Nation in the years ahead.”

A growing number of countries have initiated research assessment exercises, either directly or through evaluation bodies, the benchmark being the United Kingdom Research Assessment Exercise (RAE), initiated in 1992, which used peer review. The newer initiatives by the Higher Education Funding Council for England incorporate bibliometric measures of research output and consider measurements of research impact (van Raan et al. 2007; Paul 2008; HEFCE 2013). An OCLC pilot study (Key Perspectives 2009) looks at five countries (the Netherlands, Ireland, the United Kingdom, Denmark and Australia) that have taken different approaches to assessment. Hou et al. (2012) examine the higher education excellence programs in four Asian countries: China, Korea, Japan and Taiwan.

Other active agencies are the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT 2013), the University Grants Committee of Hong Kong (2013) and the Australian Research Quality Framework (Excellence in Research 2013). Most of these incorporate some form of bibliometrics into their evaluation methodology. Italy introduced performance-related funding in 2009 (Abbott 2009), establishing the National Research Council (CNR 2013). In conjunction with the new Italian initiative is a series of articles examining many aspects of rankings and productivity (Abramo et al. 2011a, b, 2012, 2013a, b).

Europe has been active in tracking academic rankings at a multi-national level. A group of experienced rankers and ranking analysts, who first met in 2002, created the International Ranking Expert Group (IREG 2013), now called the International Observatory on Rankings and Excellence. In 2006, the Group met in Berlin and issued the Berlin Principles for ranking colleges and universities. The UNESCO-European Centre for Higher Education (UNESCO-CEPES) in Bucharest, Romania and the Institute for Higher Education Policy (IHEP), an independent group based in Washington D.C., co-hosted the meeting. The 16 Berlin Principles for rankings and league tables fall into four categories:

A. Purposes and Goals of Rankings

B. Designing and Weighting Indicators

C. Collection and Processing of Data

D. Presentation of Ranking Results

The guidelines aim to ensure that “those producing rankings and league tables hold themselves accountable for quality in their own data collection, methodology, and dissemination” (Bollag 2006; IHEP 2006). As a follow-up, IHEP (2007) issued an evaluation of various existing ranking systems.

The key findings of the proceedings of the three assessment conferences appear in the UNESCO-CEPES publication Higher Education in Europe (“From the Editors” 2002; Merisotis and Sadlak 2005; “Editorial” 2007).

The OECD Feasibility Study for the International Assessment of Higher Education Learning Outcomes (AHELO) gauges “whether an international assessment of higher education learning outcomes that would allow comparisons among HEIs across countries is scientifically and practically feasible.” Planning began in 2008, and the final results are presented in several publications (OECD 2013). Seventeen countries, representing five continents, are included in the study.

Incorporating both the Berlin Principles and the AHELO learning outcomes, the European Commission, Directorate General for Education and Culture, issued a tender to “look into the feasibility of making a multi-dimensional ranking of universities in Europe, and possibly the rest of the world too” (European Commission 2008). A new system, U-Multirank, scheduled for launch in 2014, is the outcome of the feasibility study (van Vught and Ziegele 2011).

At the university level, rankings have been viewed as a game (Dolan 1976; Meredith 2004; Henshaw 2006; Farrell and Van Der Werf 2007). University administrators play the game by making educational policy decisions based on what will improve their standings in those rankings that are important to them. Sixty-three percent of leaders and university administrators from 41 countries who responded to a 2006 survey conducted under the auspices of the OECD reported taking strategic, organizational, academic or managerial actions in response to their rankings. The results of this survey are available in a variety of publications (Hazelkorn 2008).
