Chapter 2
World Top University Rankings: From Distribution to Implications on National Knowledge Creation and Competitiveness

Thanh Quang Le and Kam Ki Tang

Introduction

Universities have been at the heart of human capital and knowledge creation since Plato founded the first one in 387 BCE in Athens. Nowadays, universities play a central role in driving economic development around the world. Alan Greenspan, the former Chairman of the US Federal Reserve System, speaking on the "structural change in the new economy" to the National Governors Association in 2000, emphasized that:

In a global environment in which prospects for economic growth now depend importantly on a country’s capacity to develop and apply new technologies, our universities are envied around the world. … If we are to remain preeminent in transforming knowledge into economic value, the US system of higher education must remain the world’s leader in generating scientific and technological breakthroughs and in preparing workers to meet the evolving demand for skilled labor. (Greenspan 2000)

This statement succinctly summarizes the mission of leading US universities in contributing to the economic development of the country. Universities are important initiating forces for national innovation capacity, as they seed new generations of applied research, scientific breakthroughs, and streams of new products. They also play a central and strategic role in educating and training scientists, engineers, teachers, researchers, entrepreneurs, and other skilled workers. The knowledge created by universities is diffused to society through an array of activities, ranging from educating undergraduates and training postgraduates to publishing research papers, engaging in public debate, providing consultancy services, and producing patents. In the modern era, economic growth is increasingly knowledge-intensive; as such, universities are at the heart of the economic vitality and competitiveness of their host nations. Michael Porter of Harvard Business School once commented: "Skilled human resources and knowledge resources are two of the most important factors for upgrading national competitive advantage" (Boulton 2009). Given the significant economic and social contributions of universities to their communities and the economy as a whole, there is a strong need to evaluate their knowledge creation capability.

Universities’ knowledge creation capacity is intrinsically dependent on their stocks of academic talent. Generally speaking, academic talent is a specific type of human capital alongside other types such as athletic, artistic, musical, or entrepreneurial talent (Li, Shankar, and Tang 2010, 2011). Academic talent and tacit knowledge are hard to measure directly, but they can be reflected in people’s academic performance. In that sense, a university’s research and publication performance is a natural proxy for the size and quality of its academic talent stock. Accordingly, universities’ knowledge pools can be quantified and compared with one another.

Over the last 10 years, the rise of the middle class in many populous emerging economies has created high demand for higher education. At the same time, the marketization of the education sector, especially the tertiary sector, in many advanced economies (most noticeably the United Kingdom, Canada, Australia, and the United States), together with the globalization of education services, has created strong demand for comparative information on the quality of the world’s universities. A large number of world university league tables have been created as a result.

As a consumer good, university league tables are very “successful”: they have attracted a great deal of attention from the general public, policymakers, and stakeholders, and have evolved into valuable new benchmarking tools. At the individual level, league tables act as a form of quality assurance, although the credibility of certain scoring methods has been a subject of criticism. At the institutional level, many universities have heavily advertised their ranking positions for marketing purposes. At the national level, occupying the top spots of university league tables signifies a country’s current innovative capacity and human capital stock. Furthermore, since academic talent is highly mobile both nationally and internationally, there is a conjecture that the higher a university’s position in the world rankings, the easier it is for it to attract good academics and, therefore, the higher the country’s future innovation capacity (Li et al. 2011).

Against this background, the main objective of this chapter is to examine the distribution of top universities across nations. It should be said straight away that although teaching is an integral part of universities’ mission, the focus of this study is on their research: whereas teaching is concerned mainly with education and training, it is research that more fully reflects the knowledge creation process of an institution. To that end, we make use of the Academic Ranking of World Universities compiled by Shanghai Jiaotong University over the 2003–2013 period. Instead of using the rankings themselves, we use the scores underlying the rankings to measure university performance. The advantage of the score measure over the ranking measure lies in its cardinal nature, which allows us to aggregate the scores of all of a country’s top universities into a national score. In doing so, we are able to investigate research performance differences across countries. In addition, unlike the ranking measure, the score measure lets us visualize how far apart countries are in their academic performance and whether they are catching up with each other over time.

In the data analysis, we first focus on national aggregate data. In particular, we look at the number of top universities and the aggregate academic score that each country obtains. The main purpose is to use the distribution of top universities as an indicator of the disparity in science, technology, and innovation capacities across countries (see Castellacci and Natera in this volume, Chapter 1). It is a country’s total stock of technological and scientific knowledge that determines its international competitiveness, regardless of population. Knowing, however, that aggregate measures tend to “benefit” larger countries, this chapter also examines countries’ performance in relation to their size. To remove the scale factor, the aggregate score measure is deflated by either population or GDP. This comparison of countries’ research output intensity is expected to give a better idea of how efficiently countries perform advanced academic research within their limited resources.
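To illustrate, the deflation step can be sketched as follows. The 2013 aggregate scores are taken from Table 2.3, while the population figures (in millions) are rough approximations used only for illustration:

```python
# Sketch of deflating aggregate ARWU scores by population to remove scale effects.
# Scores are the 2013 national aggregates from Table 2.3; populations (millions)
# are approximate 2013 figures, used here purely for illustration.
countries = {
    "US": (19378, 316.0),
    "UK": (4882, 64.1),
    "CHN": (1896, 1357.0),
}

# Research output intensity: aggregate score per million inhabitants
intensity = {c: round(score / pop, 1) for c, (score, pop) in countries.items()}
```

On these figures the United Kingdom outperforms the United States per capita even though its aggregate score is far smaller, which is precisely the scale effect that deflation is meant to remove.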

The next section studies research and innovations in universities and their characteristics, while the following section shifts the focus to the distribution of top universities. It explains how league tables of top universities are actually measured and why the distribution of top universities across countries matters. It also compares the distribution of world universities at the top end across countries and geographical regions. The last section concludes with a discussion on policy implications.

Research and Innovations in Universities

Universities as an Important Source of Technical Change

Throughout history, universities have been the birthplace of many significant innovations. For example, penicillin was developed by scientists at Oxford University in 1939; blood preservation technology by Columbia University in 1940; ultrasound by the University of Vienna in 1942; the liquid crystal display (LCD) by Kent State University in 1967; and the hepatitis B vaccine by the University of Pennsylvania in 1969. The number of US patents awarded to universities increased from 500 in 1982 to more than 3100 in 1998 (Lach and Schankerman 2008). Within the decade from 1990 to 2000, gross licensing revenues generated by university inventors increased more than seven-fold, from US$186 million to roughly US$1.3 billion. This rapid growth in university innovation is partly due to the passage of the Bayh-Dole Act in 1980, which gave US universities intellectual property rights over patentable inventions funded by federal government money (Lach and Schankerman 2008; Goldstein and Renault 2004). By 2000, a large number of US universities had set up their own technology licensing offices governed by policies on intellectual property protection and royalty-sharing arrangements for academic scientists (Lach and Schankerman 2008). By receiving funding from the government, universities have to some extent accepted the responsibility of contributing directly to national development instead of doing so merely indirectly through their traditional missions of teaching and research.

The knowledge created by universities spills over to industry through a number of channels. First, many students graduating from universities work in industry.1 Second, companies sometimes send their staff to universities to learn new techniques or knowledge. Third, companies and universities can engage in R&D collaboration (Jaffe, Trajtenberg, and Henderson 1993). Fourth, there is movement of scientists between the academic and industrial sectors (Zucker, Darby, and Torero 2002). Fifth, university academics publish their findings in technical and scientific papers (Jaffe et al. 1993). Sixth, universities can license their own inventions to private firms (Jensen and Thursby 2001; Thursby and Kemp 2002). This knowledge flow from universities to the private sector presumably stimulates industrial R&D, which in turn increases productivity and output (Fagerberg 1994).

Characteristics of Academic Research

The conventional view places basic research at the center of academic research. According to that view, academia conducts research that yields fundamental insights (Sauermann and Stephan 2010). As academic research often aims at recognition, through publications in scientific journals or presentations at conferences, rather than at commercial value, the knowledge produced in this type of research is more like a public good. The timely and widespread disclosure of research results is the key factor in accelerating the accumulation of knowledge in society over time (Sorenson and Fleming 2004). The accumulated stock of knowledge then becomes an input for the production of many goods and services.

This view is deemed too narrow to capture the characteristics of contemporary academic science.2 As discussed above, many academic institutions have had their research outputs commercialized. However, it is still correct to say that a substantial part of university research is a public good. Publications, as evidence of peer recognition from the scientific community, remain the main incentive sustaining the continuous flow of academic research (Dasgupta and David 1994; Stephan 1996).

Compared to their industrial counterparts, academic researchers have more freedom in choosing research topics and methodologies, and in deciding when to “pause” and disseminate their results. There have been claims that scientists working at universities are underpaid.3 But this may actually create an unequivocal advantage for academic science, as research can be conducted at relatively lower cost (Sauermann and Stephan 2010); that is why universities are ideal places for undertaking explorative research. On the other hand, academic researchers may pursue projects that they find interesting or prestige-enhancing but that generate little value to society.

Traditionally, the state government is the main funding source for university research. Government funding comes in three forms: direct funding to finance infrastructure and staff salaries, research and development grants, and loans and tuition subsidies to students. As these funds are discretionary items in the state budget, they are highly susceptible to budgetary cuts (Zusman 2005). A recent survey by Moffitt (2014) reveals that, owing to the recent global recession (2008–2012), deep budget cuts were common worldwide. This creates concerns about the flow-on effect on the innovation capacity of the private sector, given the close connection between universities and industry.

History has shown that many important innovations, ranging from medicines to information technology products, have their roots in publicly funded projects conducted at universities. According to Nelson (1959) and Arrow (1962), industry has less incentive to invest in basic research because firms cannot fully appropriate the economic value associated with research ideas, owing to knowledge spillovers and imperfect intellectual property rights. This leaves explorative research mainly to academic institutions. Nowadays, besides the government, universities also look to the private sector for funding. As these funding sources often come with particular research agendas, academic researchers’ freedom could be constrained. In the meantime, by controlling the research funding, firms are able to direct academic researchers toward projects that help maximize the firms’ economic payoffs.

A number of strategies have been employed by universities in response to the recent cuts in state funding. Zusman (2005) notes that one strategy is to make high-return, high-demand professional programs (mainly in business, law, and medicine) self-funded or to charge higher fees for them. Some institutions introduce self-supporting part-time programs and contract education programs with specific industries. A number of public universities have become privatized, while others have found support in the form of private donations and external grants. One indication of the privatization of public higher education is the number of online courses produced by private and for-profit companies (Robertson and Leumer 2014). Facing shrinking research grants, several universities opt to form for-profit collaborations with industry and engage in commercial technology transfer (Zusman 2005).

International Comparison of Academic Research Output

Academia is often perceived as a homogeneous sector. This view, however, is far from the reality. In any given country, there are top-tier research institutions as well as lower-tier ones, and there is also considerable heterogeneity among researchers within an institution in terms of research output and impact.

As mentioned above, academic scientists have to rely on funding from the government or industry. In deciding the allocation of research grants, funding agencies typically consider the researchers’ track record as well as the research environment of the academic institutions they work for. On the one hand, this intensifies the competition for funding, because the institutions that manage to stand out from the crowd tend to reap most of the fruits. On the other hand, it creates an urgent need for a way to measure the research quality of academia. To this end, there has been strong interest in ranking universities worldwide based on their achievements in both teaching and research, known as university rankings or university league tables. This benchmarking exercise is done at the institutional level as well as at the national level. At the institutional level, league tables, given their emphasis on research outputs, obviously provide a measure of a university’s research output and impact. At the national level, league tables also provide an informal measure of a country’s research capacity and its competitiveness in a knowledge-based world economy (Aghion et al. 2007).

Among all the league tables, the most influential and widely cited ones are the Academic Ranking of World Universities (ARWU) by Shanghai Jiaotong University, the THE World University Rankings published by Times Higher Education, and the QS World University Rankings published by Quacquarelli Symonds. These rating agencies nowadays provide university rankings as well as department rankings for specific subjects. Historically, the THE Rankings and QS Rankings had a period of joint publication from 2004 to 2009. In 2009, THE split from QS to form a new partnership with Thomson Reuters. While QS continues to use their preexisting methodology, THE has created and adopted a new method that takes advantage of the data on scientific paper citations supplied by its partner Thomson Reuters. Table 2.1 lists the major features of these three rankings.

Table 2.1 Some major world university rankings.

Source: Academic Ranking of World Universities (2014), QS World University Rankings (2014), and THE World University Rankings (2014).

Indicators and their weightings:
ARWU Rankings: alumni winning Nobel prizes and Fields medals (10%); staff winning Nobel prizes and Fields medals (20%); highly cited researchers in 21 broad subject categories (20%); articles published in Nature and Science (20%); articles indexed in the Science Citation Index and Social Science Citation Index (20%); per capita academic performance (10%).
QS Rankings: academic peer review on research output (40%); recruiter review on quality of graduates (10%); faculty-student ratio (20%); international orientation (10%); scientific paper citations per faculty (20%).
THE Rankings: teaching, the learning environment (30%); research volume, income, and reputation (30%); scientific paper citations, research influence (30%); international outlook in staff, students, and research (7.5%); industry income, innovation (2.5%).
Remarks: (i) ARWU focuses entirely on research. (ii) Both QS and THE take account of teaching, infrastructure, and international diversity. (iii) QS puts a significant weight on two subjective measures: academic peer review and employer review.

Period covered:
ARWU Rankings: 2003–2013. QS Rankings: 2004–2014. THE Rankings: 2004–2009; 2010–2014.
Remark: QS and THE had a joint publication in 2004–2009 before their split in 2009.

Major rankings and number of top universities released:
ARWU Rankings: aggregate rankings (Top 500); subject rankings (Top 200); field rankings (Top 200).
QS Rankings: aggregate rankings (Top 800); subject rankings (Top 200); faculty rankings (Top 400); Asia’s rankings (Top 300); Latin America’s rankings (Top 250); BRICS’s rankings (Top 100); Under 50’s rankings (Top 50).
THE Rankings: aggregate rankings (Top 400); subject rankings (Top 100).
Remarks: (i) Besides aggregate rankings and subject rankings, ARWU and QS additionally have field/faculty rankings. (ii) QS also has rankings for different geographical regions.

Amongst these three league tables, the ARWU is commonly considered the preferred indicator due to its high objectivity and central focus on research (Marginson 2007; Hazelkorn 2007). First published in 2003, the ARWU was originally developed to benchmark the research performance of Chinese universities but was then extended to cover universities in other countries. Since the purpose of this study is to compare research capacity across countries, we choose the ARWU over the other two league tables for our analysis.

The ARWU measures universities’ research strength using six indicators, namely: (1) the number of alumni who have received Nobel prizes in science and economics or Fields medals in mathematics,4 (2) the number of staff who have received Nobel prizes and Fields medals, (3) the number of highly cited researchers in 21 broad subject categories selected by Thomson Scientific, (4) the number of papers published in Nature and Science,5 (5) the number of papers indexed in the Science Citation Index (Expanded) and the Social Science Citation Index, and (6) the per capita academic performance on these indicators. Each year, more than 1000 universities6 are surveyed, but only the rankings and scores of the top 500 (denoted ARWU500 hereafter) are reported. The scores of the top 500 universities are normalized so that the top university scores 100, with every other institution’s score calculated as a percentage of this top score. Harvard University maintained its leading position throughout the whole sample period (2003–2013), suggesting that it has more research talent than any other university.
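The normalization just described can be sketched as follows, using hypothetical raw indicator values (the institution names and numbers are purely illustrative, not ARWU data):

```python
# Sketch of the ARWU-style normalization: the best-performing institution is
# assigned a score of 100, and every other institution is expressed as a
# percentage of that benchmark. Raw values here are hypothetical.
raw = {"Univ A": 820.0, "Univ B": 410.0, "Univ C": 205.0}

top = max(raw.values())
normalized = {u: round(100 * v / top, 1) for u, v in raw.items()}
# Univ A -> 100.0, Univ B -> 50.0, Univ C -> 25.0
```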

Comparison Across All Fields

Over the period 2003–2013, the ARWU500 universities spanned 45 countries. Universities in the OECD countries accounted for the lion’s share, as only 71 ARWU500 universities were from non-OECD countries (of which 29 were from China and five from Hong Kong). The biggest share went to Europe. America came second, followed by the Asia Pacific region. The Middle East and Africa came last with a negligible share. Throughout the period, a large number of countries did not have a single university in the ARWU500 ranks.

Figure 2.1 shows the regional distribution of the ARWU500 for the two endpoints of the data period. In 2003, 219 universities (43.89%) were located in 23 European countries, 193 (38.68%) in six American countries (160 of them in the United States alone), 83 (16.63%) in nine Asia Pacific countries, and only four (0.80%) in a single Middle East and Africa country (South Africa). In 2013, the shares of the Asia Pacific and the Middle East and Africa had improved slightly to 19.76% (with 99 universities in 10 countries) and 1.80% (with nine universities in four countries) respectively. The expansion of these two regions implies the contraction of the others: the number of European ARWU500 universities dropped to 213 (42.51%), while the American ones shrank to 180 (35.93%).


Figure 2.1 Regional distribution of ARWU500 universities.

Source: Data from Academic Ranking of World Universities (2014).

Figure 2.2 depicts the performance of the top countries in Europe, America, and Asia Pacific over the sample period. The United States, the top country in America and in the world, dominated the whole decade with around 31% of the ARWU500 universities on average, although its count declined slightly in recent years, from a peak of 167 in 2004 to 146 in 2013. By contrast, China, one of the top performers in the Asia Pacific region, made impressive progress: its number of ARWU500 universities increased nearly three-fold, from 10 in 2003 to 28 in 2012 and 27 in 2013, allowing it to enter the top 10 countries from 2010 onward. Australia, another top performer in the Asia Pacific region, improved gradually over the years, from 12 ARWU500 universities to 18 in 2013. The United Kingdom, despite a small decrease in its number of ARWU500 universities over the years, held steady in second place overall as the European champion. Most of the remaining members of the top 10 are also from Europe, including Germany, France, Italy, and the Netherlands, leaving the other two regions with one additional top 10 country each: Canada (America) and Japan (Asia Pacific).


Figure 2.2 Number of ARWU500 universities by the four top countries.

Source: Data from Academic Ranking of World Universities (2014).

Although the total number of US universities in the ARWU rankings fell over the 2003–2013 period, US universities consistently dominated the top spots. As shown in Table 2.2, most of the top 10 positions are occupied by US universities. The only non-US universities that made it into the top 10 are the Oxbridge duo from the United Kingdom.

Table 2.2 Top 10 university rankings.

Source: Elaborations on Academic Ranking of World Universities (2014).

Ranking 2003 2008 2013
 1 Harvard University Harvard University Harvard University
 2 Stanford University Stanford University Stanford University
 3 California Institute of Technology University of California, Berkeley University of California, Berkeley
 4 University of California, Berkeley University of Cambridge Massachusetts Institute of Technology
 5 University of Cambridge Massachusetts Institute of Technology University of Cambridge
 6 Massachusetts Institute of Technology California Institute of Technology California Institute of Technology
 7 Princeton University Columbia University Princeton University
 8 Yale University Princeton University Columbia University
 9 University of Oxford University of Chicago University of Chicago
10 Columbia University University of Oxford University of Oxford

Among the six indicators used in the ARWU rankings, the last one captures per capita academic performance across the other five; its main role is to control for the size of universities. While this indicator is useful for comparisons across institutions, it is less relevant for comparisons across countries. Hence, we exclude it from the computation of the total research score for an individual university:

TotalScore = Staff + Alumni + HC + NS + PUB

To keep it simple, the research score of a university is the unweighted sum of the five research dimensions, in contrast to the ARWU’s original weighted-sum method. As no arbitrary weight is given to any particular research measure, all indicators are considered equally important. In this formulation, TotalScore is the university’s aggregate score, Staff is the score on staff being Nobel laureates or Fields medalists, Alumni is the score on alumni being Nobel laureates or Fields medalists, HC is the score on highly cited researchers, NS is the score on Nature and Science publications, and PUB is the score on indexed publications. In the same manner, the national measure is the total score of a country’s ARWU500 universities in that year.
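A minimal sketch of this unweighted scoring scheme, using hypothetical indicator scores purely for illustration:

```python
# Sketch of the unweighted total score and national aggregation described above.
# Indicator scores (Staff, Alumni, HC, NS, PUB) are hypothetical.
universities = [
    {"country": "US", "Staff": 100, "Alumni": 100, "HC": 100, "NS": 100, "PUB": 100},
    {"country": "US", "Staff": 40, "Alumni": 30, "HC": 70, "NS": 65, "PUB": 72},
    {"country": "UK", "Staff": 90, "Alumni": 80, "HC": 60, "NS": 55, "PUB": 67},
]

INDICATORS = ["Staff", "Alumni", "HC", "NS", "PUB"]

def total_score(u):
    # TotalScore: unweighted sum of the five research indicators
    return sum(u[k] for k in INDICATORS)

# National measure: sum of TotalScore over a country's ARWU500 universities
national = {}
for u in universities:
    national[u["country"]] = national.get(u["country"], 0) + total_score(u)
```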

Figure 2.3 presents the distribution of the new national scores across regions. Despite having the largest number of ARWU500 universities over the period, Europe only came second in terms of aggregate score (39.69% in 2003 and 34.80% in 2013). America took first place, with a share of 47.06% in 2003 and 52.64% in 2013, reflecting the fact that many of the top-scoring universities were located in the United States. Asia Pacific had the third largest share (12.83% in both 2003 and 2013), leaving the Middle East and Africa in last place with a minimal share in either year.


Figure 2.3 Regional distribution of research performance scores 2003 and 2013.

Source: Data from Academic Ranking of World Universities (2014).

Table 2.3 lists the top 10 countries in our sample based on the aggregate scores over the period 2003–2013. With a large number of ARWU500 universities at the top end of the league table, the United States unsurprisingly secured the number one position every year. The United Kingdom occupied second place, while third place belonged to Germany. While the top three appears quite stable over time, fourth and fifth places alternated between Japan and Canada. France seemed well secured in its sixth position. The last four places in the top 10 remained in competition among Italy, Australia, the Netherlands, Sweden, and China. The emergence of China over the last decade is stunning, with its national score increasing three-fold from 464 to 1,432 over the sample period. In the last four years (2010–2013), China overtook many OECD countries and moved into the top 10 group.

Table 2.3 Top 10 country rankings in terms of aggregate scores.

Source: Elaborations on Academic Ranking of World Universities (2014).

2003 2005 2008 2011 2013
Ranking Country Score Country Score Country Score Country Score Country Score
 1 US 15,095 US 17,417 US 16,916 US 16,627 US 19,378
 2 UK 3,629 UK 4,229 UK 4,290 UK 4,124 UK 4,882
 3 DEU 2,680 DEU 2,988 DEU 2,973 DEU 2,914 DEU 3,548
 4 JPN 2,190 JPN 2,278 JPN 2,121 CAN 1,849 CAN 2,293
 5 CAN 1,715 CAN 1,878 CAN 1,827 JPN 1,807 JPN 2,075
 6 FRA 1,277 FRA 1,637 FRA 1,705 FRA 1,732 FRA 2,017
 7 ITA 1,153 ITA 1,257 ITA 1,226 AUS 1,310 CHN 1,896
 8 NLD 932 NLD 992 AUS 1,089 ITA 1,243 AUS 1,757
 9 AUS 850 AUS 971 NLD 1,018 CHN 1,184 ITA 1,499
10 SWE 815 SWE 957 SWE 912 NLD 1,094 NLD 1,172

Note: US – United States, UK – United Kingdom, DEU – Germany, JPN – Japan, CAN – Canada, FRA – France, ITA – Italy, NLD – Netherlands, AUS – Australia, SWE – Sweden, CHN – China.

It is worth pointing out that there is a huge gap between the United States and the rest of the top 10 group. Over the years, the United States scored more than four times as high as the second-placed United Kingdom, and as much as 15 times the tenth-placed country. Although China has joined the top 10, its score is still less than one-tenth of that of the United States, and it lags behind all G7 countries except Italy.

As much as 30% of the original ARWU score is allocated to association with Nobel laureates and Fields medalists.7 The inclusion of these two indicators in measuring research quality is controversial in the sense that it benefits long-established universities over their younger counterparts. This is evident in the fact that these prizes are most often awarded to scholars at US and, to some extent, Western European universities (Marginson 2013). Furthermore, to the extent that Nobel laureates and Fields medalists are likely to have some of the strongest publication records, which are already captured by three other ARWU indicators (HC, NS, and PUB), the scores associated with Nobel laureates and Fields medalists could lead to double counting. One might counter that the other three indicators do not sufficiently account for the impact differentials of publications by different researchers, and that such differentials are captured by Nobel prize and Fields medal achievements. Even so, given that there are numerous international awards, some equally prestigious (e.g., the Wolf Prize of the Wolf Foundation of Israel, or the Leroy P. Steele Prize of the American Mathematical Society in mathematics) and some covering areas outside, but no less important than, the pure sciences (e.g., the Wolf Prize in Agriculture, or the American Academy of Arts and Letters Gold Medal in Architecture), it is debatable why only Nobel prizes and Fields medals are counted. Put the other way, the awarding of Nobel prizes and Fields medals, as compared to the publication of journal articles, is by nature an extremely rare event; as such, the associated score could carry much greater “noise” as a measure of human capital.

To see whether the results in Table 2.3 are sensitive to such potential noise, we exclude these two indicators from the calculation of national scores. The unweighted aggregate is then computed as follows:

TotalScore = HC + NS + PUB

From Table 2.4, it can be seen that there is no dramatic change in the top 10 countries. The only difference is that China would have entered the top 10 three years earlier, in 2007 instead of 2010, and would have been ranked fifth in 2013 instead of seventh, overtaking Japan and France. In terms of score, the gap between the United States and the other countries remains as large as before. This means that the country results are very robust to the exclusion of the Nobel prize and Fields medal factor.

Table 2.4 Top 10 country rankings in terms of aggregate scores (without Nobel prizes and Fields medals).

Source: Elaborations on Academic Ranking of World Universities (2014).

2003 2005 2008 2011 2013
Ranking Country Score Country Score Country Score Country Score Country Score
 1 US 13,497 US 13,916 US 13,462 US 13,147 US 16,094
 2 UK 3,256 UK 3,263 UK 3,370 UK 3,180 UK 4,129
 3 DEU 2,388 DEU 2,302 DEU 2,310 DEU 2,284 DEU 3,004
 4 JPN 2,107 JPN 2,071 JPN 1,929 CAN 1,627 CAN 2,071
 5 CAN 1,612 CAN 1,643 CAN 1,647 JPN 1,546 CHN 1,854
 6 FRA 1,080 FRA 1,042 FRA 1,168 FRA 1,181 JPN 1,795
 7 ITA 1,052 ITA 1,019 ITA 986 CHN 1,173 AUS 1,604
 8 NLD 851 AUS 863 AUS 935 AUS 1,154 FRA 1,483
 9 AUS 820 NLD 839 NLD 874 ITA 1,014 ITA 1,239
10 SWE 693 SWE 702 CHN 865 NLD 938 NLD 1,227

Note: US – United States, UK – United Kingdom, DEU – Germany, JPN – Japan, CAN – Canada, FRA – France, ITA – Italy, NLD – Netherlands, AUS – Australia, SWE – Sweden, CHN – China.

Comparison of Top-Quality Science and Engineering Research Output

As industrial production is more closely related to frontier research in science and engineering than in many other fields, we also consider exclusively an indicator of publications in Nature and Science, the leading journals in these fields. Figure 2.4 depicts the regional results for 2003 and 2013. The overall pattern is similar to Figure 2.3. However, the shares of America and Europe now fell slightly between 2003 and 2013: from 49.76% to 47.30% for America and from 39.14% to 38.52% for Europe. By contrast, the shares of Asia Pacific and of the Middle East and Africa increased slightly over the decade, from 10.67% to 13.48% and from 0.43% to 0.71%, respectively (see Zhou and Li in this volume, Chapter 5).


Figure 2.4 Regional distribution of top-quality science and engineering research scores.

Source: Data from Academic Ranking of World Universities (2014).

Table 2.5 shows the top 10 country rankings over the period based entirely on publications in Nature and Science. Interestingly, the results are very similar to Table 2.3. The US was again far ahead of all other countries. China was not able to break into the top 10 group until 2012, and its ranking lagged behind the G7 countries (except Italy) and Australia. This means that the ARWU500 league table places a heavy weight on performance in science and engineering, and hence is a useful indicator of a country’s research capability in these fields.

Table 2.5 Top 10 country rankings in top-quality science and engineering research performance.

Source: Elaborations on Academic Ranking of World Universities (2014).

2003 2005 2008 2011 2013
Ranking Country Score Country Score Country Score Country Score Country Score
 1 US 3,497 US 3,508 US 3,232 US 3,308 US 3,220
 2 UK 851 UK 842 UK 848 UK 793 UK 769
 3 DEU 546 DEU 521 DEU 510 DEU 533 DEU 531
 4 JPN 446 JPN 436 CAN 401 JPN 362 JPN 335
 5 CAN 358 CAN 337 JPN 389 CAN 338 CAN 334
 6 FRA 257 FRA 264 FRA 304 FRA 331 FRA 281
 7 NLD 215 ITA 201 AUS 201 AUS 235 AUS 242
 8 ITA 197 NLD 191 SWZ 175 ITA 188 CHN 221
 9 SWZ 196 SWZ 188 NLD 171 NLD 184 SWZ 192
10 AUS 181 AUS 183 ITA 166 SWZ 180 NLD 185

Note: US – United States, UK – United Kingdom, DEU – Germany, JPN – Japan, CAN – Canada, FRA – France, ITA – Italy, NLD – Netherlands, AUS – Australia, CHN – China, SWZ – Switzerland.

Accounting for the Country Size

So far, the comparison of national research capability has been conducted without controlling for the size of the countries. An important argument is that the size of a country (either in terms of its market scale or its population) matters for how universities from that country perform on the international front. Without a doubt, richer countries are more likely to devote a larger amount of resources to tertiary education. Similarly, more populous countries tend to benefit more from their larger stock of human capital. To explore these aspects, this section considers the intensity of research output: the total national aggregate scores calculated previously are deflated by either total population or GDP to give new intensity measures.
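The deflation step is simple arithmetic and can be sketched as follows, assuming per-country aggregate scores together with population (in millions) and GDP (in US$ billions). All figures below are illustrative placeholders, not actual data:

```python
# Sketch of the two intensity measures: the national aggregate research score
# deflated by population and by GDP. Figures are illustrative placeholders.
countries = {
    # country: (aggregate_score, population_millions, gdp_usd_billions)
    "US":  (13497.0, 290.0, 11500.0),
    "SWZ": (  700.0,   7.4,   350.0),
}

def intensity(data):
    """Return score per million people and score per US$ billion of GDP."""
    return {
        c: {"per_capita": score / pop, "per_gdp": score / gdp}
        for c, (score, pop, gdp) in data.items()
    }

for c, m in intensity(countries).items():
    print(c, round(m["per_capita"], 1), round(m["per_gdp"], 2))
```

Even with these rough placeholder numbers, a small country like Switzerland overtakes the United States on both intensity measures, which is the pattern Tables 2.6 and 2.7 report.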

Table 2.6 lists the top 10 countries based on the aggregate research scores deflated by population. Surprisingly, a completely different picture emerges. The United States and other G7 countries no longer dominated the topmost places as in the previous section. Instead, the table is now mainly occupied by a group of smaller countries from Northern Europe. The steadiest performers are Switzerland and Sweden, which held the top two spots for the whole period of study. The next two positions were mainly contested by Denmark and Israel. Despite being overtaken by Finland in 2010, the United Kingdom largely secured its fifth place. In the bottom five, Norway and the Netherlands ranked ahead of Finland, Australia, Canada, and the United States. During 2003–2009, the United States mostly held a modest ninth place. However, with the progress of Australia during 2010–2012, the United States was driven off the list. China fared even worse, as it never made it onto the list over the whole sample period.

Table 2.6 Top 10 country rankings in terms of aggregate scores deflated by population.

Source: Elaborations on Academic Ranking of World Universities (2014).

Ranking 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012
 1 SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ
 2 SWE SWE SWE SWE SWE SWE SWE SWE SWE SWE
 3 DNK DNK ISR ISR ISR DNK DNK ISR ISR DNK
 4 UK ISR DNK DNK DNK ISR ISR DNK DNK ISR
 5 NLD UK UK UK UK UK UK FLD UK UK
 6 ISR NLD NOR NLD NOR NOR NOR UK NLD NLD
 7 FLD NOR NLD NOR NLD FLD NLD NOR NOR NOR
 8 CAN FLD US US FLD NLD FLD NLD FLD AUS
 9 US US CAN FLD US US US CAN AUS FLD
10 NOR CAN FLD CAN CAN CAN CAN AUS CAN CAN

Note: SWZ – Switzerland, SWE – Sweden, DNK – Denmark, UK – United Kingdom, NLD – Netherlands, ISR – Israel, FLD – Finland, CAN – Canada, US – United States, NOR – Norway, AUS – Australia.

Table 2.7 presents the top 10 country rankings in terms of research scores deflated by GDP instead of population. Again, the topmost spots are mainly occupied by countries located in Northern Europe. The United States did not even make it onto this table over the sample period, let alone China. Israel topped the list for the whole period, followed by Sweden and then Switzerland. The United Kingdom sat comfortably in fourth place. Either Denmark or New Zealand rounded out the top five. The other top 10 countries include Finland, the Netherlands, Hong Kong, Australia, and Canada.

Table 2.7 Top 10 country rankings in terms of aggregate scores deflated by GDP.

Source: Elaborations on Academic Ranking of World Universities (2014).

Ranking 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012
 1 ISR ISR ISR ISR ISR ISR ISR ISR ISR ISR
 2 SWE SWE SWE SWE SWE SWE SWE SWE SWE SWE
 3 SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ SWZ
 4 UK UK UK UK UK UK UK FLD UK UK
 5 DNK DNK DNK NZ NZ NZ DNK UK NZ NZ
 6 FLD CAN CAN DNK CAN FLD FLD NZ DNK DNK
 7 CAN HK HK CAN NLD DNK NLD DNK AUS AUS
 8 HK FLD NLD NLD DNK CAN CAN CAN NLD NLD
 9 NLD NLD FLD HK AUS NLD NZ AUS FLD FLD
10 AUS AUS AUS FLD FLD AUS AUS NLD CAN CAN

Note: ISR – Israel, SWE – Sweden, SWZ – Switzerland, UK – United Kingdom, DNK – Denmark, FLD – Finland, CAN – Canada, HK – Hong Kong, NLD – Netherlands, AUS – Australia, NZ – New Zealand.

Discussions and Conclusions

This chapter looks at the tally of top universities across countries and what that means for international competitiveness. Generally speaking, universities are institutions designed to create and transfer general or specific knowledge. They are also clusters of human capital with technological as well as non-technological knowledge. Therefore, to a certain extent, top university rankings reveal countries’ capability for knowledge creation.

There has been a proliferation of academic rankings of world universities over the last 10 years. These rankings have cemented the notion that, irrespective of their type, universities can be scored, sorted, and thus compared in a single league table (Marginson and van der Wende 2007). Although university rankings have not been in place for long, their importance has gone beyond the interests of students and university administrators and caught the attention of policymakers. More importantly, top university rankings are considered to carry economic implications in both the short and long term. In the long term, having top universities will secure global “soft power” (in the sense that “knowledge is power”). The attraction and retention of international talent by top universities subsequently feeds back into their host countries’ knowledge systems by boosting innovation capacity. In the short term, top universities can command a better position in the global education market, which accounts for a significant part of service exports for countries like Australia, Canada, and the United Kingdom, and is increasingly important for the United States and European countries. By generating billions of dollars in revenue, academia has gradually become a major contributor to the GDP growth of these higher education exporting countries.

Making use of the published ARWU500 data set, this chapter has taken a bold step in quantifying university output through which inference on national innovative capacity could be made. However, instead of using the rankings provided, it utilizes research scores behind those rankings to construct different aggregate research measures for each country, which are then used to compare across countries and/or geographical regions.

Aggregate data on top universities reveal that US universities dominated all tables throughout the 2003–2013 period. More importantly, they occupied almost all of the top ranking spots. For example, in 2013, the United States accounted for 35 of the top 50 universities and eight of the top 10, followed by the United Kingdom with five of the top 50 and two of the top 10. However, the United States had lost some ground in the middle and lower end of the league table over the years. This suggests that universities in other countries, especially other OECD countries, were catching up with their US counterparts (Li et al. 2011). In proportional terms, China was the best performer: it nearly tripled its number of ARWU500 universities within a decade, and those universities already on the list in earlier years had, on average, also improved their rankings over time (although none of them managed to break into the top 100). This was probably due to strong support from the Chinese government. From 1996 to 2000, it was estimated that the Chinese government distributed US$2.2 billion under Project 211 to about 6% of the country’s universities in order to raise their standards (Li 2004). This is to say that establishing a top university requires substantial resources and time, because human capital may take several decades to accumulate. However, with the academic labor market being highly globalized, it is possible for countries to build (or, in effect, buy) up their talent stock quickly. The results on countries’ academic research performance are robust to the exclusion of scores for Nobel prizes and Fields medals from the aggregate research score calculation. The United States also took the lead in top research in science and engineering.

Although the United States monopolized the world’s top universities and China consistently improved its standing relative to other countries, their performance is less impressive when it comes to research output intensity. The dominance of the United States is strongly associated with its large population and economic size. After controlling for country size, whether by population or by GDP, the United States was found to lag behind many smaller countries such as Switzerland, Israel, and Australia. And China did not make it into the top 10 even once over the years studied.

Clearly, the results obtained differ greatly depending on the benchmark used. If one wishes to compare countries’ knowledge creation capability, then one should focus on their aggregate research scores, because embedded in that measure is the non-rivalrous characteristic of academic research. On this measure, the United States clearly dominates the whole world in terms of innovative capacity and national competitiveness, while China is quickly rising as a future challenger to US hegemony. However, if the main objective is to examine how efficiently a country conducts research given its limited resources, then research output intensity is the preferred measure. In this regard, small countries like Switzerland and Israel are truly the frontrunners.

The results of any league table are obviously specific to the criteria used to rank the subjects, and different ranking systems have different assessment criteria and definitions of what constitutes university quality (Van Dyke 2005; Usher and Savino 2006). Another problem with rankings is that most systems evaluate a university as one unit, that is, different quality and performance characteristics are combined into a composite index (Marginson 2007, 2012). Similarly, although the ARWU rankings are considered the prime benchmark in their field, they are not without limitations and competing alternatives. For example, this league table has commonly been criticized for focusing too much on English-language research output. Therefore, one should be cautious not to read too much into any snapshot results. Focusing on trend results is recommended, because systematic biases due to specific benchmarking criteria are likely to remain unchanged over time. Despite these limitations, university league tables provide an informal measure of a country’s ability to compete in a knowledge-based world economy, given their emphasis on research outputs (Aghion et al. 2007).

As mentioned above, universities contribute to economic development not only through their enhancement of national knowledge creation capacity but also through revenue from providing education services to non-nationals. Regarding this latter aspect, in our modern era of globalization, universities are mimicking multinational corporations in transforming themselves into transnational organizations. Many universities have started to offer programs to learners located outside the domicile country, making education services globally tradable (Gribble and Ziguras 2003). These tradable programs are referred to as transnational education (UNESCO/Council of Europe 2002). Gribble and Ziguras observe that in the last decade the magnitude of transnational education has grown dramatically, especially in Asia, where British, American, and Australian institutions are offering education services. One common way of offering transnational education services is to set up a satellite campus abroad. For example, the US Carnegie Mellon University has established a campus in Australia, and the Australian Royal Melbourne Institute of Technology has been operating a campus in Vietnam for more than a decade. The United States has the highest number of overseas campuses, with 78 in 2011 (Obst, Kuder, and Banks 2011). Another way of exporting education services is via franchising, twinning, or validating degree programs of local partner organizations such as universities, colleges, and professional associations (Clark 2012). For example, according to the UK Council for International Student Affairs (2014), the number of students studying wholly overseas for a UK qualification increased from 570,665 in 2011/12 to 598,930 in 2012/13. Of the 2012/13 figure, 2% studied at an overseas campus, 17% through distance, flexible, and distributed learning, and 50% at an overseas partner organization.
In the United States, a number of universities, including top-tier institutions like Harvard University and Stanford University, have started to offer online introductory courses for free, expecting students to enroll in further courses on campus at a later stage in order to complete a degree (Lewin 2013). This may shape a new direction of development for universities in the era of globalization.

Besides offering transnational education services globally, universities have long been recruiting staff and students from different parts of the world. More than ever, traditional state boundaries are fading in the university context. Looking at how universities are internationalizing, and at the impact of this process on countries’ advancement in science and technology, will be an exciting topic on our future research agenda.

References

  1. Academic Ranking of World Universities. 2014. http://www.shanghairanking.com/ (accessed December 18, 2014).
  2. Aghion, Philippe, Mathias Dewatripont, Caroline Hoxby, Andreu Mas-Colell, and Andre Sapir. 2007. Why Reform Europe’s Universities? Brussels: Bruegel.
  3. Aghion, Philippe, Mathias Dewatripont, and Jeremy Stein. 2008. “Academic Freedom, Private-Sector Focus, and the Process of Innovation.” The Rand Journal of Economics 39(3): 617–635.
  4. Arrow, Kenneth. 1962. “Economics of Welfare and the Allocation of Resources for Invention.” In The Rate and Direction of Inventive Activity, ed. Richard Nelson, 609–626. Princeton, NJ: Princeton University Press.
  5. Boulton, Geoffrey. 2009. “Global: What Are Universities for?” University World News 69.
  6. Clark, Nick. 2012. “Understanding Transnational Education, Its Growth and Implications.” World Education News and Reviews. http://wenr.wes.org/2012/08/wenr-august-2012-understanding-transnational-education-its-growth-and-implications/ (accessed December 18, 2014).
  7. Dasgupta, Partha, and Paul David. 1994. “Toward a New Economics of Science.” Research Policy 23(5): 487–521.
  8. Fagerberg, Jan. 1994. “Technology and International Differences in Growth Rates.” Journal of Economic Literature 32(3): 1167–1175.
  9. Goldstein, Harvey, and Catherine Renault. 2004. “Contributions of Universities to Regional Economic Development: A Quasi-Experimental Approach.” Regional Studies 38(7): 733–746.
  10. Greenspan, Alan. 2000. “Structural Change in the New Economy.” Speech to National Governors Association 92nd Annual Meeting. http://www.federalreserve.gov/boarddocs/speeches/2000/20000711.htm (accessed December 18, 2014).
  11. Gribble, Kate, and Christopher Ziguras. 2003. “Learning to Teach Offshore: Pre-Departure Training for Lecturers in Transnational Programs.” Higher Education Research and Development 22(2): 205–216.
  12. Hazelkorn, Ellen. 2007. “The Impact of League Tables and Ranking Systems on Higher Education Decision-Making.” Higher Education Management and Policy 19: 87–110.
  13. Jaffe, Adam, Manuel Trajtenberg, and Rebecca Henderson. 1993. “Geographic Localization of Knowledge Spillovers as Evidenced by Patent Citations.” Quarterly Journal of Economics 108: 577–598.
  14. Jensen, Richard, and Marie Thursby. 2001. “Proofs and Prototypes for Sale: The Licensing of University Inventions.” American Economic Review 91(1): 240–259.
  15. Lach, Saul, and Mark Schankerman. 2008. “Incentives and Invention in Universities.” The Rand Journal of Economics 39(2): 403–433.
  16. Lewin, Tamar. 2013. “Public Universities to Offer Free Online Classes for Credit.” The New York Times, January 23.
  17. Li, Lixu. 2004. “China’s Higher Education Reform 1998–2003: A Summary.” Asia Pacific Education Review 5(1): 14–22.
  18. Li, Mei, Sriram Shankar, and Kam Ki Tang. 2010. “Why Does the USA Dominate University League Tables?” Studies in Higher Education 36(8): 923–937.
  19. Li, Mei, Sriram Shankar, and Kam Ki Tang. 2011. “Catching Up with Harvard: Results from Regression Analysis of World Universities League Tables.” Cambridge Journal of Education 41(2): 121–137.
  20. Marginson, Simon. 2007. “Global University Rankings: Implications in General and for Australia.” Journal of Higher Education Policy and Management 29: 131–142.
  21. Marginson, Simon. 2012. “Global University Rankings: The Strategic Issues.” http://www.cshe.unimelb.edu.au/people/marginson_docs/Latin_American_conference_rankings_17-18May2012.pdf (accessed December 18, 2014).
  22. Marginson, Simon. 2013. “Nobels Aside, Local Unis Punching Above Weight.” The Australian, July 17.
  23. Marginson, Simon, and Marijk van der Wende. 2007. “To Rank or to be Ranked: The Impact of Global Rankings in Higher Education.” Journal of Studies in International Education 11(3–4): 306–329.
  24. Moffitt, Ursula. 2014. “Budget Cuts in Academic Institutions (Universities, Think Tanks, NGOs).” http://blog.inomics.com/en/budget-cuts-in-academic-institutions-universities-think-tanks-ngos/ (accessed December 18, 2014).
  25. Nelson, Richard. 1959. “The Simple Economics of Basic Scientific Research.” Journal of Political Economy 67: 297–306.
  26. Obst, Daniel, Mathias Kuder, and Clare Banks. 2011. Joint and Double Degree Programs in Global Context: Report on an International Survey. Institute of International Education.
  27. QS World University Rankings. 2014. http://www.topuniversities.com/university-rankings (accessed December 18, 2014).
  28. Robertson, Anne, and Bill Leumer. 2014. “Towards the Privatization of Public Education in America: Imposing a Corporate Culture.” Global Research. http://www.globalresearch.ca/towards-the-privatization-of-public-education-in-america/5364567 (accessed December 18, 2014).
  29. Sauermann, Henry, and Paula Stephan. 2010. “Twins or Strangers? Differences and Similarities Between Industrial and Academic Science.” NBER Working Paper Series 16113.
  30. Sorenson, Olav, and Lee Fleming. 2004. “Science and the Diffusion of Knowledge.” Research Policy 33: 1615–1634.
  31. Stephan, Paula. 1996. “The Economics of Science.” Journal of Economic Literature 34(3): 1199–1235.
  32. THE World University Rankings. 2014. http://www.timeshighereducation.co.uk/world-university-rankings/2013-14/world-ranking (accessed December 18, 2014).
  33. Thursby, Jerry, and Sukanya Kemp. 2002. “Growth and Productive Efficiency of University Intellectual Property Licensing.” Research Policy 31(1): 109–124.
  34. UK Council for International Student Affairs. 2014. “International Student Statistics: UK Higher Education.” http://www.ukcisa.org.uk/Info-for-universities-colleges--schools/Policy-research--statistics/Research--statistics/International-students-in-UK-HE/ (accessed December 18, 2014).
  35. UNESCO/Council of Europe. 2002. “Code of Good Practice in Provision of Transnational Education.” http://www.coe.int/t/dg4/highereducation/recognition/code%20of%20good%20practice_EN.asp (accessed December 18, 2014).
  36. Usher, Alex, and Massimo Savino. 2006. “A World of Difference: A Global Survey of University League Tables.” Canadian Education Report Series.
  37. Van Dyke, Nina. 2005. “Twenty Years of University Report Cards.” Higher Education in Europe 30(2): 103–125.
  38. Zucker, Lynne, Michael Darby, and Maximo Torero. 2002. “Labor Mobility from Academe to Commerce.” Journal of Labor Economics 20(3): 629–660.
  39. Zusman, Ami. 2005. “Challenges Facing Higher Education in the Twenty-First Century.” In American Higher Education in the Twenty-First Century, ed. P.G. Altbach, R.O. Berdahl, and P.J. Gumport, 115–160. Baltimore, MD: Johns Hopkins University Press.
