Introduction

Due to the rapid development of the knowledge economy and intensifying international competition, governments worldwide are increasingly recognizing the significance of human capital and technological innovation. As a result, there is a growing emphasis on attracting scientific researchers and enhancing the competitiveness of higher education systems. In many countries, universities are viewed as the main arena for cultivating scientific research personnel, producing advanced knowledge, and promoting social development; accordingly, many governments have paid increased attention to universities and provided greater support. To improve the quality of higher education, cultivate scientific researchers and innovators, and enhance the global competitiveness of universities, countries have launched national higher education initiatives, often referred to as university excellence programs. Comparative analysis of such initiatives across countries can offer inspiration and significant reference value for the development of higher education globally, particularly in late-developing countries.

Since the late 20th century, several countries have successively implemented national higher education initiatives primarily aimed at improving the global competitiveness and research performance of higher education. Germany launched its “Exzellenzinitiative” to promote excellent research universities and consortia, reshape the international reputation of its universities, and foster interdisciplinary and international cooperation (Fallon, 2008). France’s Initiatives d’Excellence aims to reshape the image of its higher education system, improve its standing in world university rankings, and build 5 to 10 world-class universities. Russia’s Project 5-100 aimed to place at least five universities in the top 100 of world university rankings by 2020 (Agasisti et al., 2018; Matveeva et al., 2019, 2021; Poldin et al., 2017). Japan has implemented a series of higher education initiatives, such as the “21st Century COE Program”, the “Global COE Program”, and the “Top Global University Project”, seeking to build excellent education and research bases, enhance research competitiveness, cultivate internationally leading talent, and develop 30 world-class universities (Yonezawa, 2003, 2019). South Korea has run “BK 21”, “BK 21 Plus”, and the “World-Class University Project” since 1999 and has recently proposed building world-class research universities and outstanding local graduate schools through improved research facilities and personnel training (Shin, 2009). India plans to build world-class universities in specific disciplines, construct 14 innovative universities, and enhance the global competitiveness of its universities through the Universities with Potential for Excellence project (Gourikeremath et al., 2015).

It is evident that these countries all aspire to attain world-class university status through their respective national higher education initiatives, with many projects explicitly emphasizing the goal of improving university rankings. World university rankings evaluate the main functions and goals of universities, so if these projects improve the quality of universities as a whole or in several crucial aspects, the improvement should be reflected in the rankings. Standards for university evaluation vary across academia; however, the world university rankings that have emerged over the last two decades provide a relatively objective, stable, and quantifiable approach to evaluating universities. Consequently, world university rankings are a useful tool for measuring the implementation effects of national higher education initiatives.

Policy context

Numerous countries have implemented national higher education initiatives in recent years. China was among the first to do so with the launch of the 211 Project in 1995, which aimed to improve the quality of roughly 100 universities for the 21st century and involved a total investment of approximately $3 billion during its first three phases. Subsequently, in May 1998, China launched the 985 Project, which identified 39 institutions of higher education to receive preferential support, with a total investment of ¥90 billion during its first three phases. In 2015, building on these two projects, China implemented the Program of First-class Universities and Disciplines of the World (also called the Double First-Class Program), aiming to develop 42 world-class universities and support 95 further universities in building world-class disciplines.

Other East Asian countries, such as South Korea and Japan, have adopted similar initiatives. South Korea launched the three-phase Brain Korea 21 program, involving 14 graduate schools and 38 universities in phase one, and 243 research projects and 325 teams from 74 universities in phase two, with the two phases receiving an investment of 3.5 trillion Korean Won. The third phase commenced in 2013 and targeted 542 scientific research institutions at 67 universities, with an annual investment budget of 2.7 billion Korean Won. The South Korean government subsequently launched the World Class University project in 2008, investing 830 billion Korean Won in 116 scientific research projects at 30 universities (Shin, 2009).

Japan, meanwhile, launched the 21st Century Center of Excellence (COE) program in 2002, investing approximately ¥170 billion in 274 projects at 91 universities (Yonezawa, 2003). In 2007, it launched the Global COE program, investing ¥148.9 billion in 140 projects at 41 universities. In 2011, Japan launched a doctoral education leadership program, investing ¥101.6 billion in 62 graduate school projects at 33 universities. In 2018, Japan launched the Doctoral Program for World-leading Innovative and Smart Education, also known as the WISE program, supporting an additional 15 key projects at 13 graduate schools. Furthermore, in 2014 Japan’s Top Global University Project identified 13 “Type A” (world-class) universities and 24 “Type B” (innovative) universities to receive a total investment of approximately ¥96 billion, with the goal of improving the internationalization of Japanese universities (Yonezawa, 2019).

Countries in other regions of the world have also gradually implemented similar national higher education initiatives based on their own circumstances and future targets. For example, Germany has implemented three phases of its Excellence Initiative, known as the “Exzellenzinitiative”, since 2006. During the first two rounds of selection, a total of 45 doctoral graduate schools and 43 Clusters of Excellence were established, and 11 “Universities of Excellence” were chosen for extra funding. In the third round of selection, 10 universities and a university alliance were chosen for additional funding. In the first two phases, the German government invested a total of €4.6 billion, and the expected investment for the third phase is €150 million annually until 2026 (Fallon, 2008). Meanwhile, France invested approximately €12.3 billion in 2010 to implement its Initiatives d’Excellence program, supporting 10 universities in climbing the international university rankings and building world-class universities (de Boer et al., 2017).

In addition, India’s Universities with Potential for Excellence scheme has supported 15 universities in becoming world-class since 2002, with each university receiving funding of up to ₹5 billion (Gourikeremath et al., 2015). Saudi Arabia launched the King Abdullah Project in 2007 and the AAFAQ Project in 2009 (Al-Mubaraki, 2011). Spain implemented the International Campus of Excellence program in 2009, with an investment of €53 million in the first year (de Boer et al., 2017). Denmark’s UNIK-Initiative invested 120 million Danish Krone in four projects involving three universities between 2009 and 2013 (Aagaard and de Boer, 2016). In order to have at least five universities among the top 100 in the world by 2020, Russia implemented Project 5-100 in 2012, which funded 21 universities with a special financial allocation of approximately ₽10 billion per year (Agasisti et al., 2018; Matveeva et al., 2019, 2021; Poldin et al., 2017). In 2019, Poland implemented the Excellence Initiative-Research University program, supporting 10 universities with a significant amount of funding. Of the 20 Polish universities entitled to participate in the program, the 10 shortlisted universities will receive a funding increase of 10% between 2020 and 2026, while the remaining 10 universities’ funding will increase by 2% (Kwiek, 2021).

Drawing upon the content of the aforementioned national policies and initiatives, as well as scholarly research in the field (Altbach and Knight, 2007; De Wit et al., 2015; Kehm, 2013), we define higher education initiatives as government-led efforts to enhance the quality and competitiveness of the higher education sector through comprehensive measures of preferential economic and policy support. These initiatives often involve the allocation of significant funding and resources to support the development of world-class universities, research programs, and human capital. By fostering collaboration, promoting internationalization, and encouraging cutting-edge research, higher education initiatives seek to improve the overall performance of a nation’s higher education institutions and contribute to the country’s economic growth, technological advancement, and global influence.

In general, the goals of national higher education initiatives are to improve the quality of universities, enhance their international ranking, and strengthen the competitiveness of the education system. Through measures such as performance-based funding, government policy support, and discipline development, these initiatives encourage colleges and universities to improve their personnel training, scientific research, and social utility (Guo et al., 2020). Some countries have invested significant sums of money in such initiatives, but little research has examined their impact on university rankings. What is the impact of national higher education initiatives on university rankings? To what extent do universities from different countries and regions experience rank improvements when participating in these initiatives? What are the underlying reasons for these differences? This study strives to answer these questions through quantitative research.

Literature review

The quality of a university is influenced by a multitude of factors. Alongside national-level factors, internal factors such as teaching faculty, infrastructure, funding, and management quality play a significant role (Aghion et al., 2010; Marconi and Ritzen, 2015). According to Pietrucha (2018), a country’s economic potential, research and development (R&D) expenditure, long-term political stability, and institutional variables such as governance capabilities affect university rankings. Furthermore, Li et al. (2011) identified a country’s income, population size, R&D expenditure, and language as factors influencing its performance in university rankings.

A crucial national-level factor affecting universities is the management model applied to higher education. While numerous countries strive to enhance the competitiveness of their universities and their performance in international rankings through national higher education initiatives, not all countries can adopt the appropriate management model. The decision to implement such initiatives is, to some degree, connected to a country’s political system and higher education management model. Academia typically categorizes higher education management models into two types, based on the relationship between the government and universities: the government-controlled type and the government-supervised type (Guo and Hao, 2020). The primary distinctions between the two approaches lie in the extent of government intervention in higher education and the strategies employed. In the government-controlled model, the government attempts to regulate all aspects of the higher education system through funding and other means, whereas the government-supervised model offers greater autonomy to universities (Zhu and Jiang, 2019). Scholars hold differing opinions on the advantages and disadvantages of these models. Some argue that the government-supervised approach is superior as it mitigates the risk of excessive government control over universities (Neave, 1995) and promotes the development and retention of elite universities (Yan and Min, 2017). In contrast, others contend that the government-controlled model has its merits, as it enables governments to allocate national resources towards initiatives aimed at cultivating world-class universities, resulting in a rapid increase in scientific research output and improved university rankings (Shin, 2009). Scholars have observed that continental European countries, along with Japan and South Korea, which have been influenced by the European model, predominantly adopt the government-controlled model, while Anglo-Saxon countries such as the US, the UK, and Canada utilize the government-supervised model (van Vught, 1991). Under the government-controlled model, governments are more inclined to implement national higher education initiatives as a means to exert influence on universities.

Measuring university quality is a complex task; however, the emergence of international university rankings has facilitated their evaluation (Hazelkorn, 2009). These rankings provide governments with a means to assess their higher education systems, allowing them to obtain a clearer understanding of their country’s higher education landscape by comparing their institutions with those worldwide. This enables them to develop major initiatives aimed at establishing world-class universities and advancing global higher education. The performance evaluation indicators of national higher education initiatives often align with those used by international university rankings, thus directly reflecting the initiatives’ effects on university rankings.

Several studies have assessed the implementation effects of national higher education initiatives from various perspectives, such as publications (Matveeva et al., 2019, 2021; Möller et al., 2016; Poldin et al., 2017; Shin, 2009), productivity and efficiency (Agasisti et al., 2018; Gawellek and Sunder, 2016), ability sorting of students, and perceptions of educational quality (Fischer and Kampkötter, 2017). However, these measurement standards primarily assess one or a few aspects of a university, with publications being the main focus. World university rankings, by contrast, consist of comprehensive indicators that balance teaching, research, and internationalization. There are numerous world university rankings, among which Times Higher Education, Quacquarelli Symonds (QS), US News, and the Academic Ranking of World Universities (ARWU) are regarded as the four most prominent. QS rankings, for instance, contain quantitative indicators of teaching and scientific research as well as subjective reputation survey scores that account for 50% of the final result. As world university rankings have been continuously refined in recent years, some studies have used them as an indicator to evaluate the implementation effects of national higher education initiatives (Rodionov et al., 2015). This evaluation approach is more comprehensive and not limited to publications. Our research, which selects world university rankings as the evaluation index, offers a more objective perspective compared to existing research. Currently, while an increasing number of countries are introducing their own national higher education policies, there is a lack of horizontal comparison of national higher education initiatives across countries. Our research contributes to the evaluation of this global higher education trend and its impact.

In light of this, our study employs world university rankings as the benchmark for assessing university quality. Utilizing a Staggered Difference-in-Differences Model and panel data, we analyze the impact of national higher education initiatives on university quality and investigate the heterogeneous influencing mechanisms of such initiatives, with a view to providing policy recommendations for the overall improvement of higher education around the world.

Research design and data

Methods

We classified countries as having implemented a national higher education initiative based on the presence of a formal, government-supported policy or initiative aimed at improving the global competitiveness of their higher education institutions. The treatment group comprises countries that have introduced such programs, while the control group consists of countries without such programs.

Considering the different implementation times across countries, we use a Staggered Difference-in-Differences Model, introduced by Beck et al. (2010) to assess the effect of a policy that different states implemented in different years. The first countries to implement national higher education initiatives were in Asia, but other countries around the world have gradually followed suit. A feasible way to explore the impact of these initiatives is to analyze changes in the rankings of countries’ universities before and after their implementation. Thus, this study used the Staggered Difference-in-Differences Model to construct the following regression model (1):

$$y_{it} = \beta _0 + \beta _1T + \beta _2X_i + \gamma _t + \eta _i + \varepsilon _{it}$$
(1)

where the dependent variable yit is the university’s position in the world university rankings or its rank improvement. T is a dummy variable denoting whether the country in which a university is located has implemented a national higher education initiative: if the country has implemented an initiative as of that year, then T = 1 and the university belongs to the DID treatment group; if it has not, then T = 0 and the university belongs to the DID control group. Further, Xi is a vector of control variables capturing national-level characteristics, including the urbanization rate (measuring the country’s economic and social development level), total population (measuring population size), Gross Domestic Product (GDP) (measuring the size of the economy), education investment (measuring the country’s investment in education), and the gross enrolment rate in higher education (measuring the development level of a country’s higher education system). γt is the time fixed effect; ηi is the country fixed effect; and εit is the residual term. This study is concerned with the estimation of β1, the coefficient of T, as it reflects the direct impact of national higher education initiatives on university rankings.
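For concreteness, a minimal sketch of how model (1) could be estimated as a two-way fixed effects regression in Python with statsmodels is given below. The dataframe df and all column names are illustrative assumptions rather than the authors’ actual variable names, as is the choice to cluster standard errors by country; the paper does not specify these implementation details.

```python
import statsmodels.formula.api as smf

# df: one row per university-year. All column names below are illustrative:
#   log_rank      log of the university's world ranking (dependent variable)
#   treated       1 if the university's country has an active initiative that year (T)
#   urbanization, population, gdp, edu_invest, enrolment   country-level controls (X)
#   country, year identifiers absorbed by the fixed effects
model = smf.ols(
    "log_rank ~ treated + urbanization + population + gdp"
    " + edu_invest + enrolment + C(country) + C(year)",
    data=df,
)
# Clustering standard errors by country is one reasonable choice (an assumption,
# not something stated in the paper).
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params["treated"])  # estimate of beta_1 in model (1)
```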

Because a one-place improvement carries different implications at different points of the rankings, there is less room for improvement for top-ranked universities, and advancing in the world ranking is more challenging for them. For example, it is easier for a university ranked 100th in the world to rise one place than for a university ranked 20th. In other words, it is easier to climb from 100 to 99, which requires an overall improvement of only about 1%, than from 20 to 19, which requires an improvement of roughly 5%. Therefore, this article uses the logarithm of universities’ world rankings in the dependent variables and uses the percentage change in the world ranking to measure the impact of implementing national higher education initiatives, thereby enhancing comparability between universities at different ranking levels.

It should be pointed out that a smaller rank number represents a better ranking (one being the best). Therefore, if the regression coefficient of an independent variable on ranking is positive, an increase in that variable is associated with a worse ranking (a higher rank number); if the coefficient is negative, an increase in that variable is associated with a better ranking (a lower rank number), which is the opposite of the usual interpretation of regression coefficients. In contrast, rank improvement is calculated as the previous year’s ranking minus the current ranking: a positive value means the ranking has improved (a lower rank number), and a negative value means it has worsened (a higher rank number). Thus, a positive coefficient of an independent variable on rank improvement indicates that an increase in that variable is associated with a better ranking, and a negative coefficient indicates that it is associated with a worse ranking.
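As a small, self-contained illustration of this variable construction (with made-up rank values and column names of our own choosing, not the authors’ data), the two dependent variables could be built as follows:

```python
import numpy as np
import pandas as pd

# Toy panel: two universities observed over three years; 'rank' values are invented.
df = pd.DataFrame({
    "university": ["A", "A", "A", "B", "B", "B"],
    "year":       [2018, 2019, 2020, 2018, 2019, 2020],
    "rank":       [100, 95, 90, 20, 21, 19],
}).sort_values(["university", "year"])

# Log of the ranking, so coefficients can be read as approximate percentage
# changes in rank rather than a fixed number of places.
df["log_rank"] = np.log(df["rank"])

# Rank improvement: previous year's rank minus current rank within a university.
# Positive values mean the university moved up (e.g. 100 -> 95 gives +5).
df["rank_improvement"] = df.groupby("university")["rank"].shift(1) - df["rank"]
print(df)
```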

To complement the staggered difference-in-differences approach, we also employed the event study method for comparison, particularly to determine whether the parallel trends assumption holds prior to the intervention. The event study method is an empirical technique for investigating the impact of a specific event on time-series data: it identifies changes in the data before and after the event occurs, thereby assessing the event’s impact on the variable of interest. Our choice of the time range of −5 to 16 years is based on the available data, as these periods represent the longest spans observed before and after the implementation of national higher education initiatives.
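A sketch of this event-study specification, under the same illustrative column names as in the earlier sketches (with initiative_year as an assumed column giving the launch year of a country’s initiative), might look like the following:

```python
import pandas as pd
import statsmodels.formula.api as smf

# 'initiative_year' is the year the university's country launched its national
# initiative (missing for never-treated control universities); all column names
# here are illustrative rather than the authors' actual variable names.
df["event_time"] = (df["year"] - df["initiative_year"]).clip(-5, 16)

# One dummy per period relative to implementation; never-treated rows get zeros.
# The period just before implementation (-1) serves as the omitted baseline.
leads_lags = pd.get_dummies(df["event_time"], prefix="rel", dtype=float)
leads_lags = leads_lags.drop(columns=[c for c in leads_lags if c.endswith("_-1.0")])
data = pd.concat([df, leads_lags], axis=1)

rhs = " + ".join(f"Q('{c}')" for c in leads_lags.columns)
event_model = smf.ols(
    f"rank_improvement ~ {rhs} + C(country) + C(year)", data=data
).fit()
# If the parallel trends assumption holds, the coefficients on the
# pre-implementation dummies should be statistically indistinguishable from zero.
```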

Data

Given the need for long-term evaluations of universities, we used ranking data from two long-standing rankings: ARWU, established in 2003, and QS, established in 2004. The panel data used in this study have good continuity, consisting of the top 300 universities in the QS rankings from 2004 to 2020 and the top 500 universities in the ARWU rankings from 2003 to 2020. Compared to the QS ranking, the ARWU ranking attaches greater importance to objectivity and scientific research and gives zero weight to reputation (Guo et al., 2020). The two rankings have different focuses and indicator systems, so they are well suited to checking the robustness of the research results.
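As an illustration of how such a panel might be assembled, the sketch below stacks one file per ranking per year. The file names, CSV format, and column layout are assumptions made for illustration, not a description of the actual data sources used in the study.

```python
import pandas as pd

# Illustrative assembly of the ranking panel; file names and column layouts
# ('university', 'country', 'rank') are assumptions, not the actual data format.
frames = []
for year in range(2003, 2021):                      # ARWU top 500, 2003-2020
    f = pd.read_csv(f"arwu_{year}.csv").head(500)
    f["year"], f["ranking"] = year, "ARWU"
    frames.append(f)
for year in range(2004, 2021):                      # QS top 300, 2004-2020
    f = pd.read_csv(f"qs_{year}.csv").head(300)
    f["year"], f["ranking"] = year, "QS"
    frames.append(f)
panel = pd.concat(frames, ignore_index=True)
```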

Tables 1 and 2 describe the variables in the QS and ARWU samples and show the differences between the treatment group (universities involved in a national higher education initiative) and the control group (universities not involved in one). The average rank number of universities in the treatment group was larger, that is, worse, than that of the control group in both rankings (by 7.5 places in the QS ranking and 27.2 places in the ARWU ranking). This indicates that universities in the sample that were involved in national higher education initiatives generally sat relatively low in the rankings. In terms of rank improvement, however, the average value of the treatment group was positive while that of the control group was negative, indicating that, overall, universities involved in national higher education initiatives improved their rankings while those that were not saw their rankings worsen. The preliminary finding is therefore that universities participating in national higher education initiatives had worse rank numbers in general but were clearly catching up. Of course, this conclusion needed to be further verified using a causal inference method.

Table 1 Descriptive statistics of QS ranking data.
Table 2 Descriptive statistics of ARWU ranking data.

In addition, the urbanization level, GDP, education investment, and gross enrollment rate in higher education of the treatment group were lower than those of the control group, while its population was larger. In the QS sample, the treatment group’s urbanization level was 8.9 percentage points lower, GDP was $3.2 trillion lower, education investment was $36 billion lower, gross enrollment rate in higher education was 4.92 percentage points lower, and population was 0.22 billion larger. In the ARWU sample, the treatment group’s urbanization level was 7.26 percentage points lower, GDP was $3.6 trillion lower, education investment was $40 billion lower, gross enrollment rate in higher education was 11.71 percentage points lower, and population was 0.22 billion larger. This indicates that, overall, countries that implemented national higher education initiatives had lower levels of socioeconomic development.
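Once the treatment indicator and country-level controls have been merged into the panel, the group comparison behind Tables 1 and 2 reduces to a comparison of group means, roughly as in the sketch below; the panel object and all column names continue the illustrative assumptions from the earlier sketches.

```python
# Mean comparison between treatment and control groups, as summarized in
# Tables 1 and 2; 'panel' and the column names are illustrative assumptions.
summary = panel.groupby("treated")[
    ["rank", "rank_improvement", "urbanization", "population",
     "gdp", "edu_invest", "enrolment"]
].mean()
print(summary.round(2))
```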

Results

Impact of higher education initiatives on university rankings

The regression results (Table 3) indicate that national higher education initiatives have a significant impact on universities’ rankings: universities participating in these initiatives show significant improvements relative to those that do not participate. Regression results 1 and 2 report the impact of national higher education initiatives on the QS ranking, and regression results 3 and 4 report their impact on the ARWU ranking. In the QS ranking, universities involved in national higher education initiatives ranked 53.04 places better and showed a rank improvement 17.65 places greater than those not involved in such initiatives. Similar results were found in the ARWU ranking, where the corresponding figures were 23.52 and 12.09 places.

Table 3 Impact of implementing national higher education initiatives on university rank and rank improvement.

This study confirms that national higher education initiatives have improved the ranking of participating universities. The preceding section on descriptive statistics revealed that universities engaged in national higher education initiatives generally ranked lower overall. This reflects the fact that late-developing countries were more likely to implement projects to catch up with universities in early-developing countries. The data also demonstrates that national higher education initiatives played a significant role in swiftly enhancing the international rankings of universities and elevating the quality of higher education in late-developing countries within a relatively short time frame.

It is important to note that the governments of various countries served as indispensable organizing entities, albeit with differing degrees of control. Control was more pronounced in China, Japan, and Russia, where their governments emphasized planning and design, the strategic importance of higher education, and the need to develop higher education through national initiatives. In contrast, countries such as the United States and the United Kingdom exhibited relatively low levels of control. Governments with a higher degree of control were more committed to influencing university development through the implementation of national higher education initiatives, while supervising governments tended to adopt a laissez-faire attitude. In conclusion, national governments, as the primary entities responsible for developing national higher education systems, exert varying degrees of control over universities, and the decision to implement a national higher education initiative is related to the extent of government control.

Differences were also observed between the results of the two rankings. Comparing regression results 1 and 2 with regression results 3 and 4, the degree of change in both rank and rank improvement was greater in the QS ranking than in the ARWU ranking. This disparity may be attributed to differences in the rankings’ indices. The QS ranking is based on six indicators: academic reputation, employer reputation, faculty-student ratio, citations per faculty, international faculty ratio, and international student ratio. The ARWU ranking, on the other hand, uses six criteria: alumni winning Nobel Prizes and Fields Medals, staff winning Nobel Prizes and Fields Medals, highly cited researchers, papers published in Nature and Science, papers indexed in major citation indices, and per capita academic performance. In the QS index, factors such as the faculty-student ratio, citations per faculty, proportion of international faculty, and proportion of international students account for 50% of the total, and all of these can be improved in the short term through increased investment. Conversely, the ARWU ranking uses the number of alumni who have won a Nobel Prize or a Fields Medal to measure the quality of education, and the number of staff who have won a Nobel Prize in the sciences or a Fields Medal, together with the number of the most highly cited researchers in their fields, to measure the quality of faculty. These criteria are considerably stricter and more objective than those employed by the QS ranking.

The QS ranking, drawing primarily on Elsevier’s Scopus and Thomson Reuters’ Web of Science databases, calculates faculty citation rates, while the ARWU ranking focuses on the number of papers published in Nature and Science, reflecting a higher publication standard. Additionally, the QS ranking assigns a substantial 50% weighting to reputation surveys, which are absent from ARWU and may be influenced by public relations activities, thereby affecting a university’s score. While national higher education initiatives significantly improve teaching and research quality, as is evident in the QS rankings, their short-term impact on producing top research outcomes is less pronounced, since exceptional results cannot be guaranteed by increased investment alone.

Regional differences in the effectiveness of higher education initiatives

As most of the world’s top universities are located in Europe and the Asia-Pacific region, this study further divided the sample into European universities and Asia-Pacific universities to examine the impact of national higher education initiatives on the rankings of universities in these distinct regions (Table 4). Regression result 1 represents the sample of European countries in the QS ranking, regression result 2 the sample of Asia-Pacific countries in the QS ranking, regression result 3 the sample of European countries in the ARWU ranking, and regression result 4 the sample of Asia-Pacific countries in the ARWU ranking.

Table 4 Impact of implementing national higher education initiatives on rank improvement (by region).

In the QS ranking, universities in European countries that implemented higher education initiatives improved their rankings by 12.04 places more than universities in countries that did not implement them. In contrast, universities in the Asia-Pacific region improved their rankings by as much as 61.72 places. The trend in the ARWU ranking was consistent with the QS ranking. Universities in European countries that implemented higher education initiatives improved their rankings by 11.78 places more than universities in countries that did not, while universities in the Asia-Pacific region improved their rankings by 94.04 places.

This finding indicates that there are significant regional differences in the impact of national higher education initiatives on university rankings, with a much greater impact on universities in the Asia-Pacific region than on universities in the European region. This may be attributed to the following two reasons:

First, the differences between universities within the Asia-Pacific region are far greater than those in Europe. The Asia-Pacific region includes North American countries, which host many of the world’s top universities, as well as late-developing countries such as China, which have a weaker higher education foundation but have made rapid progress in recent years. This substantial difference provides scope for the higher education of late-developing countries to advance and catch up. In Europe, which is the birthplace of modern universities, higher education development is relatively mature overall, so the rankings of universities in various countries are relatively stable, and internal differences are small. Consequently, the room for improvement is also relatively limited.

Second, countries in the Asia-Pacific region generally implemented higher education initiatives earlier than European countries. Asia-Pacific countries, such as China and South Korea, implemented higher education initiatives as early as the end of the 20th century. Among European countries, Germany started implementation as early as 2006, but most European countries began to implement programs around 2010. Due to differences in the implementation and development timeline of higher education initiatives, it is evident that such initiatives are more mature in Asia-Pacific countries, and their positive impact on universities in that region may be greater, resulting in more notable rank improvements. For example, the success of national higher education initiatives in East Asian countries like China and South Korea can be attributed to their strong government support and investment in research and development, while the European countries’ more decentralized higher education systems may result in varying levels of impact across the region.

Parallel trend assumption test

This study used the event study method to test the parallel trend assumption, plotting the dynamic rank improvement of universities before and after the implementation of a national higher education initiative. To do this, it was necessary to center the policy time and calculate the number of years since policy implementation, that is, the number of years between the year in which a university was observed and the year in which it became involved in a national higher education initiative. The time periods in this study range from −5 to 16 years, that is, 5 years before and 16 years after policy implementation. Figures 1 and 2 show that the estimated coefficients for each period before the implementation of a higher education initiative were not significantly greater than zero. This indicates that there was no obvious upward trend in the treatment group relative to the control group before the policy was implemented, so the Staggered Difference-in-Differences Model satisfies the parallel trend assumption and the policy effect estimated by this method is credible.

Fig. 1 Parallel trend graph test (of the impact of national higher education initiatives on the ARWU ranking of universities).

Fig. 2 Parallel trend graph test (of the impact of national higher education initiatives on the QS ranking of universities).
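Figures of this kind can be produced directly from the lead and lag coefficients of the event-study regression. The sketch below continues the illustrative event_model from the Methods section and is therefore an assumed implementation, not the authors’ actual plotting code.

```python
import matplotlib.pyplot as plt

# Pull the lead/lag coefficients and confidence intervals out of the event-study
# fit from the methods sketch; the omitted baseline period (-1) is fixed at zero.
coefs = event_model.params.filter(like="rel_")
ci = event_model.conf_int().filter(like="rel_", axis=0)
periods = [float(name.split("rel_")[1].rstrip("')")) for name in coefs.index]

plt.errorbar(
    periods,
    coefs.values,
    yerr=(coefs.values - ci[0].values, ci[1].values - coefs.values),
    fmt="o",
    capsize=3,
)
plt.axvline(-0.5, linestyle="--")  # implementation point
plt.axhline(0.0, linestyle=":")
plt.xlabel("Years since initiative implementation")
plt.ylabel("Estimated rank improvement relative to control group")
plt.show()
```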

Conclusions and policy recommendations

This study used data from the top 300 universities in the QS ranking from 2004 to 2020 and the top 500 universities in the ARWU ranking from 2003 to 2020 as its research sample, with the rank and rank improvement of universities in these rankings over the years as the dependent variables. By constructing a Staggered Difference-in-Differences Model, we examined the net effect of the core independent variable of national higher education initiatives on the dependent variables. We also analyzed heterogeneity caused by differences in rank and region. Based on our descriptive statistics and regression analysis results, we formulated the following conclusions and policy recommendations:

Firstly, universities engaged in national higher education initiatives demonstrated a lower overall standing, exhibiting an inferior average rank compared to institutions not participating in these programs. This coincides with lower socioeconomic development levels in the countries housing such universities, partially accounting for their decision to execute these initiatives. Their higher education infrastructure significantly trailed behind traditional education powerhouses, thus leading to lower university rankings. Unlike these nations, countries like the UK and the US, despite not implementing similar initiatives, boast multiple highly-ranked institutions due to their historical advantages, allowing them to sustain their ranking dominance over time. For nations with a weaker higher education base, the adoption of national initiatives is a critical method to narrow the gap with well-established countries. Moreover, higher education’s evolution is largely tied to the local socioeconomic milieu, explaining why countries implementing these initiatives generally demonstrate lower development levels than developed nations like the UK and the US. In their quest to match and potentially outpace the UK and the US, these nations must concentrate their resources on national initiatives, giving precedence to the advancement of specific universities.

Secondly, national higher education initiatives have positively influenced university rankings. The degree of ranking enhancement was significantly more pronounced among initiative-participating universities than non-participating ones, signifying a sustained “acceleration” in the catch-up process. For most countries, rapidly elevating university rankings within a brief timeframe presents a considerable challenge. Yet, the implementation of national higher education initiatives enables targeted support for a select few universities’ expeditious development, propelling them into the echelon of top-ranked global institutions. Such initiatives provide a realistic pathway to foster world-class universities and leverage higher education’s beneficial role in advancing national socioeconomic progress.

Thirdly, the influence of national higher education initiatives on ARWU rankings, accentuating higher education’s pinnacle achievements, was less pronounced than on QS rankings. The differential regression outcomes between these rankings underscore the fact that national higher education initiatives do not prominently catalyze top-tier higher education accomplishments. While various university rankings objectively encapsulate each university’s governance with a comprehensive index system, they falter in fully representing the teaching and scientific research aspects of universities. University rankings tend to bias towards quantitative indicators, leading many universities to focus more on short-term achievements that enhance their ranking scores. National higher education initiatives have inadvertently nurtured this suboptimal tendency to some extent. Higher education initiatives should account for the real-world landscape and not solely hinge on international university rankings. National governments ought to inspire universities to make long-term investments in high-cost areas that yield slow returns and are difficult to discern in short-term data, to authentically bolster the comprehensive prowess and international competitiveness of their higher education systems.

Fourthly, East Asian nations pioneered the implementation of national higher education initiatives, with the most pronounced outcomes observed in the Asia-Pacific region. For instance, the successful execution of China’s ‘Double First-Class’ initiative and South Korea’s ‘Brain Korea 21’ program has significantly bolstered the global competitiveness of their higher education institutions through targeted research and development investments, thereby cultivating world-class universities and academic disciplines. Conversely, the UK’s ‘Research Excellence Framework’ and the decentralized higher education system in the US emphasize institutional autonomy and competition, fostering innovation and excellence via a bottom-up approach. Richard Levin, a former Yale University president, posited that resource concentration is a prerequisite for success in higher education (Levin, 2010). East Asian countries generally employ a highly centralized higher education management model, characterized by frequent government intervention. The primary aim of implementing a national higher education initiative is to rapidly centralize limited resources to establish benchmark universities that can spur the overall advancement of higher education (Shin, 2009). This successful model can provide valuable insights for enhancing higher education in developing nations.

National higher education initiatives face challenges, including short-termism, evaluation difficulties, and inefficient resource usage (Song, 2018). Developed countries like the UK and the US offer useful lessons. Their elite universities emerged organically, standing out historically or through competition without special state intervention (Yan and Min, 2017). Government-supervised models, which promote autonomy and competition, sustain their competitiveness (Altbach, 2003; Yonezawa, 2003). Conversely, initiative-implementing nations often exert control over universities, limiting their autonomy. Such universities should be granted more independence in areas like personnel and finance to bolster their vitality, and the modernization of higher education governance systems is also crucial (Marginson, 2006). Policymakers should adopt a long-term outlook, prioritizing sustainable development, interdisciplinary collaboration, and efficient resource allocation, to push their top institutions and disciplines to world-class status.

Consequently, based on these findings, we offer the following recommendations: For countries with underdeveloped higher education systems, establishing national initiatives can accelerate improvement. But policymakers should be cautious of the potential overemphasis on short-term achievements that could improve ranking scores. They should strive to promote a balanced view that gives equal importance to the long-term quality of education and research. It is critical to provide universities with greater autonomy, especially in areas such as personnel management and finance, to foster dynamism and competitiveness. The government should also ensure an environment of dynamic selection and competition to encourage efficient resource allocation and interdisciplinary collaboration. There is much to learn from countries like the UK and the US, where elite universities have evolved organically over time. This organic evolution was fueled by a mix of government supervision, institutional autonomy, and competitive forces. Policymakers in countries planning to implement national higher education initiatives should aim to strike a similar balance. Autonomy for universities can foster a quality-driven competition, enhancing the vitality and effectiveness of their higher education systems.